
💊 TimeCapsule-SLM

Complete AI-powered research & creative platform with DeepResearch

Generate Novel Ideas • Build AI Content • Enable Collaborative Knowledge Discovery

🚀 Key Features

🧠 In-Browser RAG | 🔗 TimeCapsule Sharing | 📚 Knowledge Base | 🤖 Local LLM Support

🧠 In-Browser RAG

Semantic search and retrieval-augmented generation over your own documents, directly in your browser. No server required, and no data ever leaves your device.

📚 Knowledge Base Integration

Upload PDFs, text, and images to build a private, searchable research knowledge base with intelligent document analysis.

🔗 TimeCapsule Sharing

Export and load full research sessions as .timecapsule.json files. Instantly share, restore, or collaborate on research with anyone.

🤖 Local LLM Support

Use Ollama, LM Studio, OpenAI, or Anthropic: privacy-first, cost-effective, and fast.


🌐 ✨ Experience Live Now! ✨

🚀 Instant Access

No downloads, no setup: just click and create. Professional-grade research and creative coding in your browser.

🤖 AI-Powered

Ollama, LM Studio, and API (OpenAI, Anthropic) integration for intelligent research analysis and creative code generation.

💊 What You Get:

🔬 DeepResearch TimeCapsule - Comprehensive AI-powered research platform
🎮 Playground - Execute TimeCapsules with creative coding
🧠 Triple AI Mode - Ollama, LM Studio, and APIs (OpenAI, Anthropic)
⚙️ Custom Context Templates for personalized AI behavior
📱 Responsive Design that works on all devices
🔄 Seamless Navigation between research and creative modes
🔒 Privacy First with multiple local AI options


🚦 How to Start

🎯 Get Research-Ready in 5 Minutes

🌐 Option 1: Instant Online (Recommended)

  1. 🎯 Go to timecapsule.bubblspace.com
  2. 🔬 Click "DeepResearch"
  3. 🦙 Start Ollama (see setup below)
  4. 🤖 Pull a model: ollama pull qwen3:0.6b
  5. 📚 Add documents in Knowledge Manager
  6. 📝 Add research topics and click Generate

🐳 Option 2: Docker (Easy Deploy)

  1. 📁 Clone: git clone https://github.com/thefirehacker/TimeCapsule-SLM
  2. 📂 Navigate: cd TimeCapsule-SLM
  3. 🐳 Start: docker-compose --profile ai-enabled up -d
  4. 🌐 Access: http://localhost:3000
  5. 🦙 Pull model: docker exec timecapsule-ollama ollama pull qwen3:0.6b
  6. 🚀 Start researching! (A quick health check is sketched below.)
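
If something doesn't respond, it's worth a quick health check before digging further. A minimal sketch, assuming the compose file publishes the ports used in the steps above and the Ollama container is named timecapsule-ollama as shown:

# Services should be listed as "Up"
docker-compose --profile ai-enabled ps

# The web UI should answer on port 3000
curl -I http://localhost:3000

# Ollama inside its container should list the pulled model
docker exec timecapsule-ollama ollama list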

💻 Option 3: Local Development

  1. 📁 Clone: git clone https://github.com/thefirehacker/TimeCapsule-SLM
  2. 📂 Navigate: cd TimeCapsule-SLM
  3. 🌐 Open: DeepResearch.html in a browser
  4. 🦙 Set up Ollama (see the integration guide)
  5. 🚀 Start researching!

🦙 Quick Ollama Setup (Essential for local AI)

# 1. Install Ollama from https://ollama.ai

# 2. Pull the recommended model
ollama pull qwen3:0.6b

# 3. Start with CORS enabled (CRITICAL)
OLLAMA_ORIGINS="https://timecapsule.bubblspace.com/,http://localhost:3000" ollama serve

# 4. Connect in TimeCapsule-SLM

💡 Pro Tip: For best results, use Ollama with the qwen3:0.6b model. LM Studio and APIs (OpenAI, Anthropic) are also fully supported.


🔗 How to Share

🤝 Collaborate & Share Research Instantly

📤 Export TimeCapsule

  1. 🔬 Complete your research in DeepResearch
  2. 💾 Click the "Export TimeCapsule" button
  3. 📁 Save the .timecapsule.json file
  4. 🤝 Share it with colleagues or save it for later (a quick file check is sketched after the feature list below)

Perfect for: research collaboration, session backup, knowledge sharing

📥 Load TimeCapsule

  1. 📂 Click the "Load TimeCapsule" button
  2. 🗂️ Select a .timecapsule.json file
  3. ⚡ Instantly restore topics and research output
  4. 🔄 Continue where you left off

Perfect for: resuming sessions, importing shared research, team collaboration

🎯 TimeCapsule Features

  • 🔄 Complete Session Restore - All topics, research results, and notes
  • 📊 Multi-Tab Support - Research, Sources, and Notes tabs preserved
  • 🤝 Team Collaboration - Share research across teams instantly
  • 💾 Session Backup - Never lose your research progress
  • 🌐 Cross-Platform - Works on any device with TimeCapsule-SLM
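
Because a TimeCapsule is plain JSON, a shared file can be sanity-checked from a terminal before loading it. A minimal sketch (the filename is illustrative; any JSON validator works):

# Confirm the file parses as JSON (catches truncated downloads)
python3 -m json.tool my-research.timecapsule.json > /dev/null && echo "Valid JSON"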

🦙 Ollama Integration

🎯 Local AI Power + Privacy First

Complete platform-specific setup guides for macOS, Linux, and Windows

🚀 Why Ollama?

  • 🔒 Fully Private - All processing happens locally
  • 💰 Zero API Costs - No usage fees or limits
  • ⚡ Lightning Fast - Optimized GGUF models
  • 🎛️ Model Library - Easy model management
  • 🌐 REST API - Simple integration

🛠️ Setup Requirements

  • Ollama App - Download from ollama.ai
  • AI Model - Any compatible GGUF model
  • CORS Enabled - CRITICAL for web access
  • Port 11434 - Default Ollama server port

🍎 macOS Setup Guide

📥 Step 1: Install Ollama

# Method 1: Direct download (recommended)
# Download from https://ollama.ai and install the .app

# Method 2: Homebrew
brew install ollama

🤖 Step 2: Pull a Model

# Recommended: fast and efficient
ollama pull qwen3:0.6b

🔧 Step 3: Start with CORS (CRITICAL)

# Kill any existing processes first
pkill -f ollama

# Start with CORS enabled (for testing)
OLLAMA_ORIGINS="*" ollama serve

# For production (recommended)
OLLAMA_ORIGINS="https://timecapsule.bubblspace.com/,http://localhost:3000" ollama serve

🔧 macOS Troubleshooting

❌ "Operation not permitted" Error:

# Method 1: Use sudo
sudo pkill -f ollama

# Method 2: Activity Monitor (GUI)
# 1. Open Activity Monitor (Applications → Utilities)
# 2. Search for "ollama"
# 3. Select the process and click "Force Quit"

# Method 3: Homebrew service (if installed via brew)
brew services stop ollama
brew services start ollama

❌ CORS Issues:

# 1. Stop Ollama completely
sudo pkill -f ollama

# 2. Wait 3 seconds
sleep 3

# 3. Start with CORS
OLLAMA_ORIGINS="*" ollama serve

# 4. Test the connection
curl http://localhost:11434/api/tags

🐧 Linux Setup Guide

📥 Step 1: Install Ollama

# Official installer (recommended)
curl -fsSL https://ollama.ai/install.sh | sh

# Or download directly from https://ollama.ai

🤖 Step 2: Pull a Model

# Recommended model
ollama pull qwen3:0.6b

🔧 Step 3: Configure CORS with systemctl (CRITICAL)

For systemd-based Linux distributions (Ubuntu, Debian, CentOS, etc.):

# 1. Stop any running Ollama instances
ps aux | grep ollama
sudo pkill -f ollama

# 2. Edit the ollama service configuration
sudo systemctl edit ollama.service

# 3. Add the following environment variables:
[Service]
Environment="OLLAMA_HOST=0.0.0.0"
Environment="OLLAMA_ORIGINS=*"

# For production, use specific origins:
# Environment="OLLAMA_ORIGINS=https://timecapsule.bubblspace.com/,http://localhost:3000"

# 4. Save and exit the editor (Ctrl+X, then Y, then Enter)

# 5. Reload systemd and restart the ollama service
sudo systemctl daemon-reload
sudo systemctl restart ollama.service

# 6. Enable auto-start on boot (optional)
sudo systemctl enable ollama.service

# 7. Verify the service is running
sudo systemctl status ollama.service

# 8. Test the connection
curl http://localhost:11434/api/tags

Alternative: Manual start (if not using systemd):

# Stop any existing processes
sudo pkill -f ollama

# Start manually with CORS
OLLAMA_ORIGINS="*" ollama serve

# Or for production:
# OLLAMA_ORIGINS="https://timecapsule.bubblspace.com/,http://localhost:3000" ollama serve

🔧 Linux Troubleshooting

❌ Service Issues:

# Check service logs
sudo journalctl -u ollama.service -f

# Restart the service
sudo systemctl restart ollama.service

# Check service status
sudo systemctl status ollama.service

❌ Permission Issues:

# Stop with elevated permissions
sudo pkill -f ollama

# Check for lingering processes
ps aux | grep ollama

# Force kill if needed
sudo kill -9 $(pgrep ollama)

❌ CORS Configuration:

# Verify the environment variables are set
sudo systemctl show ollama.service | grep Environment

# If not set, re-edit the service:
sudo systemctl edit ollama.service
# Add the Environment variables as shown above
sudo systemctl daemon-reload
sudo systemctl restart ollama.service

📚 Reference: Ollama CORS Configuration Guide


🪟 Windows Setup Guide

📥 Step 1: Install Ollama

# Download from https://ollama.ai and install the .exe
# Or use a package manager (if available)

🤖 Step 2: Pull a Model

# Open Command Prompt or PowerShell
ollama pull qwen3:0.6b

🔧 Step 3: Start with CORS (CRITICAL)

# First, stop any existing processes
taskkill /f /im ollama.exe

# Start with CORS (Command Prompt)
set OLLAMA_ORIGINS=*
ollama serve

# Start with CORS (PowerShell)
$env:OLLAMA_ORIGINS="*"; ollama serve

# For production (specific origins):
# $env:OLLAMA_ORIGINS="https://timecapsule.bubblspace.com/,http://localhost:3000"; ollama serve

🔧 Windows Troubleshooting

❌ Process Issues:

# Method 1: Task Manager (GUI)
# 1. Open Task Manager (Ctrl+Shift+Esc)
# 2. Look for "ollama.exe" in the Processes tab
# 3. Right-click and select "End task"

# Method 2: Command line
taskkill /f /im ollama.exe

# Method 3: Find by port
netstat -ano | findstr :11434
# Note the PID and kill it:
taskkill /f /pid <PID>

❌ CORS Issues:

# 1. Stop all ollama processes
taskkill /f /im ollama.exe

# 2. Wait 3 seconds
timeout /t 3

# 3. Start with CORS (PowerShell)
$env:OLLAMA_ORIGINS="*"; ollama serve

# 4. Test the connection (if curl is available)
curl http://localhost:11434/api/tags

❌ Environment Variables:

# Set permanently (takes effect in new terminal sessions)
setx OLLAMA_ORIGINS "*"

# Set for the current session only (PowerShell)
$env:OLLAMA_ORIGINS="*"

🎯 Universal Commands & Verification

🧪 Test Your Setup

# 1. Check if Ollama is running
curl http://localhost:11434/api/tags

# 2. List installed models
ollama list

# 3. Test a model response
curl http://localhost:11434/api/generate -d '{
  "model": "qwen3:0.6b",
  "prompt": "Hello",
  "stream": false
}'
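
The commands above confirm the server is up, but not that CORS is configured for the app's origin. Since Ollama echoes an allowed origin back in its response headers, a quick check looks like this (a sketch; swap in the origin your instance is actually served from):

# 4. Verify CORS: the response headers should include Access-Control-Allow-Origin
curl -i -H "Origin: http://localhost:3000" http://localhost:11434/api/tags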

📦 Recommended Models

| Model       | Size   | Best For                | Performance |
|-------------|--------|-------------------------|-------------|
| qwen3:0.6b  | ~400MB | Fast responses, testing | 🌟🌟🌟🌟⭐ |
| qwen2.5:3b  | ~2GB   | Balanced quality/speed  | 🌟🌟🌟🌟🌟 |
| llama3.2:3b | ~2GB   | General purpose         | 🌟🌟🌟⭐⭐ |

# Pull additional models:
ollama pull qwen2.5:3b
ollama pull llama3.2:3b
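
To compare models interactively before settling on one in the app, ollama run accepts a one-off prompt from any terminal:

# One-off prompt to gauge response quality and speed
ollama run qwen2.5:3b "Summarize retrieval-augmented generation in two sentences."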

🆘 Universal Reset (All Platforms)

If everything fails, do a complete reset:

# 1. Stop all Ollama processes
# macOS/Linux: sudo pkill -f ollama
# Windows: taskkill /f /im ollama.exe

# 2. Wait 5 seconds
sleep 5  # macOS/Linux
# timeout /t 5  # Windows

# 3. Start fresh with CORS
OLLAMA_ORIGINS="*" ollama serve
# Windows PowerShell: $env:OLLAMA_ORIGINS="*"; ollama serve

# 4. Pull a model (in a new terminal)
ollama pull qwen3:0.6b

# 5. Test the setup
curl http://localhost:11434/api/tags

💡 Pro Tips:

  • Linux Users: Use systemctl for persistent CORS configuration
  • macOS Users: Use Activity Monitor for stubborn processes
  • Windows Users: Use Task Manager or PowerShell for process management
  • All Platforms: Use OLLAMA_ORIGINS="*" for testing, then restrict to specific domains
  • Always verify your setup with: curl http://localhost:11434/api/tags

🌐 Custom Ollama URL (Local Builds Only)

⚠️ Note: Custom URLs only work in local builds, not in the hosted version.

Easy setup with ollama-custom.js:

  1. Edit the Configuration File: Open ollama-custom.js in the root directory
  2. Add Your IPs: Replace the example IPs with your actual Ollama servers
    customIPs: [
      "http://10.0.1.69:11434",      // Your first Ollama server
      "http://192.168.1.200:11434",  // Your second Ollama server
      "http://172.16.0.50:9434"      // Your third Ollama server
    ]
  3. Save and Refresh: Save the file and hard-refresh your browser (Ctrl+Shift+R)

Use in App: Click "Connect Ollama" → accept the agreement → enter the custom URL in the popup

  • DeepResearch: Click "🦙 Connect Ollama"
  • Playground: Click "Connect AI" → Select Ollama

Examples: http://192.168.1.100:11434, http://localhost:9434, https://ollama.mydomain.com
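
Before pointing the app at a custom URL, it helps to confirm the server is reachable and has CORS configured. A sketch using one of the example addresses above (substitute your own):

# Reachability: should return a JSON list of models
curl http://192.168.1.100:11434/api/tags

# CORS: the response headers should include Access-Control-Allow-Origin
# for the origin the app is served from
curl -i -H "Origin: http://localhost:3000" http://192.168.1.100:11434/api/tags

Remember that a remote Ollama server also needs OLLAMA_ORIGINS set (and OLLAMA_HOST=0.0.0.0 to listen beyond localhost), as covered in the platform guides above.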


🏠 LM Studio Integration

🎯 Local AI Power + No API Costs

🚀 Why LM Studio?

  • 🔒 Fully Private - All processing happens locally
  • 💰 Zero API Costs - No usage fees or limits
  • ⚡ Fast Response - Direct local connection
  • 🎛️ Model Control - Use any compatible model
  • 🌐 OpenAI Compatible - Standard API format

🛠️ Setup Requirements

  • LM Studio App - Download from lmstudio.ai
  • Compatible Model - Any chat-capable model
  • Local Server - LM Studio server on port 1234
  • CORS Enabled - Allow cross-origin requests

📋 Quick Setup Guide

🚨 KEY REQUIREMENT: You MUST enable CORS in LM Studio for TimeCapsule-SLM to connect.

Step 1: 📥 Download LM Studio from lmstudio.ai and install it
Step 2: 🤖 Download a Model - Search for models like Qwen3 0.6B
Step 3: 🚀 Start Local Server - Click "Start Server" in LM Studio (port 1234)
Step 4: ⚙️ Enable CORS - IMPORTANT: In LM Studio → Settings → Server → Enable "CORS"
Step 5: 🔄 Restart Server - Stop and restart the LM Studio server
Step 6: 💊 Connect in TimeCapsule - Select "🏠 LM Studio" from the AI provider dropdown
Step 7: 🔌 Click Connect - TimeCapsule will auto-detect your model (a terminal check is sketched below)
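
Since LM Studio exposes the standard OpenAI-compatible API, the server can be verified from a terminal before connecting. A minimal sketch, assuming the default port 1234; the model name must match whatever LM Studio shows for your loaded model:

# List the model(s) the server is exposing
curl http://localhost:1234/v1/models

# Minimal chat completion against the local server ("qwen3-0.6b" is illustrative)
curl http://localhost:1234/v1/chat/completions -H "Content-Type: application/json" -d '{
  "model": "qwen3-0.6b",
  "messages": [{"role": "user", "content": "Hello"}]
}'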

🎯 Recommended Models

| Model      | Size   | Best For                                     | Performance |
|------------|--------|----------------------------------------------|-------------|
| Qwen3 0.6B | ~500MB | Research analysis, detailed coding responses | 🌟🌟🌟🌟🌟 |

🔬 DeepResearch TimeCapsule Usage

🧠 AI-Powered Research Generation

📊 Research Workflow

  1. 📝 Add Topics - Define research areas with descriptions
  2. 🎯 Select Type - Choose from Academic, Market, Technology, Competitive, Trend, or Literature
  3. 📏 Set Depth - Pick Overview, Detailed, or Comprehensive analysis
  4. 🤖 Generate Research - AI creates structured, professional reports and TimeCapsules
  5. 📤 Export Results - Download as .timecapsule.json files for sharing

🎯 Research Types Explained

  • 📚 Academic - Scholarly analysis with citations and methodology
  • 📈 Market - Industry trends, competition, and market analysis
  • 🔧 Technology - Technical deep-dives and implementation insights
  • 🏢 Competitive - Competitor analysis and market positioning
  • 📊 Trend - Emerging patterns and future predictions
  • 📖 Literature - Comprehensive literature reviews and surveys

🎮 Playground Usage

🎨 Creative Coding with AI Assistance


📄 Project Structure

📁 Clean, Organized Architecture

💊 Core Application Files

| File              | Description                        |
|-------------------|------------------------------------|
| DeepResearch.html | DeepResearch TimeCapsule studio    |
| Playground.html   | Playground creative AI environment |
| Canvas.html       | Creative coding environment        |
| index.html        | Main platform homepage             |
| Script01.js       | Utility functions and helpers      |

📋 Documentation & Assets

| File      | Description                 |
|-----------|-----------------------------|
| README.md | This comprehensive guide    |
| CREDITS   | Algorithm attributions      |
| LICENSE   | Apache 2.0 License          |
| lib/      | Assets and design templates |

📂 Library Structure

  • lib/agent/ - Canvas AI agents
  • lib/AIAssistant/ - AI backend integration
  • lib/Designs/ - Creative coding templates
  • lib/Pages/ - Component libraries
  • lib/Media/ - Images and assets

🚀 Powered By

Proudly Supported by BubblSpace

Building the future of AI-powered creativity and collaboration

Visit BubblSpace



🧙‍♂️ Created with ❤️ by FireHacker

🌐 Made for researchers, creators, developers, and digital artists worldwide


🐦 Follow @thefirehacker


⭐ STAR THIS PROJECT ⭐

Help us reach 100 stars!


🎮 Discord Community


💬 Support & Community

🎧 Discord Community

Real-time help & discussions
Connect with fellow researchers!

💬 discord.gg/ExQ8fCv9

📧 Email Support

Direct technical assistance
Professional support team

📧 support@bubblspace.com

🐛 Report Issues

Bug reports & feature requests
Help improve the platform

🔧 GitHub Issues

📚 Documentation

Complete guides & tutorials
Everything you need to know

📖 View Docs • 🐳 Docker

🆘 Get Help With:

🔧 Setup & Installation • 🤖 AI Integration • 🔬 Research Workflows • 📚 Document Management • 🎮 Creative Coding • 🔄 TimeCapsule Sharing • 🐛 Troubleshooting


โญ LOVE THIS PROJECT? GIVE IT A STAR! โญ

GitHub stars

๐ŸŽฏ Help us reach 100 stars and unlock new features!

Star on GitHub

โšก Just one click makes a huge difference!

๐Ÿ™ Your star helps more developers discover TimeCapsule-SLM and supports continued development!

๐Ÿค Join Our Growing Community

๐ŸŒŸ Star Gazers
Join our amazing community of developers

GitHub stars

View Stargazers

๐Ÿด Contributors
Be part of the development journey

Contributors

Contribute

๐Ÿ“ˆ Project Stats
Growing stronger every day

GitHub Activity

Issues & PRs


๐Ÿ’ซ Thank you for being part of the TimeCapsule-SLM community! Together, we're revolutionizing AI-powered research and creativity. ๐Ÿ’ซ
