- 🔑 Key Features
- 🌐 Experience Live Now
- 📦 How to Start
- 🐳 Docker Support - In Progress
- 🔗 How to Share
- 💬 Join Discord Community
- 🦙 Ollama Integration
- 🖥️ LM Studio Integration
- 🔬 DeepResearch Usage
- 🎮 Playground Usage
- 📁 Project Structure
- 🚀 Powered By
Semantic search and Retrieval-Augmented Generation with your own documents, directly in your browser. No server, and no data leaves your device. Upload PDFs, text, and images to build a private, searchable research knowledge base with intelligent document analysis.

Export and load full research sessions as `.timecapsule.json` files. Use Ollama, LM Studio, OpenAI, and Anthropic: privacy-first, cost-effective, and lightning fast.
- 🚀 Instant Access - No downloads, no setup - just click and create! Professional-grade research and creative coding in your browser.
- 🤖 AI-Powered - Ollama, LM Studio & API (OpenAI, Anthropic) integration for intelligent research analysis and creative code generation.
- 🔬 DeepResearch TimeCapsule - Comprehensive AI-powered research platform
- 🎮 Playground - Execute TimeCapsules with creative coding
- 🧠 Triple AI Mode: Ollama, LM Studio, and APIs (OpenAI, Anthropic)
- ⚙️ Custom Context Templates for personalized AI behavior
- 📱 Responsive Design that works on all devices
- 🔄 Seamless Navigation between research and creative modes
- 🔒 Privacy First with multiple local AI options
```bash
# 1. Install Ollama from https://ollama.ai
# 2. Pull the recommended model
ollama pull qwen3:0.6b

# 3. Start with CORS enabled (CRITICAL)
OLLAMA_ORIGINS="https://timecapsule.bubblspace.com,http://localhost:3000" ollama serve

# 4. Connect in TimeCapsule-SLM
```

💡 Pro Tip: For best results, use Ollama with the `qwen3:0.6b` model. LM Studio and APIs (OpenAI, Anthropic) are also fully supported.
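To confirm CORS is actually enabled, send a request with an `Origin` header and check for CORS headers in the response (a minimal sketch, assuming Ollama's default port 11434; use whichever origin you allowed above):

```bash
# Dump the response headers and discard the body; a correctly configured
# server should include an Access-Control-Allow-Origin header for an allowed origin.
curl -sD - -o /dev/null \
  -H "Origin: https://timecapsule.bubblspace.com" \
  http://localhost:11434/api/tags
```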
- Export TimeCapsule - Perfect for: research collaboration, session backup, knowledge sharing
- Load TimeCapsule - Perfect for: resuming sessions, importing shared research, team collaboration
- 🔄 Complete Session Restore - All topics, research results, and notes
- 📑 Multi-Tab Support - Research, Sources, and Notes tabs preserved
- 🤝 Team Collaboration - Share research across teams instantly
- 💾 Session Backup - Never lose your research progress
- 🌐 Cross-Platform - Works on any device with TimeCapsule-SLM
Complete platform-specific setup guides for macOS, Linux & Windows
**macOS**

```bash
# Method 1: Direct download (recommended)
# Download from https://ollama.ai and install the .app

# Method 2: Homebrew
brew install ollama

# Recommended model: fast and efficient
ollama pull qwen3:0.6b

# Kill any existing processes first
pkill -f ollama

# Start with CORS enabled (for testing)
OLLAMA_ORIGINS="*" ollama serve

# For production (recommended)
OLLAMA_ORIGINS="https://timecapsule.bubblspace.com,http://localhost:3000" ollama serve
```

❌ "Operation not permitted" Error:
```bash
# Method 1: Use sudo
sudo pkill -f ollama

# Method 2: Activity Monitor (GUI)
# 1. Open Activity Monitor (Applications → Utilities)
# 2. Search for "ollama"
# 3. Select the process and click "Force Quit"

# Method 3: Homebrew service (if installed via brew)
brew services stop ollama
brew services start ollama
```

❌ CORS Issues:
```bash
# 1. Stop Ollama completely
sudo pkill -f ollama

# 2. Wait 3 seconds
sleep 3

# 3. Start with CORS
OLLAMA_ORIGINS="*" ollama serve

# 4. Test connection
curl http://localhost:11434/api/tags
```

**Linux**

```bash
# Official installer (recommended)
curl -fsSL https://ollama.ai/install.sh | sh
# Or download directly from https://ollama.ai

# Recommended model
ollama pull qwen3:0.6b
```

For systemd-based Linux distributions (Ubuntu, Debian, CentOS, etc.):
```bash
# 1. Stop any running Ollama instances
ps aux | grep ollama
sudo pkill -f ollama

# 2. Edit the ollama service configuration
sudo systemctl edit ollama.service

# 3. Add the following environment variables:
[Service]
Environment="OLLAMA_HOST=0.0.0.0"
Environment="OLLAMA_ORIGINS=*"
# For production, use specific origins:
# Environment="OLLAMA_ORIGINS=https://timecapsule.bubblspace.com,http://localhost:3000"

# 4. Save and exit the editor (in nano: Ctrl+X, then Y, then Enter)
# 5. Reload systemd and restart the ollama service
sudo systemctl daemon-reload
sudo systemctl restart ollama.service

# 6. Enable auto-start on boot (optional)
sudo systemctl enable ollama.service

# 7. Verify the service is running
sudo systemctl status ollama.service

# 8. Test the connection
curl http://localhost:11434/api/tags
```
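Note that `systemctl edit` saves these lines to a drop-in override file rather than modifying the unit itself, so you can also inspect that file to confirm the change was recorded (a quick sanity check; the path below is the usual systemd default and may vary by distribution):

```bash
# Show the drop-in created by `systemctl edit` for the ollama service
cat /etc/systemd/system/ollama.service.d/override.conf
```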
Alternative: Manual start (if not using systemd):

```bash
# Stop any existing processes
sudo pkill -f ollama

# Start manually with CORS
OLLAMA_ORIGINS="*" ollama serve

# Or for production:
# OLLAMA_ORIGINS="https://timecapsule.bubblspace.com,http://localhost:3000" ollama serve
```

❌ Service Issues:
```bash
# Check service logs
sudo journalctl -u ollama.service -f

# Restart the service
sudo systemctl restart ollama.service

# Check service status
sudo systemctl status ollama.service
```

❌ Permission Issues:
```bash
# Stop with elevated permissions
sudo pkill -f ollama

# Check for lingering processes
ps aux | grep ollama

# Force kill if needed
sudo kill -9 $(pgrep ollama)
```

❌ CORS Configuration:
```bash
# Verify the environment variables are set
sudo systemctl show ollama.service | grep Environment

# If not set, re-edit the service:
sudo systemctl edit ollama.service
# Add the Environment variables as shown above, then:
sudo systemctl daemon-reload
sudo systemctl restart ollama.service
```

📚 Reference: Ollama CORS Configuration Guide
**Windows**

```
# Download from https://ollama.ai and install the .exe
# Or use a package manager (if available)

# Open Command Prompt or PowerShell and pull the recommended model
ollama pull qwen3:0.6b

# Method 1: Stop existing processes
taskkill /f /im ollama.exe

# Method 2: Start with CORS (Command Prompt)
set OLLAMA_ORIGINS=* && ollama serve

# Method 3: Start with CORS (PowerShell)
$env:OLLAMA_ORIGINS="*"; ollama serve

# For production (specific origins):
# $env:OLLAMA_ORIGINS="https://timecapsule.bubblspace.com,http://localhost:3000"; ollama serve
```

❌ Process Issues:
```
# Method 1: Task Manager (GUI)
# 1. Open Task Manager (Ctrl+Shift+Esc)
# 2. Look for "ollama.exe" in the Processes tab
# 3. Right-click and select "End task"

# Method 2: Command line
taskkill /f /im ollama.exe

# Method 3: Find by port
netstat -ano | findstr :11434
# Note the PID and kill it:
taskkill /f /pid <PID>
```

❌ CORS Issues:
```
# 1. Stop all ollama processes
taskkill /f /im ollama.exe

# 2. Wait 3 seconds
timeout /t 3

# 3. Start with CORS (PowerShell)
$env:OLLAMA_ORIGINS="*"; ollama serve

# 4. Test connection (if curl is available)
curl http://localhost:11434/api/tags
```

❌ Environment Variables:
```
# Set permanently (requires restart)
setx OLLAMA_ORIGINS "*"

# Set for current session only (PowerShell)
$env:OLLAMA_ORIGINS="*"
```

Verify your setup (all platforms):

```bash
# 1. Check if Ollama is running
curl http://localhost:11434/api/tags

# 2. List installed models
ollama list

# 3. Test model response
curl http://localhost:11434/api/generate -d '{
  "model": "qwen3:0.6b",
  "prompt": "Hello",
  "stream": false
}'
```

| Model | Size | Best For | Performance |
|---|---|---|---|
| qwen3:0.6b | ~400MB | Fast responses, testing | 🌟🌟🌟🌟⭐ |
| qwen2.5:3b | ~2GB | Balanced quality/speed | 🌟🌟🌟🌟🌟 |
| llama3.2:3b | ~2GB | General purpose | 🌟🌟🌟⭐⭐ |
```bash
# Pull additional models:
ollama pull qwen2.5:3b
ollama pull llama3.2:3b
```

If everything fails, complete reset:

```bash
# 1. Stop all Ollama processes
# macOS/Linux: sudo pkill -f ollama
# Windows: taskkill /f /im ollama.exe

# 2. Wait 5 seconds
sleep 5          # macOS/Linux
# timeout /t 5   # Windows

# 3. Start fresh with CORS
OLLAMA_ORIGINS="*" ollama serve
# Windows PowerShell: $env:OLLAMA_ORIGINS="*"; ollama serve

# 4. Pull a model (in a new terminal)
ollama pull qwen3:0.6b

# 5. Test setup
curl http://localhost:11434/api/tags
```

💡 Pro Tips:
- Linux Users: Use systemctl for persistent CORS configuration
- macOS Users: Use Activity Monitor for stubborn processes
- Windows Users: Use Task Manager or PowerShell for process management
- All Platforms: Use `OLLAMA_ORIGINS="*"` for testing, then restrict to specific domains
- Always verify your setup with `curl http://localhost:11434/api/tags`
⚠️ Note: Custom URLs only work in local builds, not the hosted version.
Easy Setup with `ollama-custom.js`:

- Edit Configuration File: Open `ollama-custom.js` in the root directory
- Add Your IPs: Replace the example IPs with your actual Ollama servers:
  ```js
  customIPs: [
    "http://10.0.1.69:11434",     // Your first Ollama server
    "http://192.168.1.200:11434", // Your second Ollama server
    "http://172.16.0.50:9434"     // Your third Ollama server
  ]
  ```
- Save and Refresh: Save the file and hard-refresh your browser (Ctrl+Shift+R)
- Use in App: Click "Connect Ollama" → accept the agreement → enter the custom URL in the popup
  - DeepResearch: Click "🦙 Connect Ollama"
  - Playground: Click "Connect AI" → Select Ollama
Examples: `http://192.168.1.100:11434`, `http://localhost:9434`, `https://ollama.mydomain.com`
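If a custom server will not connect, keep in mind that Ollama binds to 127.0.0.1 by default; for access from another machine it must be started with `OLLAMA_HOST=0.0.0.0`, as in the systemd configuration above. A minimal reachability check, using a placeholder address:

```bash
# On the Ollama server: listen on all interfaces with CORS open (testing only)
OLLAMA_HOST=0.0.0.0 OLLAMA_ORIGINS="*" ollama serve

# From the machine running TimeCapsule-SLM: confirm the endpoint answers
curl http://192.168.1.100:11434/api/tags
```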
🚨 KEY REQUIREMENT: You MUST enable CORS in LM Studio for TimeCapsule-SLM to connect.

Step 1: 📥 Download LM Studio from lmstudio.ai and install it
Step 2: 🤖 Download a Model - Search for models like Qwen3 0.6B
Step 3: 🚀 Start Local Server - Click "Start Server" in LM Studio (port 1234)
Step 4: ⚙️ Enable CORS - IMPORTANT: In LM Studio → Settings → Server → Enable "CORS"
Step 5: 🔄 Restart Server - Stop and restart the LM Studio server
Step 6: 🔗 Connect in TimeCapsule - Select "🖥️ LM Studio" from the AI provider dropdown
Step 7: 🔌 Click Connect - TimeCapsule will auto-detect your model
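Before connecting, you can verify that the LM Studio server is reachable; it exposes an OpenAI-compatible API, so listing the loaded models is a quick smoke test (assuming the default port 1234 from Step 3):

```bash
# Returns a JSON list of available models if the local server is running
curl http://localhost:1234/v1/models
```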
| Model | Size | Best For | Performance |
|---|---|---|---|
| Qwen3 0.6B | ~500MB | Research analysis, detailed coding responses | 🌟🌟🌟🌟🌟 |
- 📝 Add Topics - Define research areas with descriptions
- 🎯 Select Type - Choose from Academic, Market, Technology, Competitive, Trend, Literature
- 📊 Set Depth - Pick Overview, Detailed, or Comprehensive analysis
- 🤖 Generate Research - AI creates structured, professional reports and TimeCapsules
- 📤 Export Results - Download as `.timecapsule.json` files for sharing
- 🎓 Academic - Scholarly analysis with citations and methodology
- 📈 Market - Industry trends, competition, and market analysis
- 🔧 Technology - Technical deep-dives and implementation insights
- 🏢 Competitive - Competitor analysis and market positioning
- 📊 Trend - Emerging patterns and future predictions
- 📚 Literature - Comprehensive literature reviews and surveys
- `lib/agent/` - Canvas AI agents
- `lib/AIAssistant/` - AI backend integration
- `lib/Designs/` - Creative coding templates
- `lib/Pages/` - Component libraries
- `lib/Media/` - Images and assets
🧙‍♂️ Created with ❤️ by FireHacker

🌍 Made for researchers, creators, developers, and digital artists worldwide
- 🐦 Follow @thefirehacker
- 🎮 Discord Community

- Real-time help & discussions
- Direct technical assistance
- Bug reports & feature requests
- Complete guides & tutorials
🔧 Setup & Installation • 🤖 AI Integration • 🔬 Research Workflows • 📄 Document Management • 🎮 Creative Coding • 📤 TimeCapsule Sharing • 🔍 Troubleshooting
🎯 Help us reach 100 stars and unlock new features!

⚡ Just one click makes a huge difference!

🌟 Your star helps more developers discover TimeCapsule-SLM and supports continued development!

- 🌟 Star Gazers
- 🍴 Contributors
- 📊 Project Stats
💫 Thank you for being part of the TimeCapsule-SLM community! Together, we're revolutionizing AI-powered research and creativity. 💫
