🔌 MCP Server available — Give Claude persistent memory across every conversation. Install the opencontext MCP server and Claude can save, recall, and search your context automatically. Jump to setup →
Import chat history from any AI platform · Manage context with MCP · Export to Claude, ChatGPT, or Gemini
Features • Quick Start • Usage • Documentation • Contributing
opencontext is a tool for keeping your AI context portable and persistent. It lets you bring your full conversation history when switching AI assistants, and gives Claude a persistent memory through an MCP server.
- 🎯 Preferences - AI-analyzed communication style ready for Claude's settings
- 🧠 Memory - Factual context about you, extracted from your chat history
- 💬 Conversations - All chats as readable markdown files
- 🔌 MCP Server - Persistent memory across every Claude conversation
Switching AI assistants means losing all prior context — your communication style, background, and conversation history. opencontext solves that by:
- Importing your chat history from ChatGPT (Gemini support planned)
- Analyzing your patterns with local AI (Ollama) to generate preferences and memory
- Exporting to Claude, ChatGPT, or Gemini formats
- Providing an MCP server so Claude can save and recall context automatically
Result: Claude knows who you are, how you communicate, and can persist new context across every conversation.
- Node.js 25+ - Download
- ChatGPT Export - How to export
- Ollama (optional) - Install for AI analysis
# Clone the repository
git clone https://github.com/adityak74/opencontext.git
cd opencontext
# Install CLI/MCP dependencies
npm install
# Build the project
npm run build

The official image bundles the UI, REST API server, and MCP server into one container. Preferences and context are stored in the mounted volume — no browser storage is used.
# Pull and run — UI at http://localhost:3000
docker run -p 3000:3000 \
-v opencontext-data:/root/.opencontext \
adityakarnam/opencontext:latest

Ollama on your host machine is automatically reachable via host.docker.internal:11434. To use a different host:
docker run -p 3000:3000 \
-e OLLAMA_HOST=http://my-ollama-host:11434 \
-v opencontext-data:/root/.opencontext \
adityakarnam/opencontext:latest

Or build locally:
docker build -t adityakarnam/opencontext:latest .
docker run -p 3000:3000 -v opencontext-data:/root/.opencontext adityakarnam/opencontext:latest

What gets stored in the volume (/root/.opencontext/):
| File | Contents |
|---|---|
| `preferences.json` | Your structured preferences (form data) |
| `preferences.md` | Generated Claude preferences doc (ready to paste) |
| `memory.md` | Generated Claude memory doc (ready to paste) |
| `contexts.json` | MCP context store (saved memories) |
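For orientation, here is a hypothetical sketch of what preferences.json might hold. The field names are illustrative only; the actual schema is defined in the project's shared types and may differ:

```typescript
// Hypothetical shape of preferences.json — illustrative, not the real
// schema (see ui/src/types/preferences.ts for the actual types).
interface Preferences {
  tone: "casual" | "formal";
  explanationStyle: "concise" | "detailed" | "step-by-step";
  technicalDepth: "beginner" | "intermediate" | "expert";
  codeFormatting: string; // e.g. "markdown blocks"
}
```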
The UI talks to the backend server for all data — no localStorage. Start both:
# Terminal 1 — API server (port 3000)
npm install
npm run server
# Terminal 2 — UI dev server (port 5173, proxies /api → 3000)
cd ui && npm install && npm run dev

Open http://localhost:5173. Preferences are saved server-side to ~/.opencontext/.
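The /api proxy is plain Vite configuration; a minimal sketch of the setup described above (the project's actual ui/vite.config.ts may configure more than this):

```typescript
// Minimal sketch of the /api → :3000 proxy — check ui/vite.config.ts
// for the project's actual settings.
import { defineConfig } from "vite";

export default defineConfig({
  server: {
    proxy: {
      // Forward /api requests from the dev server (5173) to Express (3000)
      "/api": "http://localhost:3000",
    },
  },
});
```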
# Convert your ChatGPT export
npm start -- convert path/to/chatgpt-export.zip
# Output will be in ./claude-export/

That's it! 🎉 You now have files ready to paste into Claude.
claude-export/
├── 📋 preferences.md # Paste into Claude Settings → Preferences
├── 🧠 memory.md # Paste into Claude → Manage Memory
├── 👤 user-profile.md # Your ChatGPT account info
├── 📑 index.md # Searchable conversation list
└── 💬 conversations/ # Individual markdown files
├── 001-first-chat.md
├── 002-another-topic.md
└── ...
What it contains:
- How you prefer explanations (detailed, concise, step-by-step)
- Technical depth preferences
- Tone preferences (casual/formal)
- Code formatting preferences
Example:
I prefer clear and direct explanations that get straight to the point,
especially when the topic is technical. I'd like step-by-step instructions
and concrete code snippets. I'm comfortable with technical language and
enjoy seeing code formatted in Markdown blocks...
Usage: Copy → Paste into Claude Settings → Preferences field
What it contains:
- Work context - Your job, technologies, projects
- Personal context - Education, expertise, skills
- Top of mind - Current focus, recent topics
Example:
Work context:
User is a senior software engineer working with cloud infrastructure,
Docker, Kubernetes, and VPN solutions. Currently developing AI/ML
deployment systems...
Personal context:
Demonstrates expertise in networking, containerization, Python,
TypeScript, and CI/CD automation...
Top of mind:
Finalizing VPN architecture decisions and exploring AI service
deployment strategies...
Usage: Copy → Paste into Claude → Manage Memory
npm start -- convert <zip-file> [options]

| Option | Description | Default |
|---|---|---|
| `-o, --output <dir>` | Output directory | `./claude-export` |
| `--model <name>` | Ollama model to use | `gpt-oss:20b` |
| `--ollama-host <url>` | Ollama server URL | `http://localhost:11434` |
| `--skip-preferences` | Skip AI analysis (faster) | `false` |
| `--verbose` | Detailed logging | `false` |
| `-h, --help` | Show help | - |
# Use a remote Ollama server
npm start -- convert export.zip --ollama-host http://192.168.1.100:11434

# Use a different model
npm start -- convert export.zip --model qwen2.5:32b

# Skip AI analysis for a faster conversion
npm start -- convert export.zip --skip-preferences

# Write output to a custom directory
npm start -- convert export.zip -o ~/Documents/claude-import

# Combine options
npm start -- convert export.zip \
  -o ~/output \
  --ollama-host http://gpu-server:11434 \
  --model llama3:70b \
  --verbose

1. Go to ChatGPT Settings
2. Click your profile → Settings → Data Controls
3. Click Export data
4. Wait for the email (usually 1-4 hours)
5. Download the zip file
6. Use the zip with opencontext
1. Open preferences.md
2. Copy all text
3. Go to Claude Settings → Preferences
4. Paste into "What personal preferences should Claude consider?"
5. Save changes
1. Open memory.md
2. Copy all text
3. Click your profile → Manage Memory
4. Paste the content
5. Verify and save
Browse conversations/ folder and copy relevant chats into Claude for context.
Alternative: Create a Claude project and upload files as project knowledge.
| Model | Size | Speed | Quality | Recommended For |
|---|---|---|---|---|
| `gpt-oss:20b` | 13GB | Medium | High | Best overall results |
| `qwen2.5:32b` | 20GB | Medium | High | Technical content |
| `llama3:70b` | 40GB | Slow | Highest | Maximum accuracy |
| `llama3:8b` | 5GB | Fast | Good | Quick conversions |
graph LR
A[ChatGPT ZIP] --> B[Extract]
B --> C[Parse conversations.json]
C --> D[Normalize Format]
D --> E[Generate Markdown]
E --> F{AI Analysis?}
F -->|Yes| G[Ollama]
F -->|No| H[Basic Stats]
G --> I[preferences.md]
G --> J[memory.md]
H --> I
H --> J
E --> K[conversations/]
Two AI calls:
- Preferences - Analyzes communication patterns (HOW you talk)
- Memory - Extracts facts about you (WHO you are)
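Conceptually, the two calls look like the sketch below. The prompts and response handling are illustrative; the real implementation lives in src/analyzers/ollama-preferences.ts.

```typescript
// Sketch of the two-pass Ollama analysis — prompts are illustrative,
// not the exact ones used in src/analyzers/ollama-preferences.ts.
async function analyze(conversations: string, host = "http://localhost:11434") {
  const generate = async (prompt: string): Promise<string> => {
    const res = await fetch(`${host}/api/generate`, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ model: "gpt-oss:20b", prompt, stream: false }),
    });
    const body = await res.json();
    return body.response;
  };

  // Call 1 — HOW you talk: communication patterns → preferences.md
  const preferences = await generate(
    `Describe this user's communication style and preferences:\n\n${conversations}`
  );
  // Call 2 — WHO you are: factual context → memory.md
  const memory = await generate(
    `Extract factual work and personal context about this user:\n\n${conversations}`
  );
  return { preferences, memory };
}
```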
# Clone the repo
git clone https://github.com/adityak74/opencontext.git
cd opencontext
# Install dependencies
npm install
cd ui && npm install && cd ..
# Build TypeScript (CLI + server + MCP)
npm run build
# Run tests
npm test
# Run tests with coverage
npm run test:coverage

The UI talks to the backend server for all data — start both:
# Terminal 1 — API + MCP server (port 3000)
npm run server
# Terminal 2 — UI dev server (port 5173, proxies /api → 3000)
cd ui && npm run dev

Open http://localhost:5173. Preferences are saved server-side to ~/.opencontext/.
opencontext/
├── src/ # CLI + HTTP server + MCP server
│ ├── index.ts # CLI entry point (Commander.js)
│ ├── server.ts # Express HTTP server (UI + REST API)
│ ├── extractor.ts # ZIP extraction & temp management
│ ├── parsers/
│ │ ├── types.ts # TypeScript interfaces
│ │ ├── chatgpt.ts # Parse ChatGPT format
│ │ └── normalizer.ts # Normalize to common schema
│ ├── formatters/
│ │ └── markdown.ts # Markdown generation
│ ├── analyzers/
│ │ └── ollama-preferences.ts # AI-powered analysis (Ollama)
│ ├── utils/
│ │ └── file.ts # File I/O utilities
│ └── mcp/ # MCP server
│ ├── index.ts # Entry point (stdio transport)
│ ├── server.ts # Tool definitions
│ ├── store.ts # JSON-based context store
│ └── types.ts # Type definitions
│
└── ui/ # Web dashboard (React + Vite)
└── src/
├── components/
│ ├── Dashboard.tsx # Context overview + privacy toggle
│ ├── PreferencesEditor.tsx
│ ├── ContextViewer.tsx
│ ├── ConversionPipeline.tsx
│ └── VendorExport.tsx
├── store/context.tsx # React Context state
├── types/preferences.ts # Shared types
└── exporters/ # Claude, ChatGPT, Gemini exporters
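Conceptually, each exporter turns the shared preferences model into one vendor's paste-ready text. A hypothetical sketch of that contract (the actual code in ui/src/exporters/ may be organized differently):

```typescript
// Hypothetical exporter contract — illustrative only; see ui/src/exporters/
// for the real implementations.
type Preferences = Record<string, unknown>; // stand-in for the shared type

interface VendorExporter {
  vendor: "claude" | "chatgpt" | "gemini";
  // Render preferences into paste-ready text for that vendor's UI
  export(prefs: Preferences): string;
}

// Example: a trivial Claude exporter
const claudeExporter: VendorExporter = {
  vendor: "claude",
  export: (prefs) => `# Preferences\n${JSON.stringify(prefs, null, 2)}`,
};
```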
CLI / HTTP Server / MCP Server
- TypeScript 5.9 - Type-safe development
- Express 5 - HTTP server (REST API + static UI)
- Multer - Multipart file upload handling
- Commander.js - CLI framework
- @modelcontextprotocol/sdk - MCP server
- Ollama - Local LLM inference (optional)
- adm-zip - ZIP file handling
- chalk - Terminal colors
Web UI
- React 19 + Vite 7 - UI framework and build tool
- React Router 7 - Client-side routing
- Tailwind CSS v4 - Utility-first styling
- shadcn/ui - Component library (new-york style)
- Lucide React - Icons
The opencontext MCP server lets Claude remember things across conversations using a persistent local store.
| Tool | Trigger phrase |
|---|---|
| `save_context` | "remember this", "save this", "keep this in mind" |
| `recall_context` | "what did I say about...", "do you remember..." |
| `list_contexts` | "show my saved contexts" |
| `search_contexts` | Multi-keyword AND search |
| `update_context` | Update a context by ID |
| `delete_context` | Delete a context by ID |
Context is stored at ~/.opencontext/contexts.json. Set OPENCONTEXT_STORE_PATH to use a custom location.
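Entries in contexts.json are plain JSON. A hypothetical sketch of one entry's shape (the actual definitions live in src/mcp/types.ts):

```typescript
// Hypothetical shape of a saved context entry — illustrative; see
// src/mcp/types.ts for the real types.
interface ContextEntry {
  id: string;        // referenced by update_context / delete_context
  content: string;   // the remembered text
  tags: string[];    // filterable via ?tag= and searchable
  createdAt: string; // ISO 8601 timestamp
}
```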
# Build first
npm run build

Add to ~/.claude/settings.json:
{
"mcpServers": {
"opencontext": {
"command": "node",
"args": ["/path/to/opencontext/dist/mcp/index.js"]
}
}
}

Or, to run from source without a build step (via tsx):

{
"mcpServers": {
"opencontext": {
"command": "npx",
"args": ["tsx", "/path/to/opencontext/src/mcp/index.ts"]
}
}
}

The Dashboard page in the web UI shows this setup guide with copy buttons.
Docker Hub: hub.docker.com/r/adityakarnam/opencontext

The official image (adityakarnam/opencontext:latest) has been scanned and contains no critical vulnerabilities.
The official image is a single container that bundles the React UI, the REST API server, and the MCP server — all based on node:25-slim.
docker pull adityakarnam/opencontext:latest
docker run -p 3000:3000 \
-v opencontext-data:/root/.opencontext \
adityakarnam/opencontext:latest

Open http://localhost:3000.
docker compose up app

All data is stored in the mounted volume — no browser localStorage is used. The UI reads and writes directly to the server.
| File in `/root/.opencontext/` | Description |
|---|---|
| `preferences.json` | Your structured preferences (used by the UI form) |
| `preferences.md` | Claude preferences doc — paste into Claude Settings → Preferences |
| `memory.md` | Claude memory doc — paste into Claude → Manage Memory |
| `contexts.json` | MCP context entries saved by Claude |
| Variable | Default | Description |
|---|---|---|
| `PORT` | `3000` | HTTP server port |
| `OLLAMA_HOST` | `http://host.docker.internal:11434` | Ollama endpoint — automatically reaches Ollama running on your host machine |
| `OLLAMA_MODEL` | `gpt-oss:20b` | Default model for preference analysis |
| `OPENCONTEXT_STORE_PATH` | `/root/.opencontext/contexts.json` | MCP context store path (preferences files live in the same directory) |
host.docker.internal is a special DNS name that resolves to your host machine's IP from inside a Docker container. On Linux you may need --add-host=host.docker.internal:host-gateway.
The server exposes a REST API alongside the UI:
| Endpoint | Description |
|---|---|
| `GET /api/health` | Health check + active config |
| `GET /api/ollama/models` | List available Ollama models on the host |
| `POST /api/convert` | Upload a ChatGPT ZIP and run the full conversion pipeline |
| `GET /api/preferences` | Load saved preferences (used by the UI on mount) |
| `PUT /api/preferences` | Save preferences — writes preferences.json, preferences.md, memory.md |
| `GET /api/contexts` | List saved MCP contexts (optional `?tag=` filter) |
| `POST /api/contexts` | Save a new context |
| `GET /api/contexts/search?q=` | Search contexts |
| `GET /api/contexts/:id` | Get a context by ID |
| `PUT /api/contexts/:id` | Update a context |
| `DELETE /api/contexts/:id` | Delete a context |
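As a quick smoke test, a script can exercise the context endpoints. A sketch, where the request and response shapes are assumptions based on the table above:

```typescript
// Sketch: save and search a context via the REST API.
// Payload field names (content, tags) are assumptions, not a documented schema.
const base = "http://localhost:3000/api";

// Save a new context
const saved = await fetch(`${base}/contexts`, {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ content: "User prefers TypeScript", tags: ["dev"] }),
}).then((r) => r.json());

// Search for it
const results = await fetch(`${base}/contexts/search?q=TypeScript`).then((r) =>
  r.json()
);
console.log(saved, results);
```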
The same image can be used as an MCP server by overriding the command:
docker run -i --rm \
-v opencontext-data:/root/.opencontext \
adityakarnam/opencontext:latest \
node dist/mcp/index.js

Add to ~/.claude/settings.json:
{
"mcpServers": {
"opencontext": {
"command": "docker",
"args": ["run", "-i", "--rm", "-v", "opencontext-data:/root/.opencontext",
"adityakarnam/opencontext:latest", "node", "dist/mcp/index.js"]
}
}
}

Add to ~/Library/Application Support/Claude/claude_desktop_config.json, then restart Claude Desktop:
{
"mcpServers": {
"opencontext": {
"command": "docker",
"args": ["run", "-i", "--rm", "-v", "opencontext-data:/root/.opencontext",
"adityakarnam/opencontext:latest", "node", "dist/mcp/index.js"]
}
}
}

Once connected, Claude can save and recall context automatically. Just ask naturally:
Saving context — Claude uses save_context to store a summary with tags:
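For example, telling Claude "remember that I deploy with Docker Compose" might produce a tool call like this (argument names are illustrative):

```typescript
// Illustrative save_context invocation — argument names are assumptions.
const call = {
  name: "save_context",
  arguments: {
    content: "User deploys their services with Docker Compose.",
    tags: ["deployment", "docker"],
  },
};
```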
Searching context — Claude uses search_contexts to find previously saved entries:
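Asking "what did I say about deployment?" might trigger a call like this (again, argument names are illustrative):

```typescript
// Illustrative search_contexts invocation — multiple keywords are ANDed.
const call = {
  name: "search_contexts",
  arguments: { query: "deployment docker" },
};
```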
Docker Issues
Container exits immediately with ERR_MODULE_NOT_FOUND
Make sure you're using the latest image — an older build may have missing .js extensions in ESM imports:
docker pull adityakarnam/opencontext:latest
docker run -p 3000:3000 -v opencontext-data:/root/.opencontext adityakarnam/opencontext:latest

UI can't reach Ollama
Ollama must be running on your host machine. The container uses host.docker.internal:11434 by default. On Linux, add:
docker run -p 3000:3000 \
--add-host=host.docker.internal:host-gateway \
-v opencontext-data:/root/.opencontext \
adityakarnam/opencontext:latest

Ollama Issues
"Ollama is not running"
ollama serve"Model not found"
ollama list
ollama pull gpt-oss:20b

Connection refused
# Check Ollama
curl http://localhost:11434/api/tags
# Or use remote server
npm start -- convert export.zip --ollama-host http://your-server:11434

Export Issues
"conversations.json not found"
- Verify zip file is correct ChatGPT export
- Re-download if corrupted
"No valid conversations"
- Check you have conversations in ChatGPT
- Try exporting again
Performance Issues
Analysis is slow
- Use `--skip-preferences` for instant conversion
- Try a faster model: `--model llama3:8b`
- Use a remote GPU: `--ollama-host http://gpu-server:11434`
Out of memory
- Normal for 100+ conversations
- The tool handles it gracefully by truncating input (sketched below)
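The truncation is conceptually simple; a sketch of the idea (not the actual implementation):

```typescript
// Sketch of the truncation idea — the real limit and strategy in the
// converter may differ.
function truncateForAnalysis(text: string, maxChars = 100_000): string {
  if (text.length <= maxChars) return text;
  // Keep the most recent text: later conversations tend to reflect
  // current preferences and focus best.
  return text.slice(-maxChars);
}
```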
We welcome contributions! Here's how to get involved:
- 🐛 Report bugs - Open an issue
- 💡 Suggest features - Start a discussion
- 📖 Improve docs - Fix typos, add examples
- 🔧 Submit code - Fix bugs, add features
1. Fork the repository
2. Clone your fork: `git clone https://github.com/adityak74/opencontext.git`
3. Create a branch: `git checkout -b feature/amazing-feature`
4. Make changes and test thoroughly
5. Commit with a clear message: `git commit -m 'Add amazing feature'`
6. Push to your fork: `git push origin feature/amazing-feature`
7. Open a Pull Request
- Follow existing code style and conventions
- Add comments for complex logic
- Test with real ChatGPT exports
- Update documentation for new features
- Keep PRs focused (one feature/fix per PR)
Be respectful, inclusive, and collaborative. See CODE_OF_CONDUCT.md for details.
Typical conversion times:
| Export Size | Conversations | With AI Analysis | Without AI |
|---|---|---|---|
| Small | 1-20 | ~1 minute | ~5 seconds |
| Medium | 20-100 | ~5 minutes | ~10 seconds |
| Large | 100+ | ~15 minutes | ~30 seconds |
Factors affecting speed:
- Model size (gpt-oss:20b slower than llama3:8b)
- Hardware (GPU vs CPU)
- Ollama location (local vs remote)
- Number of conversations
✅ All data stays on your machine
- No external API calls (except your Ollama server)
- No telemetry or analytics
- No data collection
- Safe for sensitive conversations
Only when AI analysis is enabled:
- Conversation text → Ollama (your infrastructure)
- You control the data and infrastructure
Want pure local processing?
npm start -- convert export.zip --skip-preferences

- ChatGPT only - Currently supports only ChatGPT exports (Gemini planned)
- Manual Claude import - No direct API (paste manually)
- Image references - Images copied but not embedded
- Token limits - Very large exports may be truncated
- Google Gemini export support
- Perplexity export support
- Claude Projects API integration (when available)
- Conversation search and filtering
- Better image handling (Base64 embedding)
- Multi-language support
- Web UI dashboard with privacy toggle
- MCP server for persistent context
- Export to Claude, ChatGPT, and Gemini formats
- Automated tests
- Docker support (Web UI + MCP server)
- Direct Claude API integration
- Conversation merging/combining
- Custom prompt templates
- Browser extension
Vote on features: Discussions
Is this officially supported by Anthropic or OpenAI?
No, this is a community-built tool, not affiliated with either company.

Do I need Ollama?

No, but AI analysis produces much better preferences and memory. Use `--skip-preferences` to skip it.
Can I use Claude API instead of Ollama?
Not yet. Ollama is free and runs locally. We may add Claude API support later.

Will this work with 1000+ conversations?

Yes! Markdown conversion works regardless of size. AI analysis may truncate input but still produces useful results.

Can I edit the generated files?

Absolutely! They're just text files. Edit them before pasting into Claude.

Does this modify my ChatGPT account?

No, it only reads the export. Your ChatGPT data is unchanged.

Can I run this multiple times?

Yes, it will overwrite previous output. Use different `-o` directories to keep versions.
Is my data safe?
Yes! Everything runs locally, with no external APIs except your own Ollama server.

This project is licensed under the MIT License - see the LICENSE file for details.
MIT License
Copyright (c) 2026 opencontext contributors
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software...
- Anthropic - For building Claude, the AI that inspired this tool
- OpenAI - For ChatGPT and conversation export functionality
- Ollama - For making local LLM inference accessible
- Contributors - Everyone who has contributed code, ideas, and feedback
- Community - Users who test and provide valuable feedback
- Node.js - JavaScript runtime
- TypeScript - Type-safe development
- Commander.js - CLI framework
- Ollama - Local LLM inference
- adm-zip - ZIP file handling
- chalk - Terminal styling
- 🐛 Bug Reports: GitHub Issues
- 💡 Feature Requests: GitHub Discussions
- ❓ Questions: Open an issue with the `question` label
- 📧 Contact: Open an issue for direct contact
Made with ❤️ by the AI community
Save your context, your way — portable AI history across every platform


