
🔌 MCP Server available — Give Claude persistent memory across every conversation. Install the opencontext MCP server and Claude can save, recall, and search your context automatically. Jump to setup →

opencontext logo

opencontext

Save your context, your way

Import chat history from any AI platform · Manage context with MCP · Export to Claude, ChatGPT, or Gemini


Features · Quick Start · Usage · Documentation · Contributing


opencontext UI preview


📖 Overview

opencontext is a tool for keeping your AI context portable and persistent. It lets you bring your full conversation history with you when switching AI assistants, and gives Claude persistent memory through an MCP server.

  • 🎯 Preferences - AI-analyzed communication style ready for Claude's settings
  • 🧠 Memory - Factual context about you, extracted from your chat history
  • 💬 Conversations - All chats as readable markdown files
  • 🔌 MCP Server - Persistent memory across every Claude conversation

Why Use This?

Switching AI assistants means losing all prior context — your communication style, background, and conversation history. opencontext solves that by:

  1. Importing your chat history from ChatGPT (Gemini support planned)
  2. Analyzing your patterns with local AI (Ollama) to generate preferences and memory
  3. Exporting to Claude, ChatGPT, or Gemini formats
  4. Providing an MCP server so Claude can save and recall context automatically

Result: Claude knows who you are, how you communicate, and can persist new context across every conversation.


✨ Features

🤖 AI-Powered Analysis

  • Generates communication preferences
  • Extracts work context and expertise
  • Identifies current topics and focus
  • Supports multiple LLM models

🔒 Privacy First

  • 100% local processing
  • No external API calls
  • Your data never leaves your machine
  • Dashboard privacy toggle blurs PII

📦 Complete Migration

  • Parses complex conversation trees
  • Handles images and attachments
  • Preserves all metadata
  • Export to Claude, ChatGPT, or Gemini

🔌 MCP Server

  • Persistent context across Claude chats
  • Save, recall, search, and tag memories
  • Works with Claude Code & Claude Desktop
  • Local JSON store at ~/.opencontext/

🚀 Quick Start

Prerequisites

Installation

# Clone the repository
git clone https://github.com/adityak74/opencontext.git
cd opencontext

# Install CLI/MCP dependencies
npm install

# Build the project
npm run build

Option A: Docker (Recommended)

The official image bundles the UI, REST API server, and MCP server into one container. Preferences and context are stored in the mounted volume — no browser storage used.

# Pull and run — UI at http://localhost:3000
docker run -p 3000:3000 \
  -v opencontext-data:/root/.opencontext \
  adityakarnam/opencontext:latest

Ollama on your host machine is automatically reachable via host.docker.internal:11434. To use a different host:

docker run -p 3000:3000 \
  -e OLLAMA_HOST=http://my-ollama-host:11434 \
  -v opencontext-data:/root/.opencontext \
  adityakarnam/opencontext:latest

Or build locally:

docker build -t adityakarnam/opencontext:latest .
docker run -p 3000:3000 -v opencontext-data:/root/.opencontext adityakarnam/opencontext:latest

What gets stored in the volume (/root/.opencontext/):

File              Contents
preferences.json  Your structured preferences (form data)
preferences.md    Generated Claude preferences doc (ready to paste)
memory.md         Generated Claude memory doc (ready to paste)
contexts.json     MCP context store (saved memories)

Option B: Local Development (UI + Server)

The UI talks to the backend server for all data — no localStorage. Start both:

# Terminal 1 — API server (port 3000)
npm install
npm run server

# Terminal 2 — UI dev server (port 5173, proxies /api → 3000)
cd ui && npm install && npm run dev

Open http://localhost:5173. Preferences are saved server-side to ~/.opencontext/.

Option C: CLI

# Convert your ChatGPT export
npm start -- convert path/to/chatgpt-export.zip

# Output will be in ./claude-export/

That's it! 🎉 You now have files ready to paste into Claude.


📂 What Gets Generated

claude-export/
├── 📋 preferences.md       # Paste into Claude Settings → Preferences
├── 🧠 memory.md            # Paste into Claude → Manage Memory
├── 👤 user-profile.md      # Your ChatGPT account info
├── 📑 index.md             # Searchable conversation list
└── 💬 conversations/       # Individual markdown files
    ├── 001-first-chat.md
    ├── 002-another-topic.md
    └── ...

preferences.md - Communication Style

What it contains:

  • How you prefer explanations (detailed, concise, step-by-step)
  • Technical depth preferences
  • Tone preferences (casual/formal)
  • Code formatting preferences

Example:

I prefer clear and direct explanations that get straight to the point,
especially when the topic is technical. I'd like step-by-step instructions
and concrete code snippets. I'm comfortable with technical language and
enjoy seeing code formatted in Markdown blocks...

Usage: Copy → Paste into Claude Settings → Preferences field

memory.md - About You

What it contains:

  • Work context - Your job, technologies, projects
  • Personal context - Education, expertise, skills
  • Top of mind - Current focus, recent topics

Example:

Work context:
User is a senior software engineer working with cloud infrastructure,
Docker, Kubernetes, and VPN solutions. Currently developing AI/ML
deployment systems...

Personal context:
Demonstrates expertise in networking, containerization, Python,
TypeScript, and CI/CD automation...

Top of mind:
Finalizing VPN architecture decisions and exploring AI service
deployment strategies...

Usage: Copy → Paste into Claude → Manage Memory


💻 Usage

CLI Commands

npm start -- convert <zip-file> [options]

Options

Option               Description                Default
-o, --output <dir>   Output directory           ./claude-export
--model <name>       Ollama model to use        gpt-oss:20b
--ollama-host <url>  Ollama server URL          http://localhost:11434
--skip-preferences   Skip AI analysis (faster)  false
--verbose            Detailed logging           false
-h, --help           Show help                  -

Examples

Remote Ollama Server

npm start -- convert export.zip --ollama-host http://192.168.1.100:11434

Different AI Model

npm start -- convert export.zip --model qwen2.5:32b

Fast Mode (No AI)

npm start -- convert export.zip --skip-preferences

Custom Output Directory

npm start -- convert export.zip -o ~/Documents/claude-import

All Options Combined

npm start -- convert export.zip \
  -o ~/output \
  --ollama-host http://gpu-server:11434 \
  --model llama3:70b \
  --verbose

📚 Documentation

Getting Your ChatGPT Export

  1. Open ChatGPT
  2. Click your profile → Settings → Data Controls
  3. Click Export data
  4. Wait for the email (usually 1-4 hours)
  5. Download the zip file
  6. Use the zip with opencontext

Migrating to Claude

Step 1: Set Preferences

  1. Open preferences.md
  2. Copy all text
  3. Go to Claude Settings → Preferences
  4. Paste into "What personal preferences should Claude consider?"
  5. Save changes

Step 2: Add Memory

  1. Open memory.md
  2. Copy all text
  3. Click profile → Manage Memory
  4. Paste content
  5. Verify and save

Step 3: Use Conversations (Optional)

Browse conversations/ folder and copy relevant chats into Claude for context.

Alternative: Create a Claude project and upload files as project knowledge.

Supported Ollama Models

Model        Size  Speed   Quality  Recommended For
gpt-oss:20b  13GB  Medium  High     Best overall results
qwen2.5:32b  20GB  Medium  High     Technical content
llama3:70b   40GB  Slow    Highest  Maximum accuracy
llama3:8b    5GB   Fast    Good     Quick conversions

How It Works

graph LR
    A[ChatGPT ZIP] --> B[Extract]
    B --> C[Parse conversations.json]
    C --> D[Normalize Format]
    D --> E[Generate Markdown]
    E --> F{AI Analysis?}
    F -->|Yes| G[Ollama]
    F -->|No| H[Basic Stats]
    G --> I[preferences.md]
    G --> J[memory.md]
    H --> I
    H --> J
    E --> K[conversations/]

Two AI calls (sketched below):

  1. Preferences - Analyzes communication patterns (HOW you talk)
  2. Memory - Extracts facts about you (WHO you are)
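
A minimal sketch of what these two calls could look like, using the official ollama npm client. The analyze() function name and the prompt wording are illustrative assumptions, not the project's actual code:

// Hypothetical sketch of the two analysis calls (ollama npm client).
// The function name and prompts are illustrative only.
import ollama from 'ollama'

async function analyze(transcript: string, model = 'gpt-oss:20b') {
  // Call 1: preferences (how the user communicates)
  const prefs = await ollama.chat({
    model,
    messages: [{ role: 'user', content: `Describe this user's communication style:\n${transcript}` }],
  })

  // Call 2: memory (facts about the user)
  const memory = await ollama.chat({
    model,
    messages: [{ role: 'user', content: `Extract factual context about this user:\n${transcript}` }],
  })

  return { preferences: prefs.message.content, memory: memory.message.content }
}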

🛠️ Development

Setup Development Environment

# Clone the repo
git clone https://github.com/adityak74/opencontext.git
cd opencontext

# Install dependencies
npm install
cd ui && npm install && cd ..

# Build TypeScript (CLI + server + MCP)
npm run build

# Run tests
npm test

# Run tests with coverage
npm run test:coverage

Running the full stack locally

The UI talks to the backend server for all data — start both:

# Terminal 1 — API + MCP server (port 3000)
npm run server

# Terminal 2 — UI dev server (port 5173, proxies /api → 3000)
cd ui && npm run dev

Open http://localhost:5173. Preferences are saved server-side to ~/.opencontext/.

Project Structure

opencontext/
├── src/                        # CLI + HTTP server + MCP server
│   ├── index.ts                # CLI entry point (Commander.js)
│   ├── server.ts               # Express HTTP server (UI + REST API)
│   ├── extractor.ts            # ZIP extraction & temp management
│   ├── parsers/
│   │   ├── types.ts            # TypeScript interfaces
│   │   ├── chatgpt.ts          # Parse ChatGPT format
│   │   └── normalizer.ts       # Normalize to common schema
│   ├── formatters/
│   │   └── markdown.ts         # Markdown generation
│   ├── analyzers/
│   │   └── ollama-preferences.ts  # AI-powered analysis (Ollama)
│   ├── utils/
│   │   └── file.ts             # File I/O utilities
│   └── mcp/                    # MCP server
│       ├── index.ts            # Entry point (stdio transport)
│       ├── server.ts           # Tool definitions
│       ├── store.ts            # JSON-based context store
│       └── types.ts            # Type definitions
│
└── ui/                         # Web dashboard (React + Vite)
    └── src/
        ├── components/
        │   ├── Dashboard.tsx       # Context overview + privacy toggle
        │   ├── PreferencesEditor.tsx
        │   ├── ContextViewer.tsx
        │   ├── ConversionPipeline.tsx
        │   └── VendorExport.tsx
        ├── store/context.tsx       # React Context state
        ├── types/preferences.ts   # Shared types
        └── exporters/             # Claude, ChatGPT, Gemini exporters
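
The parsers convert each vendor's export into a common schema before markdown generation. The real interfaces live in src/parsers/types.ts; the shape below is a simplified illustration, not the actual definitions:

// Illustrative only: see src/parsers/types.ts for the real types.
interface NormalizedMessage {
  role: 'user' | 'assistant' | 'system'
  content: string
  createdAt?: string                    // ISO timestamp, when available
}

interface NormalizedConversation {
  id: string
  title: string
  messages: NormalizedMessage[]
  metadata?: Record<string, unknown>    // preserved vendor metadata
}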

Tech Stack

CLI / HTTP Server / MCP Server

  • TypeScript 5.9 - Type-safe development
  • Express 5 - HTTP server (REST API + static UI)
  • Multer - Multipart file upload handling
  • Commander.js - CLI framework
  • @modelcontextprotocol/sdk - MCP server
  • Ollama - Local LLM inference (optional)
  • adm-zip - ZIP file handling
  • chalk - Terminal colors

Web UI

  • React 19 + Vite 7 - UI framework and build tool
  • React Router 7 - Client-side routing
  • Tailwind CSS v4 - Utility-first styling
  • shadcn/ui - Component library (new-york style)
  • Lucide React - Icons

🔌 MCP Server

The opencontext MCP server lets Claude remember things across conversations using a persistent local store.

Available Tools

Tool             Trigger phrase
save_context     "remember this", "save this", "keep this in mind"
recall_context   "what did I say about...", "do you remember..."
list_contexts    "show my saved contexts"
search_contexts  Multi-keyword AND search
update_context   Update a context by ID
delete_context   Delete a context by ID
Context is stored at ~/.opencontext/contexts.json. Set OPENCONTEXT_STORE_PATH to use a custom location.
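
The store schema is not documented here, but for illustration a context entry and the multi-keyword AND search could look like this (field names are assumptions, not the actual src/mcp/store.ts types):

// Illustrative only: field names are assumed, not the real schema.
interface ContextEntry {
  id: string
  content: string
  tags: string[]
  createdAt: string   // ISO timestamp
}

// Multi-keyword AND search: every keyword must appear in the content or tags.
function searchContexts(entries: ContextEntry[], query: string): ContextEntry[] {
  const keywords = query.toLowerCase().split(/\s+/).filter(Boolean)
  return entries.filter((entry) => {
    const haystack = `${entry.content} ${entry.tags.join(' ')}`.toLowerCase()
    return keywords.every((keyword) => haystack.includes(keyword))
  })
}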

Connect to Claude Code

# Build first
npm run build

Add to ~/.claude/settings.json:

{
  "mcpServers": {
    "opencontext": {
      "command": "node",
      "args": ["/path/to/opencontext/dist/mcp/index.js"]
    }
  }
}

Dev Mode (no build required)

{
  "mcpServers": {
    "opencontext": {
      "command": "npx",
      "args": ["tsx", "/path/to/opencontext/src/mcp/index.ts"]
    }
  }
}

The Dashboard page in the web UI shows this setup guide with copy buttons.


🐳 Docker

Docker Hub: hub.docker.com/r/adityakarnam/opencontext. The official image (adityakarnam/opencontext:latest) has been scanned and contains no critical vulnerabilities.

The official image is a single container that bundles the React UI, the REST API server, and the MCP server — all based on node:25-slim.

Quick start

docker pull adityakarnam/opencontext:latest

docker run -p 3000:3000 \
  -v opencontext-data:/root/.opencontext \
  adityakarnam/opencontext:latest

Open http://localhost:3000.

With docker compose

docker compose up app

Persistent storage

All data is stored in the mounted volume — no browser localStorage is used. The UI reads and writes directly to the server.

File in /root/.opencontext/  Description
preferences.json             Your structured preferences (used by the UI form)
preferences.md               Claude preferences doc — paste into Claude Settings → Preferences
memory.md                    Claude memory doc — paste into Claude → Manage Memory
contexts.json                MCP context entries saved by Claude

Environment variables

Variable                Default                            Description
PORT                    3000                               HTTP server port
OLLAMA_HOST             http://host.docker.internal:11434  Ollama endpoint — automatically reaches Ollama running on your host machine
OLLAMA_MODEL            gpt-oss:20b                        Default model for preference analysis
OPENCONTEXT_STORE_PATH  /root/.opencontext/contexts.json   MCP context store path (preferences files live in the same directory)

host.docker.internal is a special DNS name that resolves to your host machine's IP from inside a Docker container. On Linux you may need --add-host=host.docker.internal:host-gateway.

REST API

The server exposes a REST API alongside the UI:

Endpoint                     Description
GET /api/health              Health check + active config
GET /api/ollama/models       List available Ollama models on the host
POST /api/convert            Upload a ChatGPT ZIP, run full conversion pipeline
GET /api/preferences         Load saved preferences (used by the UI on mount)
PUT /api/preferences         Save preferences — writes preferences.json, preferences.md, memory.md
GET /api/contexts            List saved MCP contexts (optional ?tag= filter)
POST /api/contexts           Save a new context
GET /api/contexts/search?q=  Search contexts
GET /api/contexts/:id        Get a context by ID
PUT /api/contexts/:id        Update a context
DELETE /api/contexts/:id     Delete a context
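
As a quick sketch, here is how a client might call these endpoints from TypeScript. The request body fields (content, tags) are assumptions based on the context store, not a documented contract:

// Sketch of client calls against a local server; body fields are assumed.
const base = 'http://localhost:3000'

// Save a new context
const saved = await fetch(`${base}/api/contexts`, {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ content: 'Prefers concise answers', tags: ['style'] }),
}).then((res) => res.json())

// Search saved contexts (multi-keyword AND search)
const hits = await fetch(`${base}/api/contexts/search?q=${encodeURIComponent('concise style')}`)
  .then((res) => res.json())

console.log(saved, hits)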

MCP stdio mode

The same image can be used as an MCP server by overriding the command:

docker run -i --rm \
  -v opencontext-data:/root/.opencontext \
  adityakarnam/opencontext:latest \
  node dist/mcp/index.js

Connect to Claude Code

Add to ~/.claude/settings.json:

{
  "mcpServers": {
    "opencontext": {
      "command": "docker",
      "args": ["run", "-i", "--rm", "-v", "opencontext-data:/root/.opencontext",
               "adityakarnam/opencontext:latest", "node", "dist/mcp/index.js"]
    }
  }
}

Connect to Claude Desktop

Add to ~/Library/Application Support/Claude/claude_desktop_config.json, then restart Claude Desktop:

{
  "mcpServers": {
    "opencontext": {
      "command": "docker",
      "args": ["run", "-i", "--rm", "-v", "opencontext-data:/root/.opencontext",
               "adityakarnam/opencontext:latest", "node", "dist/mcp/index.js"]
    }
  }
}

Usage in Claude

Once connected, Claude can save and recall context automatically. Just ask naturally:

Saving context — Claude uses save_context to store a summary with tags:

Save context via opencontext MCP in Claude Desktop

Searching context — Claude uses search_contexts to find previously saved entries:

Search context via opencontext MCP in Claude Desktop


🐛 Troubleshooting

Docker Issues

Container exits immediately with ERR_MODULE_NOT_FOUND

Make sure you're using the latest image — an older build may have missing .js extensions in ESM imports:

docker pull adityakarnam/opencontext:latest
docker run -p 3000:3000 -v opencontext-data:/root/.opencontext adityakarnam/opencontext:latest

UI can't reach Ollama

Ollama must be running on your host machine. The container uses host.docker.internal:11434 by default. On Linux, add:

docker run -p 3000:3000 \
  --add-host=host.docker.internal:host-gateway \
  -v opencontext-data:/root/.opencontext \
  adityakarnam/opencontext:latest

Ollama Issues

"Ollama is not running"

ollama serve

"Model not found"

ollama list
ollama pull gpt-oss:20b

Connection refused

# Check Ollama
curl http://localhost:11434/api/tags

# Or use remote server
npm start -- convert export.zip --ollama-host http://your-server:11434

Export Issues

"conversations.json not found"

  • Verify zip file is correct ChatGPT export
  • Re-download if corrupted

"No valid conversations"

  • Check you have conversations in ChatGPT
  • Try exporting again

Performance Issues

Analysis is slow

  • Use --skip-preferences for instant conversion
  • Try faster model: --model llama3:8b
  • Use remote GPU: --ollama-host http://gpu-server:11434

Out of memory

  • Expected for exports with 100+ conversations
  • The tool handles this gracefully by truncating input

🤝 Contributing

We welcome contributions! Here's how to get involved:

Ways to Contribute

  • 🐛 Report bugs - Open an issue
  • 💡 Suggest features - Start a discussion
  • 📖 Improve docs - Fix typos, add examples
  • 🔧 Submit code - Fix bugs, add features

Development Workflow

  1. Fork the repository
  2. Clone your fork: git clone https://github.com/<your-username>/opencontext.git
  3. Create a branch: git checkout -b feature/amazing-feature
  4. Make changes and test thoroughly
  5. Commit with clear message: git commit -m 'Add amazing feature'
  6. Push to your fork: git push origin feature/amazing-feature
  7. Open a Pull Request

Guidelines

  • Follow existing code style and conventions
  • Add comments for complex logic
  • Test with real ChatGPT exports
  • Update documentation for new features
  • Keep PRs focused (one feature/fix per PR)

Code of Conduct

Be respectful, inclusive, and collaborative. See CODE_OF_CONDUCT.md for details.


📊 Performance

Typical conversion times:

Export Size  Conversations  With AI Analysis  Without AI
Small        1-20           ~1 minute         ~5 seconds
Medium       20-100         ~5 minutes        ~10 seconds
Large        100+           ~15 minutes       ~30 seconds

Factors affecting speed:

  • Model size (gpt-oss:20b slower than llama3:8b)
  • Hardware (GPU vs CPU)
  • Ollama location (local vs remote)
  • Number of conversations

🔒 Privacy & Security

Local Processing

All data stays on your machine

  • No external API calls (except your Ollama server)
  • No telemetry or analytics
  • No data collection
  • Safe for sensitive conversations

What Gets Sent to Ollama

Only when AI analysis is enabled:

  • Conversation text → Ollama (your infrastructure)
  • You control the data and infrastructure

Want pure local processing?

npm start -- convert export.zip --skip-preferences

⚠️ Limitations

  • ChatGPT only - Currently supports only ChatGPT exports (Gemini planned)
  • Manual Claude import - No direct API (paste manually)
  • Image references - Images copied but not embedded
  • Token limits - Very large exports may be truncated

🗺️ Roadmap

Planned Features

  • Google Gemini export support
  • Perplexity export support
  • Claude Projects API integration (when available)
  • Conversation search and filtering
  • Better image handling (Base64 embedding)
  • Multi-language support
  • Web UI dashboard with privacy toggle
  • MCP server for persistent context
  • Export to Claude, ChatGPT, and Gemini formats
  • Automated tests
  • Docker support (Web UI + MCP server)

Future Possibilities

  • Direct Claude API integration
  • Conversation merging/combining
  • Custom prompt templates
  • Browser extension

Vote on features: Discussions


❓ FAQ

Is this officially supported by Anthropic or OpenAI?
No. This is a community-built tool, not affiliated with either company.

Do I need Ollama?
No, but AI analysis produces much better preferences and memory. Use --skip-preferences to skip it.

Can I use the Claude API instead of Ollama?
Not yet. Ollama is free and runs locally. We may add Claude API support later.

Will this work with 1000+ conversations?
Yes! Markdown conversion works regardless of size. AI analysis may truncate input but still produces useful results.

Can I edit the generated files?
Absolutely! They're just text files. Edit them before pasting into Claude.

Does this modify my ChatGPT account?
No, it only reads the export. Your ChatGPT data is unchanged.

Can I run this multiple times?
Yes, it will overwrite previous output. Use different -o directories to keep versions.

Is my data safe?
Yes! Everything runs locally. No external APIs are used except your own Ollama server.

📜 License

This project is licensed under the MIT License - see the LICENSE file for details.

MIT License

Copyright (c) 2026 opencontext contributors

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software...

🙏 Acknowledgments

  • Anthropic - For building Claude, the AI that inspired this tool
  • OpenAI - For ChatGPT and conversation export functionality
  • Ollama - For making local LLM inference accessible
  • Contributors - Everyone who has contributed code, ideas, and feedback
  • Community - Users who test and provide valuable feedback



💬 Support & Community

  • 🐛 Bug Reports: GitHub Issues
  • 💡 Feature Requests: GitHub Discussions
  • ❓ Questions: Open an issue with the question label
  • 📧 Contact: Open an issue for direct contact



⭐ Star us on GitHub — it motivates us to keep improving!

Made with ❤️ by the AI community

Save your context, your way — portable AI history across every platform

Report Bug · Request Feature · View Roadmap
