Contora gives AI a continuous understanding of your workspace, goals, Git changes, and coding sessions.
Instead of losing context with every new chat, model switch, IDE restart, or session recovery,
Contora continuously maintains a structured workspace memory for AI.
Built for developers who work with AI every day.
Most AI coding tools forget everything.
You switch:
- chats
- models
- sessions
- machines
- IDE windows
…and your AI assistant loses track of:
- what you were building
- which files mattered
- recent Git changes
- project intent
- active debugging context
Contora fixes that.
It continuously tracks your active workspace and turns it into a persistent AI memory layer.
So your AI can continue where you left off.
Contora is not another AI chat panel.
It is a persistent workspace memory layer.
Contora continuously maintains:
- current focus
- active files
- recent edits
- Git changes
- workspace intent
- event history
- session summaries
- compressed context memory
All stored locally inside your workspace.
Close Cursor today.
Open it tomorrow.
Your AI still understands:
- what you were working on
- which files mattered
- what changed
- your coding goal
- recent project activity
Contora continuously tracks:
- working-set files
- recent activity
- Git changes
- active editors
- project focus
Perfect for:
- monorepos
- long refactors
- debugging sessions
- AI agent workflows
- large enterprise projects
Move between:
- GPT
- Claude
- Gemini
- DeepSeek
without rebuilding workspace memory every time.
Large projects can quickly inflate token costs.
Contora reduces unnecessary AI context by:
- tracking only active workspace changes
- prioritizing important files
- compressing recent activity
- filtering noisy paths
- generating compact structured memory
Instead of sending your entire repository every session, Contora helps AI focus on what actually matters.
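As a minimal sketch of the "filtering noisy paths" idea above, the example below drops files under typical build and dependency directories before they reach AI context. The pattern list and function names are illustrative assumptions, not Contora's actual implementation (its real rules come from `.contoraignore`):

```typescript
// Illustrative list of path segments that rarely belong in AI context.
const NOISY_SEGMENTS = ["node_modules", "dist", "out", ".git", "coverage"];

// A path is "noisy" if any of its segments matches the list above.
function isNoisy(path: string): boolean {
  return path.split("/").some((segment) => NOISY_SEGMENTS.includes(segment));
}

// Keep only the files worth sending to the model.
function filterWorkspaceFiles(paths: string[]): string[] {
  return paths.filter((p) => !isNoisy(p));
}

const files = [
  "src/payments/retry.ts",
  "node_modules/lodash/index.js",
  "dist/bundle.js",
  "src/payments/retry.test.ts",
];
console.log(filterWorkspaceFiles(files));
// → ["src/payments/retry.ts", "src/payments/retry.test.ts"]
```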
Especially useful for:
- long AI coding sessions
- monorepos
- expensive frontier models
- agent loops
- high-frequency AI workflows
Designed to reduce:
- token usage
- repeated context rebuilding
- unnecessary AI calls
- AI cost overhead
Persistent workspace state:
- current task
- notes
- active files
- recent activity
- Git changes
- workspace intent
- session memory
Stored locally inside:
.contora/
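For illustration only, the locally stored state could resemble the sketch below. The field names and shape are hypothetical, not Contora's actual schema:

```json
{
  "currentFocus": "Refactor payment retry system",
  "notes": ["Retry logic is duplicated across gateways"],
  "activeFiles": ["src/payments/retry.ts", "src/payments/errors.ts"],
  "recentActivity": [{ "type": "edit", "path": "src/payments/retry.ts" }],
  "gitChanges": {
    "staged": ["src/payments/retry.ts"],
    "modified": ["src/payments/errors.ts"]
  }
}
```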
Set your current focus manually.
Contora continuously updates surrounding workspace context automatically.
Example:
Current Focus:
Refactor payment retry system
AI inferred goals:
- improve retry stability
- optimize error classification
- reduce duplicate requests
Contora automatically tracks:
- staged files
- modified files
- working-tree changes
and prioritizes them in AI context generation.
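The prioritization idea can be sketched as a simple scoring pass: files in the working set get a baseline score, and Git activity boosts them. The weights and interfaces here are illustrative assumptions, not Contora's actual ranking:

```typescript
interface RankedFile {
  path: string;
  score: number;
}

// Boost Git-changed files when ranking the working set for AI context.
function rankFiles(
  workingSet: string[],
  staged: Set<string>,
  modified: Set<string>
): RankedFile[] {
  return workingSet
    .map((path) => {
      let score = 1; // baseline: file is in the working set
      if (modified.has(path)) score += 2; // unstaged edits signal recent activity
      if (staged.has(path)) score += 3; // staged files signal current intent
      return { path, score };
    })
    .sort((a, b) => b.score - a.score);
}

const ranked = rankFiles(
  ["src/retry.ts", "src/errors.ts", "README.md"],
  new Set(["src/retry.ts"]),
  new Set(["src/errors.ts"])
);
console.log(ranked.map((f) => f.path));
// → ["src/retry.ts", "src/errors.ts", "README.md"]
```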
Large codebases generate noisy AI context.
Contora compresses workspace activity into:
- semantic summaries
- ranked priority files
- compact event history
- structured memory blocks
Designed for long-running AI workflows.
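One way "compact event history" can work is run-length merging: consecutive events of the same kind on the same file collapse into one entry. The event shape below is a hypothetical sketch, not Contora's actual format:

```typescript
interface WorkspaceEvent {
  type: "edit" | "open" | "git";
  path: string;
  count: number;
}

// Merge runs of identical events so the exported history stays short.
function compactEvents(events: WorkspaceEvent[]): WorkspaceEvent[] {
  const out: WorkspaceEvent[] = [];
  for (const ev of events) {
    const last = out[out.length - 1];
    if (last && last.type === ev.type && last.path === ev.path) {
      last.count += ev.count; // same event again: fold it into the previous entry
    } else {
      out.push({ ...ev });
    }
  }
  return out;
}

const history: WorkspaceEvent[] = [
  { type: "edit", path: "src/retry.ts", count: 1 },
  { type: "edit", path: "src/retry.ts", count: 1 },
  { type: "edit", path: "src/retry.ts", count: 1 },
  { type: "open", path: "src/errors.ts", count: 1 },
];
console.log(compactEvents(history));
// → two entries: 3 merged edits to retry.ts, then the open event
```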
Analyze project direction using optional BYOK AI providers.
Generate:
- inferred goals
- workspace intent
- feature direction
- task grouping
Results are stored locally and reused across sessions.
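A stored result could resemble the following hypothetical shape for `.contora/last-intent.json` (field names are illustrative; the real schema may differ):

```json
{
  "inferredGoals": ["improve retry stability", "reduce duplicate requests"],
  "workspaceIntent": "Harden the payment retry pipeline",
  "taskGroups": {
    "refactor": ["src/payments/retry.ts"],
    "review": ["src/payments/errors.ts"]
  }
}
```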
Save and restore:
- open editors
- workspace state
- active memory
- project context
Your AI coding session becomes persistent.
Contora is designed to work locally first.
- No cloud sync
- No chat log scraping
- No hidden telemetry
- Workspace-owned memory
Your workspace memory stays under your control.
Bring your own API keys:
- OpenAI
- Claude
- Gemini
- DeepSeek
Used only when running optional AI commands.
API keys are stored securely in:
- VS Code SecretStorage
Never inside:
- settings.json
```
Workspace Activity
        ↓
Workspace Scanner
        ↓
Memory Builder
        ↓
Context Compression
        ↓
Structured Workspace Memory
        ↓
Export / Restore / AI Workflows
```
Contora begins tracking:
- active files
- recent edits
- Git changes
- workspace activity
Example:
Refactor payment retry system
Contora continuously builds:
- workspace memory
- ranked file priority
- semantic summaries
- event history
- compressed context
One click.
Contora generates:
- structured memory
- compressed workspace context
- AI-ready summaries
for your preferred model or agent.
- AI current focus
- AI inferred goals
- Workspace summary
- Active files
- Git changes
- Context notes
- Session save / restore
- Semantic summary
- Workspace intent analysis
- Context compression preview
Keep AI aware across:
- dozens of files
- multiple sessions
- evolving goals
Give your AI assistant:
- project awareness
- Git context
- active workspace intent
- compact memory
Reduce noise using:
- ignore rules
- ranking
- token budgets
- compressed workspace memory
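The token-budget idea above can be sketched with the common chars-divided-by-four heuristic: walk the ranked snippets in order and stop once the budget is spent. Both the heuristic and the function names are assumptions for illustration, not Contora's exact method:

```typescript
// Rough token estimate: ~4 characters per token is a common approximation.
function estimateTokens(text: string): number {
  return Math.ceil(text.length / 4);
}

// Keep ranked snippets in order until the approximate budget is exhausted.
function fitToBudget(
  snippets: { path: string; content: string }[],
  tokenBudget: number
): string[] {
  const kept: string[] = [];
  let used = 0;
  for (const s of snippets) {
    const cost = estimateTokens(s.content);
    if (used + cost > tokenBudget) break; // snippets are ranked: stop at the cap
    kept.push(s.path);
    used += cost;
  }
  return kept;
}

const kept = fitToBudget(
  [
    { path: "src/retry.ts", content: "x".repeat(400) },
    { path: "src/errors.ts", content: "y".repeat(4000) },
  ],
  200
);
console.log(kept);
// → ["src/retry.ts"]  (≈100 tokens fits; the next ≈1000 tokens would not)
```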
Generate structured workspace memory for:
- agents
- workflows
- automation pipelines
- external AI tools
Reduce unnecessary token usage when using:
- GPT-5
- Claude
- Gemini
- long-context workflows
The Contora sidebar includes:
- current focus
- inferred goals
- model/runtime summary
- active files
- recent activity
- Git changes
- workspace notes
- save state
- restore editors
- session persistence
- semantic summaries
- workspace intent analysis
- compressed context previews
```
<workspace-root>/
├── .contoraignore
└── .contora/
    ├── state.json
    ├── events/
    ├── last-intent.json
    └── memory/
```
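For illustration, a `.contoraignore` might look like this, assuming gitignore-style patterns (an assumption; the actual syntax is defined by the extension):

```
node_modules/
dist/
coverage/
*.log
```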
- Local-first architecture
- No cloud workspace storage
- No hidden telemetry
- No session scraping
- BYOK optional
- Full workspace ownership
Extensions → Install from VSIX…
```shell
git clone https://github.com/frankleeeeeee/contora.git
cd contora
npm install
npm run compile
```
Press F5 to launch the Extension Development Host.
Key settings include:
| Setting | Description |
|---|---|
| `exportFormat` | `markdown` / `json` / `cursor` / `claude` / `openai` |
| `exportTokenBudget` | approximate max export tokens |
| `defaultAIMode` | `debug` / `feature` / `refactor` / `review` |
| `maxPriorityFiles` | limit ranked file count |
| `eventsInPrompt` | recent events included in exports |
| `persistEventLog` | optional JSONL event logging |
| `appendAiSummaryOnExport` | optional AI-generated summaries |
- TypeScript
- VS Code Extension API
- simple-git
- Workspace scanners
- Local memory builders
- Structured context adapters
```
src/
├── core/
├── ai/
├── scanner/
├── state/
├── ui/
├── storage/
└── env/
```
AI coding tools should not lose context every session.
Contora is building the next layer of AI-native development environments.
Future directions:
- AI timeline memory
- multi-session memory
- team workspace memory
- agent memory systems
- workspace knowledge graphs
- smarter context scheduling
- adaptive token optimization
MIT License