Releases: synapt-dev/codi

v0.18.2

28 Jan 17:06
Immutable release. Only release title and notes can be modified.
4b43abb

Documentation Updates

This release addresses documentation inconsistencies across the project:

Changes

  • Version bump: Updated from 0.18.1 to 0.18.2
  • Website version (docs/index.html): Updated from 0.14.0 to 0.18.2
  • Security policy (SECURITY.md): Updated supported versions table to include 0.18.x
  • Developer docs (CLAUDE.md, CODI.md): Removed references to deprecated Ollama Cloud provider

Files Modified

  • package.json: Bumped version to 0.18.2
  • src/version.ts: Bumped version to 0.18.2
  • docs/index.html: Updated version badge to 0.18.2
  • SECURITY.md: Added 0.18.x to supported versions table
  • CLAUDE.md: Removed Ollama Cloud provider reference (deprecated in v0.17.0)
  • CODI.md: Removed Ollama Cloud provider reference (deprecated in v0.17.0)

Notes

This is a patch release to ensure all documentation reflects the current version and provider state correctly. The Ollama Cloud provider was deprecated in version 0.17.0; users should use the regular ollama provider with the OLLAMA_HOST environment variable instead.
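
In practice the migration amounts to pointing the standard provider at the cloud host. A minimal sketch (the codi invocation is shown as a comment for illustration):

```shell
# Sketch only: replace the deprecated Ollama Cloud provider by pointing the
# regular ollama provider at the cloud host via OLLAMA_HOST.
export OLLAMA_HOST=https://ollama.com
echo "$OLLAMA_HOST"
# codi --provider ollama
```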

v0.18.1: Performance optimizations and npm publishing

27 Jan 23:36
75d8c81

What's New

Performance Optimizations

  • File content cache: LRU cache with mtime validation for file operations (2-4x speedup for multi-edits)
  • Tool definition cache: Eliminates redundant schema serialization in agent loop
  • Token count cache: Infrastructure for session-level token counting optimization
  • Directory listing optimization: Uses readdirWithFileTypes (10-100x faster for large directories)
  • Binary file detection: Skips binary files in grep operations
  • Embedding cache: SHA-256 hash keys with 1-hour TTL for RAG embeddings
  • Vector query cache: 5-minute TTL for repeated semantic searches
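
The file content cache can be pictured as a Map-based LRU keyed by path, invalidated when the file's mtime changes. A minimal sketch; the class and method names are illustrative, not Codi's actual API:

```typescript
// Sketch of an mtime-validated LRU file cache. Names (FileCache, read) are
// hypothetical; only the technique (LRU + mtime validation) comes from the notes.
import * as fs from "node:fs";
import * as os from "node:os";
import * as path from "node:path";

class FileCache {
  private cache = new Map<string, { mtimeMs: number; content: string }>();
  constructor(private maxEntries = 100) {}

  read(file: string): string {
    const mtimeMs = fs.statSync(file).mtimeMs;
    const hit = this.cache.get(file);
    if (hit && hit.mtimeMs === mtimeMs) {
      // Refresh LRU order: delete + re-insert moves the key to the end.
      this.cache.delete(file);
      this.cache.set(file, hit);
      return hit.content;
    }
    const content = fs.readFileSync(file, "utf8");
    this.cache.set(file, { mtimeMs, content });
    if (this.cache.size > this.maxEntries) {
      // Evict the least recently used entry (first key in insertion order).
      this.cache.delete(this.cache.keys().next().value!);
    }
    return content;
  }
}

// Usage: repeated reads of an unchanged file are served from the cache.
const tmp = path.join(os.tmpdir(), "codi-cache-demo.txt");
fs.writeFileSync(tmp, "hello");
const cache = new FileCache(10);
const first = cache.read(tmp);
const second = cache.read(tmp); // same mtime, cache hit
console.log(first === second); // → true
```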

npm Publishing

  • Package now published as codi-ai on npm
  • GitHub Actions workflow for automated npm trusted publishing with OIDC

Installation

npm install -g codi-ai

Full Changelog: v0.17.1...v0.18.1

v0.17.0: Workflow System, Context Debug, Enhanced Web Search

26 Jan 20:21
dfaec42

Highlights

🚀 Production-Ready Workflow System (Phase 8)

The interactive workflow system is now complete with:

  • AI-assisted workflow building
  • Multi-step pipelines with variable substitution
  • Git and PR action steps
  • Comprehensive test coverage

🔍 Context Debug Command

New /compact debug subcommand for inspecting context window state:

  • View message counts and token estimates
  • Analyze working set and indexed files
  • Debug context compaction behavior

🌐 Enhanced Web Search

Multi-engine support with improved reliability:

  • DuckDuckGo, Brave, and fallback engines
  • Better result parsing and formatting
  • Configurable search preferences

Features

  • feat(workflow): Complete Phase 8 - Production-Ready Workflow System (#173)
  • feat(compact): Add debug subcommand for context window inspection (#179)
  • feat: Implement enhanced web search phase 2 features (#170)
  • feat: Implement enhanced web search multi-engine support (#165)
  • feat: Add memory monitoring and proactive compaction (#167)
  • feat(workflow): Phase 7 AI-assisted workflow builder (#166)
  • feat(workflow): Implement Phase 6 built-in actions (#162)

Improvements

  • refactor: Address technical debt with modularization and type safety (#181)
  • fix: Add debug logging to silenced error handlers (#185)
  • feat(providers): Remove ollama-cloud provider, consolidate under ollama (#182)
  • Evolution: Symbol Index Multi-Language Extension (#172)

Fixes

  • fix(ui): Prevent UI freeze during context compaction (#174)
  • fix(workflow): Resolve failing PR review E2E tests (#180)
  • fix(ink-ui): Update model display when provider changes (#175)
  • fix(orchestrate): Graceful IPC disconnect to prevent race condition (#164)
  • fix(ink-ui): Add tool call display and improve visual stability (#163)

Breaking Changes

  • The ollama-cloud provider has been removed. Use --provider ollama with OLLAMA_HOST=https://ollama.com instead.

Full Changelog: v0.16.0...v0.17.0

v0.16.0: Debug Bridge & Global Model Maps

23 Jan 13:37
a5ddca2

Highlights

🔧 Debug Bridge

Stream session events to a JSONL file for live debugging of Codi sessions. Watch API calls, tool executions, and responses in real-time.

# Start codi with debug bridge
codi --debug-bridge

# In another terminal, watch events
tail -f ~/.codi/debug/events.jsonl | jq .

Events captured: session_start, api_request, api_response, tool_call_start, tool_call_end, tool_result, context_compaction, and more.
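
Since each line of the JSONL stream is one JSON object, it is easy to process programmatically as well as with jq. A sketch, assuming a `type` field carrying the event name (the field name is a guess; only the event names above come from the release notes):

```typescript
// Sketch: tally debug-bridge events by type from a JSONL string.
// The `type` field name is an assumption for illustration.
interface BridgeEvent { type: string; [key: string]: unknown }

function countByType(jsonl: string): Map<string, number> {
  const counts = new Map<string, number>();
  for (const line of jsonl.split("\n")) {
    if (!line.trim()) continue; // skip blank lines
    const ev = JSON.parse(line) as BridgeEvent;
    counts.set(ev.type, (counts.get(ev.type) ?? 0) + 1);
  }
  return counts;
}

const sample = [
  '{"type":"session_start"}',
  '{"type":"tool_call_start","tool":"read_file"}',
  '{"type":"tool_call_end","tool":"read_file"}',
].join("\n");
console.log(countByType(sample).get("tool_call_start")); // → 1
```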

🌐 Global Model Maps

Define model aliases in ~/.codi/models.yaml that work across all projects:

version: "1"
models:
  coder:
    provider: anthropic
    model: claude-sonnet-4-20250514
  fast:
    provider: ollama
    model: llama3.2

Then use /switch coder or /switch fast in any project.
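
Conceptually, alias lookup is a merge of global and project-level maps. A sketch under the assumption that project entries shadow global ones (the precedence rule is not stated in the notes):

```typescript
// Sketch of alias resolution; the project-over-global precedence is an
// assumption, not Codi's documented behavior.
interface ModelEntry { provider: string; model: string }

function resolveAlias(
  alias: string,
  globalMap: Record<string, ModelEntry>,
  projectMap: Record<string, ModelEntry> = {}
): ModelEntry | undefined {
  return projectMap[alias] ?? globalMap[alias];
}

const globalModels: Record<string, ModelEntry> = {
  coder: { provider: "anthropic", model: "claude-sonnet-4-20250514" },
  fast: { provider: "ollama", model: "llama3.2" },
};
console.log(resolveAlias("fast", globalModels)?.model); // → "llama3.2"
```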

➕ Interactive Model Addition

Add models to your model map interactively:

/modelmap add coder anthropic claude-sonnet-4 "My coding model"
/modelmap add --global fast ollama llama3.2 "Fast local model"

What's Changed

  • feat: implement debug bridge for live session debugging (#101)
  • feat: add interactive model addition to model map (#100)
  • feat: support global model maps (~/.codi/models.yaml) (#99)

Full Changelog: v0.15.0...v0.16.0

v0.15.0: Production Readiness Release

23 Jan 01:54
396b7f3

Production Readiness Release

This release completes the production readiness plan, bringing Codi to enterprise-grade reliability and security.

🔒 Security (Tier 1)

  • Fixed 7 dependency vulnerabilities
  • Added path traversal protection for all file tools
  • Database cleanup on exit to prevent corruption
  • Memory bounds: 500 message limit, 1-hour wall-clock timeout
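
The two memory bounds above can be sketched as a message cap plus a wall-clock check; the function names are illustrative, not Codi's actual API:

```typescript
// Sketch of the bounds described above: a 500-message cap and a 1-hour
// wall-clock session timeout. Names are hypothetical.
const MAX_MESSAGES = 500;
const MAX_SESSION_MS = 60 * 60 * 1000; // 1 hour

function trimMessages<T>(messages: T[]): T[] {
  // Keep only the most recent MAX_MESSAGES entries.
  return messages.length > MAX_MESSAGES ? messages.slice(-MAX_MESSAGES) : messages;
}

function sessionExpired(startedAt: number, now = Date.now()): boolean {
  return now - startedAt > MAX_SESSION_MS;
}

console.log(trimMessages(Array.from({ length: 600 }, (_, i) => i)).length); // → 500
console.log(sessionExpired(Date.now() - 2 * MAX_SESSION_MS)); // → true
```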

🚀 CI/CD & Publishing (Tier 2)

  • Added macOS and Windows CI runners
  • npm publish workflow for tagged releases
  • 76 new agent unit tests (80%+ coverage)
  • .env.example for environment documentation

⚡ Reliability (Tier 3)

  • Concurrency safety: Semaphore for parallel tools (max 8)
  • Rate limiter backpressure (max 100 queued requests)
  • Graceful shutdown with 5-second timeout
  • 74 new command unit tests
  • Improved README with troubleshooting guide
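
A counting semaphore of the kind described above (max 8 parallel tools) can be sketched in a few lines; this is a generic implementation of the technique, not Codi's code:

```typescript
// Minimal counting semaphore sketch: acquire a permit before running a tool,
// release it afterwards. With 8 permits, at most 8 tools run concurrently.
class Semaphore {
  private queue: Array<() => void> = [];
  constructor(private available: number) {}

  async acquire(): Promise<void> {
    if (this.available > 0) {
      this.available--;
      return;
    }
    await new Promise<void>((resolve) => this.queue.push(resolve));
  }

  release(): void {
    const next = this.queue.shift();
    if (next) next(); // hand the permit directly to a waiter
    else this.available++;
  }
}

async function withPermit<T>(sem: Semaphore, fn: () => Promise<T>): Promise<T> {
  await sem.acquire();
  try {
    return await fn();
  } finally {
    sem.release();
  }
}

// Usage: wrap every tool execution, e.g. withPermit(sem, () => runTool(call)).
const sem = new Semaphore(8);
```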

🎯 Performance & Observability (Tier 4)

  • Token count caching (WeakMap-based, avoids O(N²))
  • Gzip compression for tool result cache
  • Parallel Ollama embedding requests
  • SQLite VACUUM after index rebuild
  • Comprehensive threat model in SECURITY.md
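
The WeakMap-based token caching works because message objects are stable: each object's count is computed once, so re-counting a growing conversation stays O(N) instead of O(N²). A sketch; the 4-chars-per-token estimate is a common rough heuristic, not Codi's exact counting logic:

```typescript
// Sketch of WeakMap-based token-count caching. Cached entries are garbage
// collected along with their message objects, so the cache cannot leak.
interface Message { content: string }

const tokenCache = new WeakMap<Message, number>();
let computations = 0; // instrumentation for the demo

function countTokens(msg: Message): number {
  const cached = tokenCache.get(msg);
  if (cached !== undefined) return cached;
  computations++;
  const n = Math.ceil(msg.content.length / 4); // rough heuristic estimate
  tokenCache.set(msg, n);
  return n;
}

function conversationTokens(messages: Message[]): number {
  return messages.reduce((sum, m) => sum + countTokens(m), 0);
}

const msgs: Message[] = [{ content: "hello world" }, { content: "hi" }];
conversationTokens(msgs);
conversationTokens(msgs); // second pass is all cache hits
console.log(computations); // → 2
```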

Test Coverage

  • 1781 tests passing
  • Agent: 80%+
  • Tools: 95%
  • Commands: 60%+

Full Changelog: v0.14.1...v0.15.0

v0.14.0: Multi-Agent Orchestration

20 Jan 14:46
5000a31

Multi-Agent Orchestration

Run multiple AI agents in parallel, each in an isolated git worktree, with permission requests bubbling up to the commander for human approval.

New Commands

Command                      Description
/delegate <branch> <task>    Spawn a worker agent in a new worktree
/workers                     List active workers and their status
/workers cancel <id>         Cancel a running worker
/worktrees                   List all managed worktrees
/worktrees cleanup           Remove completed worktrees

How It Works

┌─────────────────────────────────────────────┐
│           Commander (has readline)          │
│  ┌─────────────────────────────────────┐   │
│  │  Unix Socket Server                  │   │
│  │  ~/.codi/orchestrator.sock          │   │
│  └──────────┬──────────────┬───────────┘   │
└─────────────┼──────────────┼───────────────┘
              │              │
       ┌──────┴───┐    ┌─────┴────┐
       │ Worker 1 │    │ Worker 2 │
       │ (IPC     │    │ (IPC     │
       │  Client) │    │  Client) │
       └──────────┘    └──────────┘
        Worktree A      Worktree B
  1. Commander spawns workers via /delegate
  2. Each worker runs in an isolated git worktree
  3. When a worker needs permission (write file, run command), it sends a request via IPC
  4. Commander prompts the user and sends the response back
  5. Worker continues with the approved/denied result
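
The permission round-trip in steps 3–5 can be sketched as a pair of message shapes; the field names here are assumptions for illustration, not Codi's actual wire format:

```typescript
// Sketch of the IPC permission round-trip. Message shapes are hypothetical.
interface PermissionRequest {
  kind: "permission_request";
  workerId: string;
  action: "write_file" | "run_command";
  detail: string;
}

interface PermissionResponse {
  kind: "permission_response";
  workerId: string;
  approved: boolean;
}

// Commander side: prompt the user (here injected as `ask`) and answer.
function handleRequest(
  req: PermissionRequest,
  ask: (prompt: string) => boolean
): PermissionResponse {
  const approved = ask(`Worker ${req.workerId} wants to ${req.action}: ${req.detail}`);
  return { kind: "permission_response", workerId: req.workerId, approved };
}

const res = handleRequest(
  { kind: "permission_request", workerId: "w1", action: "write_file", detail: "src/index.ts" },
  () => true // auto-approve for the demo
);
console.log(res.approved); // → true
```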

CLI Child Mode Flags

For advanced use cases, you can run Codi as a child agent:

  • --child-mode - Run as child agent (connects to commander via IPC)
  • --socket-path <path> - IPC socket path for permission routing
  • --child-id <id> - Unique worker identifier
  • --child-task <task> - Task description for the worker

License Change

Codi is now dual-licensed under AGPL-3.0 (open source) and a commercial license. See LICENSING.md for details.

  • Open source use: AGPL-3.0 (copyleft, share-alike)
  • Commercial use: Contact for licensing

Other Improvements

  • Improved worker agent prompts with structured context
  • Improved summarization prompts for better context compression
  • Added feature completeness checklist to PR review process
  • Updated /help with new commands

Tested Providers

  • ✅ Anthropic (Claude)
  • ✅ OpenAI (GPT-4, GPT-5)
  • ✅ Ollama (glm-4.7:cloud, qwen3-coder:480b-cloud)

Full Changelog: v0.9.1...v0.14.0

v0.13.0: GitHub Pages Enhancement & Ollama GLM Default

18 Jan 15:25
f66b98c

Features

  • GitHub Pages Enhancement: Complete redesign of documentation site

    • Added demo GIF with terminal window styling and animated gradient border
    • Added badges for version, license, Node requirement, and default model
    • Added Tools section showing all 12 built-in tools
    • Added Usage & Models command section
    • Visual improvements: animations, glow effects, smooth scrolling
    • Open Graph meta tags for better social sharing
    • Improved mobile responsiveness
  • Default Ollama Model: Changed default from llama3.2 to glm-4.7:cloud

    • Applies to both local Ollama and Ollama Cloud providers

Configuration

  • Updated codi-models.yaml with current defaults:
    • Opus updated to claude-opus-4-5-20251101
    • Added gpt-5 and gpt-5-nano for OpenAI
    • Renamed llama3 to glm using glm-4.7:cloud
    • Fallback chain now prioritizes opus → sonnet → haiku → gpt-5 → glm

Installation

git clone https://github.com/laynepenney/codi.git
cd codi
corepack enable
pnpm install && pnpm build
pnpm link --global

Full Changelog: v0.12.0...v0.13.0

v0.12.0: Default to Opus 4.5

18 Jan 14:42
26517fc

What's New

Features

  • Default to Opus 4.5: Anthropic provider now defaults to Claude Opus 4.5 (claude-opus-4-5-20251101) instead of Sonnet 4 (#47)

Bug Fixes

  • Fixed handling of malformed JSON with trailing quotes after numbers (#46)

Full Changelog

v0.11.0...v0.12.0

v0.11.0: CLI shortcuts, RAG improvements, documentation updates

18 Jan 14:26
7a4a03f

What's New

Features

  • CLI shortcuts: Added ! prefix for quick bash commands and ? prefix for help search (#43)
  • RAG default to Ollama: Auto mode now defaults to Ollama for embeddings (free/local) (#39)

Improvements

  • RAG indexing spinner: Visual feedback during RAG indexing (#41)
  • Comprehensive README update: Added missing CLI options, commands, tools, and configuration documentation (#42)
  • PR process: Updated dev docs to require tests to pass before merging (#44)

Bug Fixes

  • Fixed /setup alias conflict with /init command
  • Fixed RAG embeddings test to match Ollama-preferred behavior
  • Fixed onError callback to receive Error objects instead of strings
  • Fixed getRegisteredCommands → getAllCommands build error

Full Changelog

v0.10.0...v0.11.0

v0.10.0: Dynamic Context Config

18 Jan 13:49
29cf7e0

What's New

Dynamic Context Window Lookup

  • Ollama models now query /api/show for actual context window size
  • Results are cached to avoid repeated API calls
  • Works with both ollama and ollama-cloud providers

Tier-Based Context Configuration

  • 4 tiers based on model context window: small (0-16k), medium (16k-64k), large (64k-200k), xlarge (200k+)
  • Each tier has optimized settings for context usage, safety buffers, and tool result limits
  • Configuration scales automatically when switching providers
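
The tier thresholds above map directly to a lookup function; this sketch treats each boundary as exclusive on the low side, which is an assumption about how edge values are classified:

```typescript
// Sketch of tier selection from the context window sizes listed above.
// Boundary handling (inclusive vs. exclusive) is an assumption.
type Tier = "small" | "medium" | "large" | "xlarge";

function tierFor(contextWindow: number): Tier {
  if (contextWindow < 16_000) return "small";
  if (contextWindow < 64_000) return "medium";
  if (contextWindow < 200_000) return "large";
  return "xlarge";
}

console.log(tierFor(8_192));   // → "small"
console.log(tierFor(128_000)); // → "large"
```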

Enhanced /status Command

  • Visual progress bar showing context usage percentage
  • Token breakdown by category (messages, system prompt, tools)
  • Context budget info including tier name, output reserve, and safety buffer
  • Message breakdown by role (user, assistant, tool results)

Full Changelog

v0.9.1...v0.10.0