A powerful command-line interface for interacting with multiple AI providers (Claude, GPT, Ollama) built with clean architecture principles.
- 🤖 Support for multiple AI providers (Claude/Anthropic, OpenAI, Ollama, Generic OpenAI-compatible APIs)
- 🎨 Beautiful terminal UI built with React/Ink
- 💬 Interactive conversation management with history
- 🛠️ Tool execution support (shell commands, file operations)
- 📝 Configuration wizard for easy setup
- 🏗️ Clean 3-layer architecture (CLI → Core → Infrastructure)
- 🔒 Type-safe with TypeScript
- 💾 Persistent conversation history
- 📋 Built-in Todos tracking and management
- ⌨️ Layer-based keyboard shortcuts system
- 🔐 Permission modes (MVP/Interactive) for tool execution
- 🎯 Slash commands with auto-suggestions
- 🔌 Extensible integrations (VS Code, MCP, A2A)
```bash
npm install --global codeh-cli
```

Or for development:

```bash
git clone <repository-url>
cd codeh-cli
npm install
npm run build
npm link
```

Configure via environment variables (highest priority):
```bash
# Required: provider selection
export CODEH_PROVIDER=anthropic   # anthropic | openai | ollama | generic

# Required: model name
export CODEH_MODEL=claude-3-5-sonnet-20241022

# Required: API base URL
export CODEH_BASE_URL=https://api.anthropic.com

# Required: API key (not needed for Ollama)
export CODEH_API_KEY=sk-ant-...

# Optional: max tokens (default: 4096)
export CODEH_MAX_TOKEN=4096

# Optional: temperature (default: 0.7)
export CODEH_TEMPERATURE=0.7
```

Example configurations:
```bash
# Anthropic/Claude
export CODEH_PROVIDER=anthropic
export CODEH_MODEL=claude-3-5-sonnet-20241022
export CODEH_BASE_URL=https://api.anthropic.com
export CODEH_API_KEY=sk-ant-...

# OpenAI/GPT
export CODEH_PROVIDER=openai
export CODEH_MODEL=gpt-4
export CODEH_BASE_URL=https://api.openai.com
export CODEH_API_KEY=sk-...

# Ollama (local - no API key needed)
export CODEH_PROVIDER=ollama
export CODEH_MODEL=llama2
export CODEH_BASE_URL=http://localhost:11434

# Generic OpenAI-compatible API
export CODEH_PROVIDER=generic
export CODEH_MODEL=your-model
export CODEH_BASE_URL=https://your-api.com
export CODEH_API_KEY=your-key
```

Or use the interactive configuration wizard:

```bash
codeh config
```

Configuration is saved to `~/.codeh/configs.json`.
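The saved file mirrors the environment variables above. The exact schema is version-dependent; the field names below are an illustrative assumption, not the guaranteed format:

```json
{
  "provider": "anthropic",
  "model": "claude-3-5-sonnet-20241022",
  "baseUrl": "https://api.anthropic.com",
  "apiKey": "sk-ant-...",
  "maxToken": 4096,
  "temperature": 0.7
}
```

Environment variables take priority over this file, so you can keep a default here and override per-shell.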
```bash
# Start interactive chat
codeh

# Show welcome screen
codeh welcome

# Open configuration wizard
codeh config

# Show help
codeh --help
```

This project follows Clean Architecture with 3 layers:
- Location: `source/cli/`
- Purpose: User interface and interaction
- Components:
  - `components/` - Atomic Design components (atoms, molecules, organisms)
    - `atoms/` - Button, Spinner, ProgressBar, StatusIndicator, Logo
    - `molecules/` - InputBox, MessageBubble, MarkdownText, ToolCallDisplay, ToolResultDisplay
    - `organisms/` - ConversationArea, TodosDisplay, SlashSuggestions, Navigation, Footer, Card
  - `screens/` - Main UI screens (Home, Welcome, Config)
  - `presenters/` - Presentation logic (separates UI from business logic)
  - `hooks/` - Custom React hooks (useHomeLogic, useExitConfirmation)
  - `app.tsx` - Root component with ShortcutProvider
  - `cli.tsx` - Entry point
- Location: `source/core/`
- Purpose: Business rules and domain logic
- Components:
  - `domain/` - Domain models, value objects, and interfaces
    - `models/` - Message, Conversation, Turn, Configuration, Todo, ToolExecutionContext, UpgradeInfo
    - `valueObjects/` - Provider, ModelInfo
    - `interfaces/` - IApiClient, IConfigRepository, etc.
  - `application/` - Application services and orchestrators
    - `CodehClient.ts` - Main orchestrator for AI interactions
    - `CodehChat.ts` - Conversation management
    - `ToolExecutionOrchestrator.ts` - Tool execution workflow orchestration
    - `services/` - Input classifier, output formatter, etc.
  - `tools/` - Tool system (shell execution, file operations)
  - `input/` - Keyboard shortcuts system (ShortcutManager, ShortcutContext)
  - `di/` - Dependency injection container
- Location: `source/infrastructure/`
- Purpose: External integrations and data access
- Components:
  - `api/` - API clients for different providers
    - `clients/` - SDK adapters using official SDKs (@anthropic-ai/sdk, openai, ollama)
      - `AnthropicSDKAdapter.ts` - Official Anthropic SDK wrapper
      - `OpenAISDKAdapter.ts` - Official OpenAI SDK wrapper
      - `OllamaSDKAdapter.ts` - Official Ollama SDK wrapper
      - `GenericSDKAdapter.ts` - OpenAI SDK for generic OpenAI-compatible APIs
    - `ApiClientFactory.ts` - Factory for creating SDK-based clients
    - `HttpClient.ts` - Low-level HTTP client (for edge cases)
  - `config/` - Configuration repositories
    - `EnvConfigRepository.ts` - Environment variables
    - `FileConfigRepository.ts` - File-based config
    - `ConfigLoader.ts` - Config merging strategy
  - `permissions/` - Permission mode management (MVP/Interactive)
    - `PermissionModeManager.ts` - Runtime permission mode switching
  - `session/` - Session management and persistence
  - `history/` - Conversation history persistence
  - `integrations/` - External tool integrations
    - `vscode/` - VS Code extension integration
    - `mcp/` - Model Context Protocol client
    - `a2a/` - Agent-to-Agent server
  - `filesystem/` - File operations
  - `process/` - Shell command execution
```
source/
├── cli/                      # LAYER 1: Presentation
│   ├── components/
│   │   ├── atoms/            # Basic UI elements (Button, Spinner, ProgressBar)
│   │   ├── molecules/        # Composite components (InputBox, MessageBubble)
│   │   └── organisms/        # Complex components (TodosDisplay, SlashSuggestions, Footer)
│   ├── screens/              # Main screens (Home, Welcome, Config)
│   ├── presenters/           # Presentation logic
│   ├── hooks/                # Custom hooks (useHomeLogic, useExitConfirmation)
│   ├── app.tsx               # Root component with ShortcutProvider
│   └── cli.tsx               # Entry point
├── core/                     # LAYER 2: Business Logic
│   ├── domain/
│   │   ├── models/           # Domain entities (Message, Todo, Turn, ToolExecutionContext)
│   │   ├── valueObjects/     # Value objects (Provider, ModelInfo)
│   │   └── interfaces/       # Contracts (IApiClient, IConfigRepository)
│   ├── application/          # Application services
│   │   ├── CodehClient.ts    # Main orchestrator
│   │   ├── CodehChat.ts      # Conversation management
│   │   └── ToolExecutionOrchestrator.ts  # Tool execution workflow
│   ├── tools/                # Tool system (FileOps, Shell)
│   ├── input/                # Keyboard shortcuts (ShortcutManager, ShortcutContext)
│   └── di/                   # Dependency injection container
└── infrastructure/           # LAYER 3: External Services
    ├── api/                  # SDK adapters (@anthropic-ai/sdk, openai, ollama)
    ├── config/               # Configuration (EnvConfig, FileConfig, ConfigLoader)
    ├── permissions/          # Permission mode management (PermissionModeManager)
    ├── session/              # Session management
    ├── history/              # History persistence
    ├── integrations/         # External integrations (vscode, mcp, a2a)
    ├── filesystem/           # File operations
    └── process/              # Process execution
```
```bash
# Full build (TypeScript + Babel)
npm run build

# TypeScript only
npm run build:ts

# Babel only
npm run build:babel

# Watch mode
npm run dev

# Tests
npm test

# Lint
npm run lint

# Run
npm start
```

All providers use official SDKs for better reliability and automatic updates:
| Provider | SDK Package | API Key Required | Local | Streaming |
|---|---|---|---|---|
| Anthropic (Claude) | `@anthropic-ai/sdk` | ✅ | ❌ | ✅ |
| OpenAI (GPT) | `openai` | ✅ | ❌ | ✅ |
| Ollama | `ollama` | ❌ | ✅ | ✅ |
| Generic OpenAI-compatible | `openai` (custom URL) | ✅ | Depends | ✅ |
🎉 Version 2.0 migrated from custom HTTP clients to official provider SDKs:
Benefits:
- ✅ Fixed HTTP 413 errors with better request handling
- ✅ Automatic retry logic built into official SDKs
- ✅ Better error messages directly from provider SDKs
- ✅ Type safety improvements with official TypeScript definitions
- ✅ Future-proof - automatic updates from providers
- ✅ Reduced maintenance - no custom HTTP client code
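The adapter selection behind `ApiClientFactory.ts` can be sketched as a lookup keyed on the provider value. The adapter names match the files listed above, but the `ClientConfig` shape and the `adapterFor` helper are illustrative assumptions, not the actual factory API:

```typescript
// Sketch: map a CODEH_PROVIDER value to the SDK adapter that wraps
// the matching official SDK. Names follow the files listed above;
// the function signature here is an assumption for illustration.
type ProviderName = 'anthropic' | 'openai' | 'ollama' | 'generic';

interface ClientConfig {
  provider: ProviderName;
  model: string;
  baseUrl: string;
  apiKey?: string; // optional: Ollama runs locally without a key
}

const ADAPTERS: Record<ProviderName, string> = {
  anthropic: 'AnthropicSDKAdapter', // wraps @anthropic-ai/sdk
  openai: 'OpenAISDKAdapter',       // wraps openai
  ollama: 'OllamaSDKAdapter',       // wraps ollama
  generic: 'GenericSDKAdapter',     // openai SDK pointed at a custom base URL
};

function adapterFor(config: ClientConfig): string {
  // Every provider except Ollama requires an API key.
  if (config.provider !== 'ollama' && !config.apiKey) {
    throw new Error(`${config.provider} requires CODEH_API_KEY`);
  }
  return ADAPTERS[config.provider];
}
```

Because the generic adapter reuses the `openai` SDK with a custom base URL, any OpenAI-compatible endpoint gets retries and typed errors for free.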
Supported Generic APIs:
- LiteLLM - Unified API for 100+ LLMs
- Google Gemini OpenAI compatibility
- LM Studio - Local models with OpenAI API
- ai.megallm.io - Vietnamese LLM provider
- Any OpenAI-compatible API endpoint
Track AI-generated tasks and subtasks directly in the CLI:
```typescript
// Domain model
class Todo {
  id: string;
  content: string;
  status: 'pending' | 'in_progress' | 'completed';
  timestamp: Date;
}
```

Features:
- Real-time progress tracking with visual indicators
- Status-based grouping (In Progress, Pending, Completed)
- Progress bar showing overall completion
- Automatic parsing from AI responses
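The progress bar and status grouping follow directly from the model above. The helper names `progressOf` and `groupByStatus` are hypothetical, chosen for illustration rather than taken from the codebase:

```typescript
// Mirror of the Todo domain model shown above.
type TodoStatus = 'pending' | 'in_progress' | 'completed';

interface Todo {
  id: string;
  content: string;
  status: TodoStatus;
  timestamp: Date;
}

// Fraction of todos completed (0..1), driving the progress bar.
function progressOf(todos: Todo[]): number {
  if (todos.length === 0) return 0;
  const done = todos.filter((t) => t.status === 'completed').length;
  return done / todos.length;
}

// Bucket todos by status for the In Progress / Pending / Completed sections.
function groupByStatus(todos: Todo[]): Record<TodoStatus, Todo[]> {
  const groups: Record<TodoStatus, Todo[]> = {
    pending: [],
    in_progress: [],
    completed: [],
  };
  for (const t of todos) groups[t.status].push(t);
  return groups;
}
```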
Control tool execution with two modes:
- MVP Mode (YOLO): Auto-approve all tool executions - fast development workflow
- Interactive Mode: Require user approval before executing tools - safe production workflow
Toggle between modes at runtime with Shift+Tab. The current mode is displayed in the footer.
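The runtime switch can be sketched as a tiny state machine; the class below is an illustrative stand-in, not the actual `PermissionModeManager` API:

```typescript
// Sketch of runtime permission-mode switching between the two modes
// described above. Names are assumptions for illustration.
type PermissionMode = 'mvp' | 'interactive';

class PermissionModeSketch {
  constructor(private mode: PermissionMode = 'interactive') {}

  // Bound to Shift+Tab: flip between auto-approve and ask-first.
  toggle(): PermissionMode {
    this.mode = this.mode === 'mvp' ? 'interactive' : 'mvp';
    return this.mode;
  }

  // MVP mode auto-approves tool executions; interactive mode asks the user.
  shouldAutoApprove(): boolean {
    return this.mode === 'mvp';
  }
}
```

Defaulting to interactive keeps the safe behavior unless the user explicitly opts into auto-approval.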
Layer-based keyboard shortcut management:
```typescript
useShortcut({
  key: 'shift+tab',
  handler: () => toggleMode(),
  layer: 'input', // or 'screen' or 'global'
  description: 'Toggle permission mode',
  source: 'Home'
});
```

Features:
- Layer-based priority system (input > screen > global)
- Conditional shortcuts with an `enabled` function
- Centralized shortcut management
- Conflict detection and resolution
Quick actions via command palette:
- Type `/` to show available commands
- Fuzzy search and auto-suggestions
- Tab or Enter to select
- Command history
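The fuzzy suggestion step can be sketched with a subsequence matcher. The command list and the matcher below are illustrative assumptions; the real palette may rank matches differently:

```typescript
// Sketch of the suggestion filter behind the slash-command palette.
// The command list and subsequence matcher are illustrative.
const COMMANDS = ['/help', '/config', '/clear', '/history'];

// True when every character of `query` appears, in order, in `candidate`.
function fuzzyMatch(query: string, candidate: string): boolean {
  let i = 0;
  for (const ch of candidate) {
    if (i < query.length && ch === query[i]) i++;
  }
  return i === query.length;
}

// Filter the palette as the user types, e.g. "/cf" still finds "/config".
function suggest(input: string): string[] {
  return COMMANDS.filter((c) => fuzzyMatch(input.toLowerCase(), c));
}
```

Subsequence matching is forgiving of skipped characters, which is why abbreviations like `/cf` resolve without an exact prefix.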
Extend CODEH with external tools:
- VS Code Extension: Bidirectional communication with VS Code
- MCP Client: Connect to Model Context Protocol servers
- A2A Server: Expose CODEH as an agent-to-agent service
- React + Ink: Terminal UI framework
- TypeScript: Type safety
- Babel: Transpilation
- Dependency Injection: Custom DI container
- Clean Architecture: 3-layer separation
- Atomic Design: Component organization
- Presenter Pattern: Business logic separation
- Immutable Domain Models: Pure functional domain layer
MIT