Go-native AI coding CLI with multi-model lens architecture.
Built by HowlerOps, achieving feature parity with leading AI coding tools while delivering Go's memory efficiency and startup speed.
```bash
# Go (recommended)
go install github.com/howlerops/oculus/cmd/oculus@latest

# npm
npm install -g @howlerops/oculus

# Binary (macOS/Linux/Windows)
# Download from https://github.com/howlerops/oculus/releases
```

```bash
# Interactive REPL (launches TUI)
oculus

# Single prompt (non-interactive)
oculus -p "Explain this codebase"

# With specific model
oculus -m opus -p "Review this PR"

# Permission mode shortcuts
oculus --yolo "Fix all the tests"
oculus --auto-edit "Refactor the auth package"

# Ralph persistence loop
oculus --ralph "Implement auth system"

# Consensus planning
oculus --plan "Design new API layer"
```

On first run, Oculus opens your browser for OAuth authentication with Anthropic when needed. You can also bring your own provider credentials directly, including `ANTHROPIC_API_KEY`, `OPENAI_API_KEY`, `LITELLM_BASE_URL`, `BEDROCK_REGION` / `BEDROCK_ENDPOINT_FAMILY` / `BEDROCK_BASE_URL`, and `GOOGLE_API_KEY` / `GEMINI_API_KEY`.
Oculus routes work through specialized lenses:
- Focus: Reasoning, planning, orchestration
- Scan: File search, codebase exploration, research
- Craft: Code writing, editing, command execution
Each lens can target a different model and provider.
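As a rough mental model, lens resolution is a lookup from lens to a (model, provider) pair with a default fallback; a minimal Go sketch (the types, names, and default values here are illustrative, not Oculus's actual internals):

```go
package main

import "fmt"

// Lens identifies a functional role; each maps to its own model and provider.
type Lens string

const (
	Focus Lens = "focus" // reasoning, planning, orchestration
	Scan  Lens = "scan"  // file search, codebase exploration, research
	Craft Lens = "craft" // code writing, editing, command execution
)

// Target pairs a model with a provider for one lens.
type Target struct {
	Model, Provider string
}

// Router resolves a lens to its configured target, falling back to a default.
type Router struct {
	Targets map[Lens]Target
	Default Target
}

func (r Router) Resolve(l Lens) Target {
	if t, ok := r.Targets[l]; ok {
		return t
	}
	return r.Default
}

func main() {
	r := Router{
		Targets: map[Lens]Target{
			Focus: {"claude-opus-4-6", "anthropic"},
			Scan:  {"gemini-2.5-flash", "gemini"},
		},
		Default: Target{"claude-sonnet-4-6", "anthropic"},
	}
	fmt.Println(r.Resolve(Scan).Model)  // gemini-2.5-flash
	fmt.Println(r.Resolve(Craft).Model) // unconfigured lens falls back to the default
}
```

Keeping the mapping in data rather than code is what lets each lens retarget any provider from settings alone.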
- Anthropic: Claude Opus 4.6, Sonnet 4.6, Haiku 4.5 (streaming + extended thinking)
- OpenAI: GPT-4o, o3, and any OpenAI-compatible endpoint
- LiteLLM: OpenAI-compatible gateway routing through `LITELLM_BASE_URL` or `settings.providers.litellm`
- Bedrock OpenAI-compatible: Amazon Bedrock `runtime` or `mantle` OpenAI-compatible endpoints via `BEDROCK_REGION`/`BEDROCK_ENDPOINT_FAMILY`, or an explicit `BEDROCK_BASE_URL`
- Google AI: Native Gemini 2.5 API bridge via the official Go SDK (`google.golang.org/genai`)
- Ollama: Local models (Llama 3, Mistral, Qwen) via OpenAI-compatible interface
- CLI Bridges: Claude Code CLI, Codex CLI (session continuity + resume), Gemini CLI (best-effort text bridge)
- Ralph Loop (`/ralph` or `--ralph`): PRD-driven persistence with story tracking, acceptance criteria verification, and evidence-based progress logging
- Boulder Crash Recovery (`/resume`): Automatically resume interrupted Ralph sessions with full state restoration
- Consensus Planning (`/plan` or `--plan`): Planner → Architect → Critic review loop (up to 5 rounds)
- Ultrawork: Parallel dispatch with dependency DAG and model tier routing
- 5 Agent Personas: Architect, Critic, Executor, Explorer, Planner (with semi-formal reasoning protocols)
- Smart Model Routing: Complexity-aware model selection with 3-tier routing (Haiku → Sonnet → Opus)
- Category-Based Task Routing: 6 task categories with automatic classification and optimal agent assignment
- Fallback Model Chain: Auto-retry on rate limit with a configurable fallback provider sequence
- Wisdom Accumulation (`/wisdom`): Cross-task learning that carries insights between sessions
- Skill Auto-Injection: Keyword triggers automatically activate relevant skills (e.g., "autopilot" → autopilot mode)
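The fallback model chain above can be sketched as a retry loop that advances to the next provider only on rate-limit errors; `callFn`, `errRateLimited`, and `withFallback` are hypothetical stand-ins, not Oculus's real API:

```go
package main

import (
	"errors"
	"fmt"
)

// errRateLimited stands in for a provider's HTTP 429 response.
var errRateLimited = errors.New("rate limited")

// callFn invokes one provider and returns its completion or an error.
type callFn func(prompt string) (string, error)

// withFallback tries each provider in order, advancing only on rate-limit
// errors; any other error aborts immediately so real failures surface.
func withFallback(chain []callFn, prompt string) (string, error) {
	var lastErr error
	for _, call := range chain {
		out, err := call(prompt)
		if err == nil {
			return out, nil
		}
		if !errors.Is(err, errRateLimited) {
			return "", err // non-retryable: do not mask it
		}
		lastErr = err
	}
	return "", fmt.Errorf("all providers exhausted: %w", lastErr)
}

func main() {
	primary := func(string) (string, error) { return "", errRateLimited }
	backup := func(p string) (string, error) { return "ok: " + p, nil }
	out, err := withFallback([]callFn{primary, backup}, "hi")
	fmt.Println(out, err) // ok: hi <nil>
}
```

Distinguishing rate-limit errors from other failures is the key design choice: a bad request should fail fast rather than burn through every provider in the chain.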
Auto-detects API keys (`ANTHROPIC_API_KEY`, `OPENAI_API_KEY`, `GOOGLE_API_KEY`/`GEMINI_API_KEY`), configured LiteLLM endpoints, Bedrock runtime/mantle OpenAI-compatible endpoints, installed CLIs (Claude, Codex, Gemini), and local Ollama instances. Works without Anthropic: any provider can be primary. The `/setup` command recommends an optimal lens configuration based on what's available.
Episode-based compaction with LCM dual-threshold engine:
- Below 70%: Zero overhead
- 70-90%: Async compaction between turns
- Above 90%: Blocking compaction with TF-IDF keyword extraction
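The dual-threshold policy reduces to a three-way decision on context utilization; a minimal sketch using the 70%/90% numbers above (the type and function names are hypothetical, not Oculus's internals):

```go
package main

import "fmt"

// CompactionAction is what the engine does at the current utilization.
type CompactionAction string

const (
	NoCompaction    CompactionAction = "none"     // below the soft threshold
	AsyncCompaction CompactionAction = "async"    // compact between turns
	BlockCompaction CompactionAction = "blocking" // compact before continuing
)

// decideCompaction maps context utilization (0.0–1.0) to an action
// using the 70% / 90% dual thresholds.
func decideCompaction(utilization float64) CompactionAction {
	switch {
	case utilization < 0.70:
		return NoCompaction
	case utilization < 0.90:
		return AsyncCompaction
	default:
		return BlockCompaction
	}
}

func main() {
	for _, u := range []float64{0.50, 0.80, 0.95} {
		fmt.Printf("%.0f%% -> %s\n", u*100, decideCompaction(u))
	}
}
```

The middle band is what makes the scheme cheap: compaction runs between turns, so the user only ever waits when utilization has already crossed the hard 90% line.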
Prompt caching via ephemeral cache control blocks for identity, tool descriptions, and user context.
Bash, File Read/Write/Edit, Glob, Grep, Agent spawning, WebSearch, WebFetch, REPL, Notebook editing, LSP (gopls), MCP (client/auth/ListResources/ReadResource), Task/Team/TodoWrite, Cron scheduling, Plan mode, Brief mode, Worktree, Config, Sleep, ToolSearch, SyntheticOutput, RemoteTrigger, AskUser, Skill runner, SendMessage, PowerShell.
/help, /model, /diff, /commit, /compact, /permissions, /config, /settings, /init, /memory, /plan, /review, /skills, /plugin, /agents, /advisors, /issue, /branch, /add-dir, /copy, /continue, /status, /doctor, /version, /setup, /lsp-install, /upgrade, /security-review, /release-notes, /pr_comments, /feedback, /onboarding, /good, /btw, /terminal-setup, /desktop, /mobile, /install-github-app, /install-slack-app, /backfill-sessions, /break-cache, /cost, /wisdom, /categories, /triggers, and session commands (/login, /logout, /resume, /session).
Built on the Charm v2 ecosystem (bubbletea v2, lipgloss v2, bubbles v2):
- Interactive model picker with provider-grouped display and pricing info
- Interactive `/settings` page with tabbed interface for runtime configuration
- Permission mode picker at startup (Ask, Auto Edit, Don't Ask, Bypass All, Plan)
- Inline permission dialogs with queued approval workflow
- Loading animations with mini-dot spinners and elapsed time tracking
- Color-coded tool call badges with per-tool progress tracking
- Markdown rendering with syntax highlighting (glamour)
- Multi-line input with Ctrl+R history search and autocomplete dropdown
- Scrollable viewport, task panel, status bar, context bar, help bar
- Git branch display in status bar
- Hierarchical tree view for file path display
- Runtime permission management with `/permissions` switching
- Mouse toggle (Ctrl+M) for easy copy/paste
- Vim mode, keybindings, diff viewer
- Splash screen with ASCII branding
- 5 Permission Modes: Ask (safe default), Auto Edit (auto-approve edits), Don't Ask (allow most), Bypass All (YOLO), Plan (read-only)
- Interactive mode picker at startup with runtime switching via `/permissions`
- Permission pre-flight for team orchestration: validates permissions before dispatching work
- CLI shortcuts: `--yolo` and `--auto-edit` flags
- Bash command safety analysis (24 dangerous patterns)
- Rule-based permission system with glob matching and priority (deny > ask > allow)
- Cost ceiling and step limits (`/cost`) with real-time warnings when approaching thresholds
- OAuth PKCE authentication with keychain storage
- SSRF guard on HTTP hooks
- `--debug` flag with centralized structured logging
- E2E testing harness: tmux-based test runner for full TUI integration tests
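Rule priority can be sketched with `path.Match` globs and deny > ask > allow ordering; the rule shape and defaults below are illustrative, not Oculus's actual rule syntax:

```go
package main

import (
	"fmt"
	"path"
)

// Verdicts ordered by priority: deny beats ask beats allow.
const (
	Allow = iota
	Ask
	Deny
)

// Rule pairs a glob pattern with a verdict.
type Rule struct {
	Pattern string
	Verdict int
}

// evaluate returns the highest-priority verdict among matching rules;
// commands matching no rule default to Ask, the safe choice.
func evaluate(rules []Rule, target string) int {
	verdict := Ask
	matched := false
	for _, r := range rules {
		if ok, _ := path.Match(r.Pattern, target); ok {
			if !matched || r.Verdict > verdict {
				verdict = r.Verdict
			}
			matched = true
		}
	}
	return verdict
}

func main() {
	rules := []Rule{
		{"git *", Allow},
		{"git push*", Ask},
		{"rm *", Deny},
	}
	fmt.Println(evaluate(rules, "git status"))  // 0 (Allow)
	fmt.Println(evaluate(rules, "rm -rf data")) // 2 (Deny)
}
```

Note that `path.Match`'s `*` does not cross `/` separators, so a production glob library may match more broadly; the priority logic is the part that carries over.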
- Plugins: Loader, manager, marketplace integration
- Skills: Extensible skill system with loader
- Hooks: Lifecycle hooks with HTTP and UI integration
- MCP: Full Model Context Protocol client support
Config directory: `~/.oculus/` (auto-migrates from `~/.claude/`)
Create project-specific instructions:
```markdown
# Project Instructions

- Use pnpm, not npm
- Run tests with `go test ./...`
- Follow conventional commits
```

Example settings:

```json
{
  "model": "gemini-2.5-flash",
  "defaultMode": "default",
  "providers": {
    "litellm": { "baseURL": "http://localhost:4000/v1", "apiKeyEnv": "LITELLM_API_KEY" },
    "bedrock": { "region": "us-east-1", "endpointFamily": "runtime", "apiKeyEnv": "BEDROCK_API_KEY" }
  },
  "lenses": {
    "focus": { "model": "claude-opus-4-6", "provider": "anthropic" },
    "scan": { "model": "gemini-2.5-flash", "provider": "gemini" },
    "craft": { "model": "codex", "provider": "codex" }
  }
}
```

Project layout:

```
cmd/oculus/          CLI entry point (cobra)
pkg/
├── api/             Anthropic streaming client (retry, errors, provider)
├── auth/            OAuth PKCE + keychain + onboarding + trust
├── bridge/          Multi-provider abstraction (anthropic, openai-compatible, litellm, gemini, cli, ollama)
├── commands/        50 slash commands (builtins, login, resume)
├── config/          Settings, model defs, env, validation, cache, merge
├── constants/       Models, limits, OAuth, shared constants
├── context/         Git context, OCULUS.md/CLAUDE.md loader, system prompt (with cache control)
├── hooks/           Hook engine (lifecycle, HTTP, settings, UI)
├── lens/            Focus/Scan/Craft routing (manager, router, worker)
├── memdir/          In-memory directory abstraction
├── migrations/      Config migration helpers
├── orchestration/   Ralph loop, consensus planning, ultrawork, agent personas
├── plugins/         Plugin loader, manager, marketplace
├── query/           Conversation loop + tool dispatch engine (session tracking)
├── services/        16 service packages:
│   ├── analytics, agentsummary, bridge, compact, cost, episodes,
│   │   history, mcp, memory, notifications, plugins, policylimits,
│   │   ratelimit, sessions, tokencount, toolsummary
├── skills/          Skill loader
├── state/           State store + selectors
├── task/            Background task management
├── tool/            Tool registry + dispatch
├── tools/           32 tool implementations (each in own package)
├── tui/             Bubbletea v2 terminal UI (36+ files)
│   ├── model, init, update, view, messages, handlers
│   ├── permission, permission_mode_picker, permissions_runtime
│   ├── statusbar, contextbar, taskpanel, viewport, progress
│   ├── input, autocomplete, keybindings, vim, markdown
│   ├── toolcall, diff, modelpicker, settings, splash, helpbar, treeview
│   └── components/
├── types/           Shared types (messages, tools, permissions, hooks, IDs)
└── utils/           Utility packages:
    └── array, bash, claudemd, filestate, format, git,
        log, messages, permissions, platform, theme
```
Full docs: howlerops.github.io/oculus
MIT © HowlerOps