My personalized OpenCode configuration.
- `opencode.json` — Main OpenCode configuration file. Defines MCP servers, permissions, and settings.
- `AGENTS.md` — Global agent instructions (e.g., Context7 MCP usage guidelines).
- `commands/review-pr.md` — Custom `review-pr` command for handling PR checks and review feedback.
- `package.json` — Dependencies required by this config (e.g., `@opencode-ai/plugin`, `@slkiser/opencode-quota`).
- `plugins/opencode-voice/` — Vendored `@renjfk/opencode-voice` with Linux audio fixes applied.
- `tui.json` — TUI-specific configuration (e.g., voice plugin).
- Clone this repo to your machine.
- Symlink or copy the files into your OpenCode config directory (usually `~/.config/opencode/`):

  ```sh
  ln -s $(pwd)/opencode.json ~/.config/opencode/opencode.json
  ln -s $(pwd)/AGENTS.md ~/.config/opencode/AGENTS.md
  ln -s $(pwd)/commands ~/.config/opencode/commands
  ```
- Update `opencode.json` to replace machine-specific paths with paths valid on your system.
Note: The `excalidraw` MCP server in `opencode.json` contains absolute paths specific to the original machine. Update the `command` array paths (the `node` binary and the `mcp_excalidraw` script) to match your environment.
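For orientation, the entry has roughly this shape (the paths below are placeholders, not the real values from this repo; the surrounding keys follow OpenCode's local MCP config schema):

```json
{
  "mcp": {
    "excalidraw": {
      "type": "local",
      "command": [
        "/path/to/your/node",
        "/path/to/mcp_excalidraw/entrypoint.js"
      ]
    }
  }
}
```

Swap in the absolute paths to your own `node` binary and your checkout of the `mcp_excalidraw` script.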
Quota, usage, and token visibility for OpenCode with zero context window pollution.
- Shows popup quota toasts after assistant responses
- Adds TUI sidebar panel with quota rows
- Provides `/quota`, `/quota_status`, and `/tokens_*` commands
- Supports providers: GitHub Copilot, OpenAI, Cursor, Anthropic, and more
After syncing this config and installing dependencies, restart OpenCode and run `/quota_status` to verify.
Speech-to-text and text-to-speech plugin for OpenCode. This repo includes a vendored copy under `plugins/opencode-voice/` with patches for Linux audio device enumeration and recording (the upstream only supports macOS CoreAudio).
- STT: Record voice prompts with local Whisper transcription, normalized by an LLM
- TTS: Hear assistant responses spoken aloud via Piper TTS
- Keybinds: `ctrl+r` to record, `ctrl+x` then `s` to speak last response, `ctrl+x` then `v` to toggle auto TTS
The plugin requires system-level binaries that are not installed by npm:
- `sox` — for audio recording and playback
- `whisper-cli` — for local speech-to-text transcription
- `piper` — for local text-to-speech synthesis
- Voice models — Whisper and Piper voice model files
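A quick sanity check (a minimal sketch, assuming the three binaries should be discoverable on your `PATH`):

```sh
# Report which of the required voice binaries are already installed
for bin in sox whisper-cli piper; do
  if command -v "$bin" >/dev/null 2>&1; then
    echo "found: $bin"
  else
    echo "missing: $bin"
  fi
done
```

Anything reported as missing must be installed manually or via the setup script below.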
A convenience script is provided to automate this on Debian/Ubuntu:

```sh
./scripts/setup-voice.sh
```

Note: The script builds `whisper-cli` from source and downloads models (~650 MB total). It also requires `uv` (or `pipx`) for installing Piper.
The `tui.json` in this repo configures the voice plugin with Anthropic as the normalization LLM. Ensure `ANTHROPIC_API_KEY` is set in your environment. You can change the endpoint/model in `tui.json` to any OpenAI-compatible API (Ollama, vLLM, etc.).
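As a rough illustration of such a swap, pointing normalization at a local Ollama server might look like this (the key names here are illustrative guesses, not the plugin's actual schema; check the vendored plugin's README in `plugins/opencode-voice/` for the exact fields):

```json
{
  "voice": {
    "normalization": {
      "baseUrl": "http://localhost:11434/v1",
      "model": "llama3",
      "apiKey": "ollama"
    }
  }
}
```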
This repository is configured to never commit sensitive data:
- `.env` files are ignored.
- No API keys, tokens, or credentials are stored in tracked files.
- Environment variables in `opencode.json` are limited to non-sensitive values (e.g., `EXPRESS_SERVER_URL=http://localhost:3000`).
If you add MCP servers that require secrets, use environment variables or a local `.env` file (which is gitignored) rather than hardcoding values.
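For example, a secret-bearing server entry can reference the environment instead of a literal value. A sketch (the server name, package, and variable names are hypothetical; the `environment` key and `{env:VAR}` substitution follow OpenCode's config conventions as I understand them):

```json
{
  "mcp": {
    "my-secret-server": {
      "type": "local",
      "command": ["npx", "-y", "some-mcp-server"],
      "environment": {
        "API_TOKEN": "{env:MY_API_TOKEN}"
      }
    }
  }
}
```

The actual token then lives only in your shell environment or a gitignored `.env` file, never in a tracked file.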