An example OpenCode setup repo that captures the workflow I am using right now.
It is intentionally small:
- my four named agents (`wit`, `golem`, `kimi`, `max`)
- the spec and review skills I lean on most
- a minimal `opencode.jsonc`
- docs for how I run OpenCode as a lightweight always-on service on my Macs
This is not a "one true way" to use OpenCode. It is just a practical, shareable version of the flow that has been working well for me.
```
.
├── README.md
├── assistant
├── opencode.jsonc
└── skills/
    ├── change-review/
    │   └── SKILL.md
    ├── repo-constitution/
    │   └── SKILL.md
    ├── oc-self-prompt/
    │   └── SKILL.md
    ├── spec-consensus/
    │   └── SKILL.md
    ├── spec-freeze/
    │   └── SKILL.md
    ├── spec-writer/
    │   └── SKILL.md
    └── swarm/
        └── SKILL.md
```
This repo is a cleaned-up example of my setup, not a universal starter template.
- The model IDs in `opencode.jsonc` reflect what I am using right now. If you do not have access to those exact providers or models, swap them for equivalents you do have.
- The MCP entries assume you are comfortable with `npx`, `uvx`, and a remote Context7 MCP.
- Keep the service inside your tailnet unless you have a very explicit reason to expose it more broadly.
My primary agent is `wit` (GPT 5.4).
The usual flow looks like this:
- `wit` is my primary agent.
- `wit` asks `wit`, `golem`, `kimi`, and `max` to independently draft specs using `spec-writer` and `repo-constitution`.
- `wit` compares the outputs, then uses `spec-consensus` to produce a single implementation-ready spec.
- All agents review the spec.
- `wit` iterates until the spec is boring, narrow, and testable.
- I do a final human review of the spec.
- `wit` implements the frozen spec.
- At the end, all agents run `change-review` on the patch.
- `wit` iterates on feedback until validations pass.
- I do one more final review before commit or merge.
In practice, this gives me a pretty good balance of speed and skepticism. I get multiple drafts early, tighter contracts before code, and a second wave of pressure-testing after implementation.
My real prompts are usually dense operating briefs, not neat little template snippets.
They tend to include:
- exact agent responsibilities, including who can implement vs who should only review
- explicit artifact paths for specs or requirements docs
- hard scope and out-of-scope boundaries
- repeat-until-done loops for review and verification
- a final human checkpoint before destructive actions like deletions or commits
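A compressed, made-up sketch of what one of those briefs can look like. Everything here is a placeholder (the agent duties, the spec path, the scope lines), not one of my real prompts:

```text
Agents: wit coordinates and implements; golem, kimi, max review only.
Spec artifact: docs/specs/export-endpoint.md  (placeholder path)
In scope: the new export endpoint and its tests.
Out of scope: auth changes, schema migrations, unrelated refactors.
Loop: run change-review after every patch; repeat until lint,
typecheck, and tests pass and all reviewers approve.
Stop: pause for my review before any deletion, commit, or merge.
```

The point is that stop conditions and scope boundaries are written down up front, not negotiated mid-run.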
The broad pattern is still the same:
- have multiple agents produce spec input independently
- use one pass to form consensus and freeze contracts
- implement narrowly against the frozen spec
- run multi-agent change review
- iterate until tests, lint, typecheck, formatting, and approvals all pass
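The last step above is literally a loop. In shell terms the shape is roughly this; the `run_validations` body is a stand-in for the repo's real lint, typecheck, and test commands, and here it just pretends to pass on the third attempt so the sketch runs on its own:

```shell
#!/bin/sh
# Sketch of the repeat-until-done shape; not the actual agent loop.
attempts=0
run_validations() {
  # Placeholder: swap in real lint, typecheck, and test commands.
  attempts=$((attempts + 1))
  [ "$attempts" -ge 3 ]  # pretend everything passes on the third try
}

until run_validations; do
  echo "pass $attempts failed; feeding review comments back in"
done
echo "validations green after $attempts passes"
```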
In other words, I do not usually talk to OpenCode like I am filling out a form. I give it a fairly opinionated operating brief and make the stop conditions explicit.
- `wit` is the coordinator, but I still use it as one of the review/spec voices too.
- Multiple model passes help surface disagreements before code exists.
- `spec-consensus` keeps the flow from turning into mushy averaged output.
- `repo-constitution` and `AGENTS.md` keep the work anchored to small diffs and stable contracts.
- `change-review` at the end catches scope creep that can sneak in during implementation.
I run OpenCode as a lightweight local service on macOS, then reach it remotely over Tailscale.
The launcher script I use locally lives at `~/.local/bin/assistant` and does a few useful things:
- starts `opencode serve` with `nohup` if it is not already running
- binds to `ASSISTANT_HOSTNAME` or `localhost` by default
- binds to `ASSISTANT_PORT` or `4096` by default
- stores named session mappings in `~/.local/share/assistant/sessions.json`
- lets me resume a named session in the directory where it was created
- supports `restart`, `stop`, `status`, `sessions`, and `forget`
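The start-if-needed part of that behavior can be sketched like this. This is a minimal sketch, not the actual script in this repo; the variable names and `pgrep` check are illustrative, only the defaults mirror the documented behavior:

```shell
#!/bin/sh
# Minimal sketch of the launcher's start logic (illustrative only).
HOST="${ASSISTANT_HOSTNAME:-localhost}"
PORT="${ASSISTANT_PORT:-4096}"

start_if_needed() {
  # Only launch a new server if one is not already running.
  if pgrep -f "opencode serve" > /dev/null 2>&1; then
    echo "opencode serve already running"
  else
    nohup opencode serve --hostname "$HOST" --port "$PORT" \
      > /tmp/opencode.log 2>&1 &
    echo "started opencode serve on $HOST:$PORT"
  fi
}
```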
The exact launcher script is included in this repo as `assistant`.
I usually symlink or copy that repo version into `~/.local/bin/assistant`, then use that wrapper instead of calling `opencode serve` or `opencode attach` manually.
Core command in the script:

```sh
nohup opencode serve --hostname "$HOSTNAME" --port "$PORT" > /tmp/opencode.log 2>&1 &
```

Start or attach normally:

```sh
assistant
```

Create or resume a named session for the current repo:

```sh
assistant -s my-task
assistant -s my-task --new
```

Check service state:

```sh
assistant status
assistant restart
assistant sessions
```

My setup is simple:
- I run this on Macs that stay awake, usually a Mac mini and a MacBook Pro
- Tailscale gives me a stable private network and an easy way to hook the service up to a custom domain
- I point OpenCode at that hostname so I can attach from other devices without exposing it publicly
If you want to copy the pattern:
- Install Tailscale on the Macs that will host OpenCode.
- Make sure those machines do not sleep when you want the service available.
- Pick a hostname, MagicDNS name, or domain that resolves inside your tailnet.
- Set `ASSISTANT_HOSTNAME` and `ASSISTANT_PORT` if you want something other than the script defaults.
- Start or attach through `assistant` instead of manually juggling `serve` and `attach`.
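Concretely, on the host Mac that means something like this; the hostname below is a placeholder for whatever MagicDNS name or domain resolves inside your tailnet:

```shell
#!/bin/sh
# Placeholder tailnet hostname; substitute your own MagicDNS name.
export ASSISTANT_HOSTNAME="my-mac.example.ts.net"
export ASSISTANT_PORT=4096

# Then start or attach through the wrapper, e.g.:
#   assistant -s my-task
```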
If you use a custom domain on top of Tailscale, I strongly recommend keeping the path tailnet-only unless you have explicit auth in front of it.
My practical rule of thumb:
- safest: MagicDNS or a Tailscale IP inside the tailnet only
- okay with care: Tailscale Serve plus HTTPS inside the tailnet
- highest risk: any public exposure path without an auth layer
Do not point this at `0.0.0.0` and treat it like a public web app unless you have deliberately designed for that.
Once the service is up behind Tailscale, I use it in two ways:
- the web UI for general usage, monitoring, and jumping back into active work
- a mobile terminal when I want to attach directly to a named session with `assistant -s`
That second path is especially nice when I already know the session I want:
```sh
assistant -s repo/feat/build-the-thing
assistant -s repo/feat/build-the-thing --new
```

Because the script stores the session id plus working directory mapping, I can reconnect to the right session from a phone or another machine without remembering the raw session id.
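Conceptually, the mapping in `~/.local/share/assistant/sessions.json` looks something like this. The field names and values here are illustrative, not necessarily the script's exact format:

```json
{
  "repo/feat/build-the-thing": {
    "session_id": "ses_abc123",
    "cwd": "/Users/me/code/repo"
  }
}
```

The session name keys into both the session id and the directory, which is why resuming lands in the right place.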
For quick mobile access, I usually open the OpenCode page in Safari and add it to my home screen.
That gives me a lightweight app-like entry point for:
- checking in on long-running work
- resuming sessions from anywhere
- capturing ideas before I am back at a keyboard
If I want a terminal instead of the browser, I can also use a mobile terminal app over Tailscale and run `assistant -s <name>` directly.
It is a very low-friction setup if you already trust your Tailscale path and the host machine stays online.
- The shared config keeps the same model routing I use for `wit`, `golem`, `kimi`, and `max`.
- The included skills are the ones that shape the spec-first workflow the most.
- Those skills are still pretty WIP and were mostly based on light research plus iteration in real use; I would treat them as a starting point, not a polished final system.
- You should treat this as a base layer and adapt it to your repos, model access, and risk tolerance.