opencode-config

An example OpenCode setup repo that captures the workflow I am using right now.

It is intentionally small:

  • my four named agents (wit, golem, kimi, max)
  • the spec and review skills I lean on most
  • a minimal opencode.jsonc
  • docs for how I run OpenCode as a lightweight always-on service on my Macs
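The real opencode.jsonc lives in this repo; as a rough illustration of its shape, a minimal config might look like the sketch below. The agent names come from this README, but the model IDs are placeholders, not the ones I actually use — swap in providers and models you have access to.

```jsonc
{
  "$schema": "https://opencode.ai/config.json",
  // Agent names from this README; model IDs below are placeholders.
  "agent": {
    "wit":   { "model": "provider/model-a" },
    "golem": { "model": "provider/model-b" },
    "kimi":  { "model": "provider/model-c" },
    "max":   { "model": "provider/model-d" }
  }
}
```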

This is not a "one true way" to use OpenCode. It is just a practical, shareable version of the flow that has been working well for me.

What is in this repo

.
├── README.md
├── assistant
├── opencode.jsonc
└── skills/
    ├── change-review/
    │   └── SKILL.md
    ├── repo-constitution/
    │   └── SKILL.md
    ├── oc-self-prompt/
    │   └── SKILL.md
    ├── spec-consensus/
    │   └── SKILL.md
    ├── spec-freeze/
    │   └── SKILL.md
    ├── spec-writer/
    │   └── SKILL.md
    └── swarm/
        └── SKILL.md

Compatibility and safety notes

This repo is a cleaned-up example of my setup, not a universal starter template.

  • The model IDs in opencode.jsonc reflect what I am using right now. If you do not have access to those exact providers or models, swap them for equivalents you do have.
  • The MCP entries assume you are comfortable with npx, uvx, and a remote Context7 MCP.
  • Keep the service inside your tailnet unless you have a very explicit reason to expose it more broadly.

Included workflow

My primary agent is wit (GPT 5.4).

The usual flow looks like this:

  1. wit is my primary agent.
  2. wit asks wit, golem, kimi, and max to independently draft specs using spec-writer and repo-constitution.
  3. wit compares the outputs, then uses spec-consensus to produce a single implementation-ready spec.
  4. All agents review the spec. wit iterates until the spec is boring, narrow, and testable.
  5. I do a final human review of the spec.
  6. wit implements the frozen spec.
  7. At the end, all agents run change-review on the patch.
  8. wit iterates on feedback until validations pass.
  9. I do one more final review before commit or merge.

In practice, this gives me a pretty good balance of speed and skepticism. I get multiple drafts early, tighter contracts before code, and a second wave of pressure-testing after implementation.

How I actually prompt

My real prompts are usually dense operating briefs, not neat little template snippets.

They tend to include:

  • exact agent responsibilities, including who can implement vs who should only review
  • explicit artifact paths for specs or requirements docs
  • hard scope and out-of-scope boundaries
  • repeat-until-done loops for review and verification
  • a final human checkpoint before destructive actions like deletions or commits

The broad pattern is still the same:

  • have multiple agents produce spec input independently
  • use one pass to form consensus and freeze contracts
  • implement narrowly against the frozen spec
  • run multi-agent change review
  • iterate until tests, lint, typecheck, formatting, and approvals all pass

In other words, I do not usually talk to OpenCode like I am filling out a form. I give it a fairly opinionated operating brief and make the stop conditions explicit.

Why this setup works for me

  • wit is the coordinator, but I still use it as one of the review/spec voices too.
  • Multiple model passes help surface disagreements before code exists.
  • spec-consensus keeps the flow from turning into mushy averaged output.
  • repo-constitution and AGENTS.md keep the work anchored to small diffs and stable contracts.
  • change-review at the end catches scope creep that can sneak in during implementation.

Running OpenCode as an always-on assistant

I run OpenCode as a lightweight local service on macOS, then reach it remotely over Tailscale.

The launcher script I use locally lives at ~/.local/bin/assistant and does a few useful things:

  • starts opencode serve with nohup if it is not already running
  • binds to ASSISTANT_HOSTNAME or localhost by default
  • binds to ASSISTANT_PORT or 4096 by default
  • stores named session mappings in ~/.local/share/assistant/sessions.json
  • lets me resume a named session in the directory where it was created
  • supports restart, stop, status, sessions, and forget

The exact launcher script is included in this repo as assistant.

I usually symlink or copy that repo version into ~/.local/bin/assistant, then use that wrapper instead of calling opencode serve or opencode attach manually.
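For example, assuming you are inside a checkout of this repo and ~/.local/bin is on your PATH:

```shell
# Run from a checkout of this repo; ~/.local/bin must be on your PATH.
mkdir -p "$HOME/.local/bin"
ln -sf "$PWD/assistant" "$HOME/.local/bin/assistant"
```

After that, plain `assistant` invocations pick up the repo version.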

Core command in the script:

nohup opencode serve --hostname "$HOSTNAME" --port "$PORT" > /tmp/opencode.log 2>&1 &
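The surrounding startup logic can be sketched roughly like this. This is a simplified illustration of the behaviors listed above, not the actual script (which is included in the repo as assistant); the defaults match the README, while the pgrep check and state path are my assumptions.

```shell
#!/bin/sh
# Simplified sketch of the launcher logic -- the real script lives in
# this repo as `assistant`. Defaults match the README; the pgrep check
# and state directory are illustrative assumptions.
HOSTNAME="${ASSISTANT_HOSTNAME:-localhost}"
PORT="${ASSISTANT_PORT:-4096}"
STATE_DIR="$HOME/.local/share/assistant"

start_server() {
  # Only launch if an opencode serve process is not already running.
  if ! pgrep -f "opencode serve" >/dev/null 2>&1; then
    mkdir -p "$STATE_DIR"
    nohup opencode serve --hostname "$HOSTNAME" --port "$PORT" \
      > /tmp/opencode.log 2>&1 &
  fi
}

# Skip startup entirely when opencode is not installed.
if command -v opencode >/dev/null 2>&1; then
  start_server
fi
```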

How I use it

Start or attach normally:

assistant

Create or resume a named session for the current repo:

assistant -s my-task
assistant -s my-task --new

Check service state:

assistant status
assistant restart
assistant sessions

Remote access with Tailscale

My setup is simple:

  • I run this on Macs that stay awake, usually a Mac mini and a MacBook Pro
  • Tailscale gives me a stable private network and an easy way to hook the service up to a custom domain
  • I point OpenCode at that hostname so I can attach from other devices without exposing it publicly

If you want to copy the pattern:

  1. Install Tailscale on the Macs that will host OpenCode.
  2. Make sure those machines do not sleep when you want the service available.
  3. Pick a hostname, MagicDNS name, or domain that resolves inside your tailnet.
  4. Set ASSISTANT_HOSTNAME and ASSISTANT_PORT if you want something other than the script defaults.
  5. Start or attach through assistant instead of manually juggling serve and attach.
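Step 4 can be as simple as exporting the two variables before launching. The hostname below is a made-up MagicDNS name; substitute one that resolves inside your own tailnet.

```shell
# Hypothetical MagicDNS name -- replace with a host that resolves
# inside your tailnet.
export ASSISTANT_HOSTNAME="mac-mini.your-tailnet.ts.net"
export ASSISTANT_PORT=4096
```

Then run assistant as usual and it binds to those values instead of the defaults.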

If you use a custom domain on top of Tailscale, I strongly recommend keeping the path tailnet-only unless you have explicit auth in front of it.

My practical rule of thumb:

  1. safest: MagicDNS or a Tailscale IP inside the tailnet only
  2. okay with care: Tailscale Serve plus HTTPS inside the tailnet
  3. highest risk: any public exposure path without an auth layer

Do not point this at 0.0.0.0 and treat it like a public web app unless you have deliberately designed for that.

Web UI and mobile terminal usage

Once the service is up behind Tailscale, I use it in two ways:

  • the web UI for general usage, monitoring, and jumping back into active work
  • a mobile terminal when I want to attach directly to a named session with assistant -s

That second path is especially nice when I already know the session I want:

assistant -s repo/feat/build-the-thing
assistant -s repo/feat/build-the-thing --new

Because the script stores the session id plus working directory mapping, I can reconnect to the right session from a phone or another machine without remembering the raw session id.
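I have not documented the exact on-disk format here, but conceptually sessions.json maps a session name to an id plus working directory. The field names and values below are hypothetical, just to show the shape:

```json
{
  "repo/feat/build-the-thing": {
    "session_id": "ses_example123",
    "directory": "/Users/me/code/repo"
  }
}
```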

iPhone / iPad access

For quick mobile access, I usually open the OpenCode page in Safari and add it to my home screen.

That gives me a lightweight app-like entry point for:

  • checking in on long-running work
  • resuming sessions from anywhere
  • capturing ideas before I am back at a keyboard

If I want a terminal instead of the browser, I can also use a mobile terminal app over Tailscale and run assistant -s <name> directly.

It is a very low-friction setup if you already trust your Tailscale path and the host machine stays online.

Notes on the example config

  • The shared config keeps the same model routing I use for wit, golem, kimi, and max.
  • The included skills are the ones that shape the spec-first workflow the most.
  • Those skills are still works in progress, shaped mostly by light research and iteration in real use; treat them as a starting point, not a polished final system.
  • You should treat this as a base layer and adapt it to your repos, model access, and risk tolerance.
