Opinionated workflows for Claude Code — from brainstorm to deploy.
Install · Commands · Workflows · Docs
Claude Code is already powerful. The real leverage, though, isn't the model — it's the harness around it: how you structure context, workflows, guardrails, and specialized personas so every session produces senior-quality work by default.
dog_stack packages that harness. It is a plugin that loads 5 core workflows, 57 slash commands, 19 specialized agents, 77 context-aware skills, and a set of rules and hooks that fire automatically — so you stop rewriting the same planning, review, and commit prompts every session.
It is built for anyone who works with Claude Code daily and has ever wished it would always plan before coding, always review before the PR, and always verify before saying "done."
Full index of everything included: docs/index.md
In any Claude Code session, run:
/plugin marketplace add rogeriojunior31/dog_stack
/plugin install dog@dog_stack
The first command registers this repo as a Claude Code marketplace. The second installs the dog plugin from it. Claude Code pulls the latest tagged release automatically — no Git, no symlinks, no path configuration.
To update later:
/plugin marketplace update dog_stack
For local development, clone the repo and install from the working tree:

git clone https://github.com/rogeriojunior31/dog_stack.git
cd dog_stack
/plugin marketplace add .
/plugin install dog@dog_stack
Editing files under the cloned directory takes effect on the next session reload.
Once installed, Claude Code loads every component automatically at the start of each session.
Detailed installation and troubleshooting: docs/installation.md
Three small examples that show how dog_stack feels day to day. Each one is a real command you can run after installing.
/dog:brainstorming add rate limiting to the API
Before we implement anything, I need to understand the scope:
1. Which endpoints need rate limiting — all public routes, or just auth?
2. What limit are you targeting — requests/second or requests/minute?
3. How should exceeded limits behave — 429 with retry-after, or queue?
(waits for your answers before proposing any approach)
Brainstorming never writes code. It asks one clarifying question at a time until the idea is a clear spec.
/dog:plan add a file upload endpoint
Requirements restated:
- POST /upload accepting multipart/form-data
- Store files in object storage, not local disk
- Return a signed URL to the uploaded file
Risks:
- File size limit — needs an explicit cap
- MIME type validation — avoid arbitrary uploads
- Virus scanning — out of scope?
Steps:
1. Add multipart handler to the router
2. Integrate object storage client
3. Return signed URL with expiration
Waiting for your confirmation before touching any code.
/dog:plan reformulates the request, surfaces risks, proposes steps, and waits. It never starts writing before you approve.
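To make the plan's risk items concrete, here is a minimal validation sketch in TypeScript. The cap, allowlist, and function names are illustrative assumptions, not values the plan prescribes:

```typescript
// Illustrative sketch of the plan's risk items: an explicit size cap and a
// MIME allowlist, checked before anything reaches object storage.
// All names and limits here are assumptions for the example.
const MAX_UPLOAD_BYTES = 10 * 1024 * 1024; // explicit cap, per the risk list
const ALLOWED_TYPES = new Set(["image/png", "image/jpeg", "application/pdf"]);

interface UploadCheck {
  ok: boolean;
  reason?: string;
}

function validateUpload(sizeBytes: number, mimeType: string): UploadCheck {
  if (sizeBytes > MAX_UPLOAD_BYTES) return { ok: false, reason: "file too large" };
  if (!ALLOWED_TYPES.has(mimeType)) return { ok: false, reason: "type not allowed" };
  return { ok: true };
}
```

The point of the plan gate is that details like these get decided before any handler is written.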
/dog:prp-prd → Problem-first PRD with clarifying questions
/dog:prp-plan → Implementation plan grounded in the codebase
/dog:prp-implement → Step-by-step execution with validation loops
/dog:review → Multi-axis review before PR
/dog:prp-pr → GitHub PR with change analysis and auto-push
Five commands, one coherent pipeline. You stay in the loop at every gate.
GIFs for each example coming soon.
Full reference for every command: docs/commands.md · docs/quick-reference.md
dog_stack is built around five opinionated workflows — one for each distinct phase of serious technical work. Learn these and you have the map.
- Use when: starting a serious feature that touches multiple files.
- Skip if: a one-line bug fix — use `/dog:plan` directly.
- Example: `/dog:prp-prd "add global rate limiting to the API"`
- Flow: `prp-prd` → `prp-plan` → `prp-implement` → `review` → `prp-pr`
Details: docs/workflows.md
- Use when: building code you need to trust — business logic, algorithms, anything where "it seems to work" isn't enough.
- Skip if: throwaway script or one-off migration.
- Example: `/dog:tdd "implement a retry decorator with exponential backoff"`
- Flow: `plan` → `tdd` (RED → GREEN → REFACTOR) → `verify` → `review`
Details: docs/workflows.md
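As a feel for what the example above converges on, here is a hypothetical sketch of a retry wrapper with exponential backoff. The name, signature, and defaults are illustrative, not what the TDD flow is guaranteed to produce:

```typescript
// Hypothetical result of the TDD example above: retry an async function,
// doubling the delay after each failed attempt, rethrowing the last error.
async function withRetry<T>(
  fn: () => Promise<T>,
  { attempts = 3, baseMs = 100 }: { attempts?: number; baseMs?: number } = {},
): Promise<T> {
  let lastError: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      if (i < attempts - 1) {
        // delay doubles each round: baseMs, 2*baseMs, 4*baseMs, ...
        await new Promise((resolve) => setTimeout(resolve, baseMs * 2 ** i));
      }
    }
  }
  throw lastError;
}
```

In the RED step, the first failing test would assert that `withRetry` re-invokes `fn` the expected number of times before rethrowing.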
- Use when: every change, before it lands on the main branch.
- Skip if: never.
- Example: `/dog:review` for local uncommitted changes, or `/dog:review-pr 123` for an open PR.
- Flow: detects language, then dispatches `code-reviewer`, `security-reviewer`, and the right language specialist in parallel.
Details: docs/workflows.md
- Use when: taking merged work all the way to production.
- Skip if: working on a private branch with no deploy target.
- Example: `/dog:ship` to bump version, run tests, and open the PR; `/dog:land` to merge, wait for CI, and verify canary health.
- Flow: `ship` → `land` → (optional) `shipping-and-launch` for critical releases.
Details: docs/workflows.md
- Use when: a bug whose cause isn't obvious after five minutes of reading the code.
- Skip if: the fix is clear — just fix it.
- Example: `/dog:deep-dive "users report intermittent 500s on upload after the last deploy"`
- Flow: `systematic-debugging` → `trace` (parallel competing hypotheses) → `deep-interview` to crystallize findings.
Details: docs/workflows.md
Some tasks don't fit a one-pass workflow — they have to iterate until a quality bar is met, or fan out across many independent subtasks at once. dog_stack ships five workflows built for exactly that.
Before reading the five, internalize one diagram. It explains why ultrawork is listed here even though it doesn't loop on its own:
autopilot ⊃ ralph ⊃ ultrawork
│ │ │
│ │ └── parallel execution engine (fans out, returns)
│ └── persistence loop (re-invokes itself until every story passes)
└── full autonomous lifecycle (expand → plan → execute → QA → validate)
`ultrawork` is the parallel engine. `ralph` wraps it to add persistence. `autopilot` wraps `ralph` to add the full lifecycle around it. When you reach for a "loop" command, you are almost always reaching for one of these three layers.
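The nesting can be modeled as plain functions. This is an illustrative sketch of the control flow only; every name and type here is an assumption, not the plugin's actual implementation:

```typescript
// Conceptual model of the layering, not dog_stack's real code.
type Verdict = { story: string; passes: boolean };

// ultrawork layer: fan out over independent tasks, return when all finish.
// One pass only; no retries.
async function ultrawork(
  tasks: string[],
  run: (task: string) => Promise<Verdict>,
): Promise<Verdict[]> {
  return Promise.all(tasks.map(run));
}

// ralph layer: persistence loop. Re-runs failing stories until every one
// reaches passes: true.
async function ralph(
  stories: string[],
  run: (task: string) => Promise<Verdict>,
): Promise<Verdict[]> {
  let pending = stories;
  const done: Verdict[] = [];
  while (pending.length > 0) {
    const results = await ultrawork(pending, run);
    done.push(...results.filter((r) => r.passes));
    pending = results.filter((r) => !r.passes).map((r) => r.story);
  }
  return done;
}

// autopilot layer (not shown): would wrap ralph with expansion and planning
// before it, and QA plus validation after it.
```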
- Use when: a PRD has N stories that must all reach a `passes: true` verdict from a reviewer agent; completion is non-negotiable.
- Skip if: you only need one pass — use `/dog:prp-implement` instead.
- Example: `/dog:ralph path/to/prd.md`
- Termination condition: every story in the PRD returns `passes: true` from the reviewer. The loop re-invokes itself against the same PRD until that universal-pass state is reached.
Details: docs/workflows.md
- Use when: you have multiple independent tasks that can run at the same time, and you want model-tier routing (Opus for hard tasks, Sonnet for routine, Haiku for trivial) to keep throughput high.
- Skip if: the tasks depend on each other — parallelism will deadlock or produce conflicts.
- Example: `/dog:ultrawork "refactor auth module, update docs, add rate-limit tests"`
- Termination condition: every parallel branch returns. `ultrawork` does not retry or persist on its own — it terminates once all fired agents finish. Wrap it in `ralph` or `autopilot` when you need "must complete" semantics.
Details: docs/workflows.md
- Use when: the work is too large for a single agent and breaks down naturally into 3-20 coordinated sub-tasks with shared context (large refactors, multi-module features).
- Skip if: the task fits a single agent — coordination overhead is not worth it.
- Example: `/dog:team "migrate the API from Express to Fastify"` or, for persistent execution, `/dog:team ralph "..."`
- Termination condition: the pipeline completes all stages: `team-plan` → `team-prd` → `team-exec` → `team-verify` → `team-fix`. With the `ralph` modifier, the fix stage re-enters verify until every sub-task passes.
Details: docs/workflows.md
- Use when: you want to take an idea all the way to verified code with zero intervention — expansion, planning, execution, QA, validation.
- Skip if: you want to stay in the loop at the planning or review gates; use the PRP workflow for supervised execution.
- Example: `/dog:autopilot "add global rate limiting to the API"`
- Termination condition: the validation stage returns a clean verdict from the multi-reviewer loop. QA retries up to 5 times; if it still fails, autopilot escalates instead of looping forever. Internally: Expansion → Planning → Execution (ralph + ultrawork) → QA (≤5x) → Validation.
Details: docs/workflows.md (Wf 20 — Multi-agent Orchestration)
- Use when: a medium-complexity feature where you want discipline gates baked into the flow — spec-first, build-with-TDD, review, simplify, ship.
- Skip if: a throwaway script, or when you already have a dedicated review flow like PRP.
- Example: start with `/dog:spec "feature"`, then walk through `/dog:build` → `/dog:tdd` → `/dog:review` → `/dog:code-simplify` → `/dog:ship`.
- Termination condition: each internal step passes its own validation gate. The sequence ends at `ship` when tests pass and the PR is opened. Any gate failure loops back to the step that owns the fix.
Details: docs/workflows.md (Wf 23 — Osmani Quality Loop)
Deeper dives: docs/workflows.md Wf 20 (Multi-agent Orchestration) and Wf 23 (Osmani Quality Loop).
A situation-first lookup. Scan for what you need, jump to the command.
| Situation | Command |
|---|---|
| I have a vague idea and need to clarify it | /dog:brainstorming |
| I want a structured spec before coding | /dog:spec |
| Serious feature, multiple files | /dog:prp-plan |
| Quick, tactical fix | /dog:plan |
| Multi-session or multi-PR project | /dog:blueprint |
| Situation | Command |
|---|---|
| TDD with upfront planning | /dog:tdd |
| Strict TDD cycle, no planning overhead | /dog:test-driven-development |
| One file at a time, validated between steps | /dog:incremental-implementation |
| Guided feature development grounded in the codebase | /dog:feature |
| Situation | Command |
|---|---|
| Review local uncommitted changes | /dog:review |
| Review an existing GitHub PR | /dog:review-pr <number> |
| Language-specific review | /dog:ts-review · /dog:py-review · /dog:go-review · /dog:rust-review |
| Security review | /dog:security |
| Verify before claiming done | /dog:verify |
| Situation | Command |
|---|---|
| Branch ready → open PR | /dog:ship |
| Merge → deploy → verify production | /dog:land |
| Generate a CI/CD pipeline for the stack | /dog:ci |
| Situation | Command |
|---|---|
| Disciplined debugging with root-cause focus | /dog:systematic-debugging |
| Hard bug — need causal investigation | /dog:deep-dive |
| Broken build or type errors | /dog:build-fix |
| Competing hypotheses, evidence-driven trace | /dog:trace |
57 commands in total, all documented with examples: docs/commands.md
| Component | Count | What it does | Catalog |
|---|---|---|---|
| Commands | 57 | /dog: slash commands that invoke workflows | docs/commands.md |
| Skills | 77 | Context-aware instructions loaded on demand | docs/skills.md |
| Agents | 19 | Specialized personas (reviewer, planner, architect, ...) | docs/agents.md |
| Rules | 30 | Permanent, language-aware conventions | docs/rules.md |
| Hooks | 13 | Lifecycle automation — fire without intervention | docs/hooks.md |
| Language | Runtime / Framework | Tests | Build |
|---|---|---|---|
| TypeScript | Bun, Node, Next.js, NestJS | Vitest, Jest | tsc, bun build |
| Python | FastAPI, Django | pytest | pip, uv |
| Go | stdlib, gin, echo | go test | go build |
| Rust | tokio, axum | cargo test | cargo build |
- Plan before code — PRP and `/dog:plan` are the rule, not the exception.
- Review before merge — every change passes through `/dog:review` before the PR.
- Playwright-first — any browser automation goes through Playwright; never `fetch`, `axios`, or Puppeteer.
- Bun by default — JS/TS projects use Bun unless stated otherwise.
- Conventional commits — commit messages follow the Conventional Commits standard.
- Minimal fix — `/dog:build-fix` and `type-fix` do the minimum; no opportunistic refactors.
- No Claude co-author — commits never include `Co-Authored-By: Claude` (blocked by a hook).
30 permanent rules that enforce these automatically: docs/rules.md
dog_stack did not come out of nowhere. It distills the best of a growing scene of projects that showed Claude Code can become much more when paired with a well-built harness. Credit where it's due:
- everything-claude-code — broad compilation of Claude Code practices and patterns.
- gstack — opinionated stack focused on professional dev workflows; direct inspiration for the PRP format.
- superpowers — demonstrated how reusable skills turn an ordinary session into senior-level collaboration.
- agency-agents — influenced the architecture of specialized agents with clear responsibilities.
- oh-my-claudecode — reference for packaging a plugin so it feels approachable out of the box.
- claude-code-best-practices — field guides that helped calibrate when to do what.
If dog_stack is useful to you, these are worth exploring too — each tackles the same problem from a different angle.
| Document | What it covers |
|---|---|
| docs/index.md | Index + "when to use what" map |
| docs/installation.md | Detailed installation and troubleshooting |
| docs/commands.md | Full reference for all 57 commands |
| docs/skills.md | Catalog of the 77 skills |
| docs/agents.md | The 19 agents and when to activate each |
| docs/rules.md | Permanent rules and their responsibilities |
| docs/hooks.md | Hooks and their lifecycle triggers |
| docs/workflows.md | 23 end-to-end workflows with diagrams |
| docs/quick-reference.md | Command cheatsheet |
MIT License. See LICENSE for the full text.
Issues, ideas, and contributions are welcome — open an issue on GitHub or submit a PR.
