CursorControlFlow

An opinionated, gate-driven AI workflow for Cursor — plan with confidence, execute in parallel, test intelligently.

CursorControlFlow is a set of Cursor skills and slash commands that give you a structured, reviewable workflow for building features with AI. Instead of one long conversation that drifts, you get a gated planning phase, parallel execution waves with automatic code review, and a test runner that adapts to your project's stack.

Design principles

Why gates? The three gates in /create-plan exist because AI-assisted planning fails in predictable ways: assumptions are made silently, requirements are misunderstood early, and solutions are proposed before the problem is understood. Gate A forces clarification. Gate B forces comparison of at least three approaches. Gate C ensures reviewer feedback reaches the user, not just the planning agent.

Why waves? Tasks that don't depend on each other should run in parallel. Wave execution builds a dependency graph and launches independent tasks simultaneously — cutting total execution time on any plan with parallelizable work.

Why a reviewer subagent? A second independent AI reviewing the plan catches things the planning agent missed: vague steps, missing validations, assumptions not surfaced. Using a separate subagent with readonly access prevents the planning agent from reviewing its own work.

Why conditional tests? Forcing TDD on a project with no test structure creates noise, not value. The workflow detects whether a test structure exists before writing tests. If none is found, it falls back to build checks and linting.

Skills

create-plan

File: .cursor/skills/create-plan/SKILL.md

The planning orchestrator. Guides the AI through three phases before any implementation begins:

  1. Clarification — asks questions and waits for answers before designing anything
  2. Approaches — presents 3 or more distinct options with tradeoffs for the user to choose from
  3. Task decomposition — breaks the chosen approach into dependency-ordered, parallelizable tasks with explicit steps, contracts, and runnable validations

Each plan draft goes through a plan-reviewer subagent before being written to disk. The user must explicitly approve the plan file before execution can begin.
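A decomposed task in the saved plan file might look roughly like this (a hypothetical shape for illustration only; see `.cursor/skills/create-plan/reference.md` for the actual format):

```markdown
## T2: Add JWT verification middleware
Depends on: T1
Steps:
- Add a `verifyToken` middleware under `src/auth/`
Contracts:
- Requests without a valid token receive HTTP 401
Validation:
- `npm test -- auth` passes
```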

execute-plan

File: .cursor/skills/execute-plan/SKILL.md

The execution orchestrator. Reads an approved plan from .cursor/plans/, builds a dependency graph, and runs tasks in parallel waves:

  • Each ready task runs as an independent Task subagent
  • After each wave, a wave-code-reviewer subagent reviews the implementation
  • Failed validations trigger a fix loop (up to 10 rounds per task)
  • After all tasks pass, test-plan runs as the final step
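The wave mechanics above can be sketched as repeatedly computing a ready set over the dependency graph. A minimal illustration (the task IDs and the `deps` shape are made up, not CursorControlFlow's internal representation):

```python
# Sketch of wave batching: tasks whose dependencies have all passed
# run together in one parallel wave. Task IDs here are hypothetical.
def waves(deps):
    """deps maps task id -> set of task ids it depends on."""
    done, order = set(), []
    while len(done) < len(deps):
        ready = [t for t, d in deps.items() if t not in done and d <= done]
        if not ready:
            raise ValueError("dependency cycle")
        order.append(sorted(ready))  # one parallel wave
        done.update(ready)
    return order

plan = {"T1": set(), "T2": set(), "T3": {"T1", "T2"}, "T4": {"T3"}}
print(waves(plan))  # [['T1', 'T2'], ['T3'], ['T4']]
```

T1 and T2 share no dependencies, so they run in the same wave; T3 waits for both, and T4 waits for T3.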

test-plan

File: .cursor/skills/test-plan/SKILL.md

The test runner. Discovers your project's test stack at runtime — no configuration needed:

  • Detects Node/npm, .NET, Python/pytest, Go, Playwright, and other frameworks by inspecting config files and directories
  • Runs all discovered suites; auto-fixes failures up to 3 rounds per suite
  • In plan-aware mode, checks that the plan's manual validation scenarios are covered by the tests that ran
  • Falls back to build checks and linting when no test suite is found
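Conceptually, the discovery step boils down to checking for well-known marker files. A simplified sketch with an abridged signal list (not the skill's actual implementation):

```python
from pathlib import Path

# Map well-known marker files to a framework label. Abridged list;
# the real skill inspects more signals than these.
SIGNALS = [
    ("package.json", "Node.js"),
    ("pytest.ini", "Python/pytest"),
    ("go.mod", "Go"),
    ("Makefile", "Make"),
]

def detect(root="."):
    root = Path(root)
    found = [name for marker, name in SIGNALS if (root / marker).exists()]
    # Some signals are globs rather than fixed filenames.
    if any(root.glob("*.sln")):
        found.append(".NET")
    if any(root.glob("playwright.config.*")):
        found.append("Playwright")
    return found or ["fallback: build + lint"]
```

A project can match several signals at once (for example Node.js plus Playwright), in which case every discovered suite runs.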

plan-reviewer

File: .cursor/skills/plan-reviewer/SKILL.md

A read-only reviewer subagent spawned by create-plan after each draft. Scores the plan on eight criteria (requirement coverage, assumption hygiene, step explicitness, validation runnability, and others) and returns blocking issues and numbered change requests. Never edits files — returns a review artifact only.

wave-code-reviewer

File: .cursor/skills/wave-code-reviewer/SKILL.md

A read-only reviewer subagent spawned by execute-plan after each wave completes. Reviews the executor's implementation against the task's Steps, Contracts, and Validation bullets. Returns REQUIRED and SUGGESTED change requests. Capped at 3 reviewer-executor rounds per wave.

setup-project

File: .cursor/skills/setup-project/SKILL.md

Captures project-specific context and writes it to .cursor/project.md. All CursorControlFlow skills read this file before acting — eliminating redundant discovery and ensuring skills follow the project's established patterns, test commands, and conventions.

The skill walks through four sections sequentially, auto-discovering what it can before asking for confirmation:

  1. Structure — package manager, monorepo layout, key directories, file naming, generated file locations
  2. Run and debug — dev server, build, ports, environment setup, post-edit side effects
  3. Tests — unit / integration / E2E commands, prerequisites, single-file run syntax, CI behavior, known-skipped tests
  4. Patterns — active patterns new code must follow, in-progress transitions (with executor directive), deprecated patterns to avoid

If .cursor/project.md already exists, the skill asks which section(s) to update and leaves the rest intact.
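A captured `.cursor/project.md` might look roughly like this (illustrative content only, not a required schema; see `schema.md` in the skill folder for the actual layout):

```markdown
## Structure
- pnpm monorepo; apps in `apps/`, shared libraries in `packages/`

## Run and debug
- Dev server: `pnpm dev` (port 3000)

## Tests
- Unit: `pnpm test`; E2E: `pnpm e2e` (requires dev server running)

## Patterns
- Active: server actions for mutations
- Deprecated: REST handlers under `pages/api/`
```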

Workflow

Full end-to-end flow

```mermaid
flowchart TD
    START([User describes a feature or task]) --> B

    subgraph PLAN [create-plan]
        B[Gate A - Clarification questions] --> C[User answers]
        C --> D[Gate B - 3 or more approaches]
        D --> E[User selects approach]
        E --> F[Task decomposition - steps, contracts, validations]
        F --> G[plan-reviewer scores draft]
        G --> H{Score >= 9?}
        H -- No, REVISE --> F
        H -- Yes, PASS --> I[Plan saved to .cursor/plans/]
        I --> J[User approves plan file]
    end

    J --> K

    subgraph EXEC [execute-plan]
        K[Parse tasks and build dependency graph] --> L
        L[Wave - parallel task subagents] --> M[wave-code-reviewer]
        M --> N{PASS?}
        N -- REVISE --> L
        N -- PASS --> O{More waves?}
        O -- Yes --> L
        O -- No --> P[Integration validation]
        P --> Q[test-plan subagent]
    end

    subgraph TEST [test-plan]
        Q --> R[Discover test stack]
        R --> S[Run all suites]
        S --> T{All pass?}
        T -- Fail, rounds left --> U[Fix and retry]
        U --> S
        T -- Pass or max rounds --> V[Summary report]
    end

    V --> DONE([Done])
```

create-plan detail

```mermaid
flowchart TD
    A([create-plan invoked]) --> B[Clarifying questions]
    B --> C[User answers or waives with assumptions]
    C --> D[Read-only codebase discovery]
    D --> E[Present 3 or more approaches with tradeoffs]
    E --> F[User picks approach]
    F --> G[Follow-up approach-specific and implementation questions]
    G --> H[Summarize understanding - user confirms]
    H --> I[Decompose into tasks with IDs, steps, contracts, validations]
    I --> J[Spawn plan-reviewer subagent]
    J --> K{Gate C - missed user questions?}
    K -- Yes --> L[Ask user and incorporate answers]
    L --> I
    K -- No --> M{Score >= 9?}
    M -- No, REVISE --> I
    M -- Yes, PASS --> N[Write plan file with PENDING approval block]
    N --> O[User reviews and approves]
    O --> P[Update approval block in plan file]
    P --> Q[Call CreatePlan tool and sync file]
    Q --> R([Plan ready for execute-plan])
```

execute-plan wave detail

```mermaid
flowchart TD
    A([Plan loaded and approved]) --> B[Parse task dependency graph]
    B --> C[Compute ready set - tasks with all deps PASS]
    C --> D[Spawn one Task subagent per ready task in parallel]
    D --> E[Subagent implements, writes tests if applicable, runs validations]
    E --> F{All validations pass?}
    F -- No, rounds left --> G[Fix and re-run]
    G --> F
    F -- Yes, PASS --> H[Collect wave results]
    F -- Max rounds, FAIL --> H
    H --> I[Spawn wave-code-reviewer]
    I --> J{PASS?}
    J -- REVISE --> D
    J -- PASS --> K{More tasks ready?}
    K -- Yes --> C
    K -- No, all done --> L[Integration validation]
    L --> M[Spawn test-plan subagent]
    M --> N([Execution complete])
```

Installation

Copy the .cursor/ folder into your project root:

```
your-project/
└── .cursor/
    ├── commands/
    │   ├── create-plan.md
    │   ├── execute-plan.md
    │   ├── setup-project.md
    │   └── test-plan.md
    └── skills/
        ├── create-plan/
        │   ├── SKILL.md
        │   └── reference.md
        ├── execute-plan/
        │   ├── SKILL.md
        │   └── reference.md
        ├── plan-reviewer/
        │   └── SKILL.md
        ├── test-plan/
        │   └── SKILL.md
        ├── wave-code-reviewer/
        │   └── SKILL.md
        └── setup-project/
            ├── SKILL.md
            └── schema.md
```

Cursor automatically discovers skills and commands in .cursor/ — no additional configuration required.

Requirement: Cursor agent mode must be available in your Cursor version. Skills and slash commands require agent-capable sessions.

Commands

/create-plan

Start a new planning session. Invoke it with a description of what you want to build:

/create-plan I need to add user authentication with JWT tokens

The workflow asks clarifying questions, presents approaches, decomposes tasks, and runs a reviewer before saving the plan.

/execute-plan

Execute an approved plan file:

/execute-plan .cursor/plans/add-auth.md

The command shows the plan title and first task, asks for confirmation, then begins wave execution.

/test-plan

Run the full test suite (discovers your stack automatically):

/test-plan

Or provide a plan file to also check plan scenario coverage:

/test-plan .cursor/plans/add-auth.md

/setup-project

Capture project context for skills to use:

/setup-project

Walks through four sections (Structure, Run and debug, Tests, Patterns), auto-discovers what it can, then confirms and fills gaps. Writes .cursor/project.md. Re-running asks which section(s) to update.

Configuring for your stack

Automatic stack detection

test-plan discovers your stack at runtime — no configuration file needed. Detection signals:

| Signal | Framework detected |
| --- | --- |
| `package.json` with `test` script | Node.js (Jest, Vitest, Mocha, and others) |
| `*.sln` or `*.csproj` with xUnit/NUnit/MSTest | .NET |
| `pytest.ini` or `pyproject.toml` with `[tool.pytest]` | Python / pytest |
| `go.mod` with `*_test.go` files | Go |
| `playwright.config.*` | Playwright E2E |
| `Makefile` with `test` target | Make-based projects |

If your project uses a non-standard command, describe the validation steps in your plan's final task (T-FINAL). test-plan reads those steps in plan-aware mode and uses them as its guide.
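For example, a final task describing a custom validation command might look like this (hypothetical content):

```markdown
## T-FINAL: Integration validation
Steps:
- Run `./scripts/run-tests.sh --all` and confirm exit code 0
- Verify the build artifact appears under `dist/`
```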

Project context file (recommended)

Run /setup-project once after installing CursorControlFlow to capture your project's context in .cursor/project.md. All skills read this file before acting — it eliminates redundant discovery, keeps executors on the right patterns, and gives reviewers concrete rules to enforce.

| Section | What skills use it for |
| --- | --- |
| `## Patterns` (Active / Transitioning / Deprecated) | Executors follow active patterns; reviewers flag violations; planners stay aware of in-progress refactors |
| `## Run and debug` (commands, ports, post-edit side effects) | Executors run correct commands and apply required post-edit steps |
| `## Tests` (unit / integration / E2E commands, known skipped) | test-plan uses declared commands and skips known-broken tests |
| `## Structure` (directories, file naming, generated files) | Planners focus searches; executors place new files correctly |

All sections are optional. Skills fall back to runtime discovery for any missing section and note the fallback in their output.

Plans directory

Approved plans are saved to .cursor/plans/. Keep them in version control — they document intent and the approval record.

```
.cursor/plans/
├── add-auth.md
├── refactor-api.md
└── ...
```

License

MIT
