Merged
97 changes: 97 additions & 0 deletions formats/implementation-plan.md
@@ -0,0 +1,97 @@
<!-- SPDX-License-Identifier: MIT -->
<!-- Copyright (c) PromptKit Contributors -->

---
name: implementation-plan
type: format
description: >
Output format for implementation and refactoring plans. Defines
section structure for task breakdown, dependency ordering, risk
assessment, and verification strategy.
produces: implementation-plan
---

# Format: Implementation Plan

The output MUST be a structured implementation plan with the following
sections in this exact order. Do not omit sections — if a section has no
content, state "None identified" with a brief justification.

## Document Structure

```markdown
# <Plan Title> — Implementation Plan

## 1. Overview
<1–3 paragraphs: what is being implemented or refactored, why,
and what the end state looks like. Include the goal, scope, and
any driving requirements or design documents.>

## 2. Current State
<Description of the starting point:
- What exists today (code, infrastructure, processes)
- What works and what doesn't
- Key assumptions about the current state

For greenfield projects, state "Greenfield — no existing implementation."
For refactoring, provide a behavioral summary of the current code.>

## 3. Prerequisites
<What must be true before work begins:
- Required documents (requirements, design)
- Environment setup
- Dependencies on other teams or systems
- Decisions that must be made first>

## 4. Plan

### Phase <N>: <Phase Name>

#### TASK-<NNN>: <Task Title>
- **Description**: <what to implement or change>
- **Requirements**: <REQ-IDs addressed, if available>
- **Dependencies**: <TASK-IDs that must complete first, or "None">
- **Acceptance Criteria**: <how to verify completion>
- **Complexity**: Small / Medium / Large
- **Risks**: <what could go wrong with this task>
- **Verification**: <how to confirm correctness after this task>
- **Rollback**: <how to undo this change if needed>

<Repeat for each task. Group tasks into phases representing
logical milestones or deliverables.>

## 5. Dependency Graph
<Text-based diagram (Mermaid, ASCII, or structured list) showing
task dependencies and the critical path. Identify which sequence
of dependent tasks determines the minimum time to completion.>

## 6. Risk Assessment
| Risk ID | Description | Likelihood | Impact | Mitigation |
|----------|-------------|--------------|--------------|------------|
| RISK-001 | ... | High/Med/Low | High/Med/Low | ... |

## 7. Verification Strategy
<How to confirm the plan is complete and correct:
- What tests should pass at each phase boundary
- Integration or end-to-end verification approach
- How to validate the final state matches the target>

## 8. Open Questions
<Decisions that need to be made before or during implementation.
For each: what is unknown, why it matters, and who can resolve it.>

## 9. Revision History
<Table: | Version | Date | Author | Changes |>
```
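
The critical-path requirement in Section 5 can be made concrete. The sketch below is a hypothetical helper, not part of the format itself: it treats the complexity estimate as a rough effort weight and finds the critical path as the longest chain through a topological ordering of the task graph. The example task IDs and weights are illustrative assumptions.

```python
# Sketch: compute the critical path of a TASK dependency graph.
# Complexity-to-weight mapping is an assumed effort proxy.
from graphlib import TopologicalSorter

WEIGHT = {"Small": 1, "Medium": 2, "Large": 3}

def critical_path(tasks):
    """tasks: {task_id: (complexity, [dependency_ids])}"""
    # static_order() yields dependencies before their dependents.
    order = TopologicalSorter(
        {tid: deps for tid, (_, deps) in tasks.items()}
    ).static_order()
    dist, prev = {}, {}
    for tid in order:
        complexity, deps = tasks[tid]
        # Longest path ending at this task.
        best = max(deps, key=lambda d: dist[d], default=None)
        dist[tid] = WEIGHT[complexity] + (dist[best] if best else 0)
        prev[tid] = best
    # Walk back from the task with the largest cumulative weight.
    end = max(dist, key=dist.get)
    path = []
    while end is not None:
        path.append(end)
        end = prev[end]
    return list(reversed(path))

plan = {
    "TASK-001": ("Small", []),
    "TASK-002": ("Large", ["TASK-001"]),
    "TASK-003": ("Medium", ["TASK-001"]),
    "TASK-004": ("Small", ["TASK-002", "TASK-003"]),
}
print(critical_path(plan))  # ['TASK-001', 'TASK-002', 'TASK-004']
```

The same traversal also answers the minimum-time question: the cumulative weight of the final task on the path is the length of the critical path.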

## Formatting Rules

- Tasks MUST be ordered by dependency, not by perceived importance.
- Every task MUST have acceptance criteria (how to know it is done).
- Every task MUST have a complexity estimate (Small / Medium / Large).
- The critical path MUST be identified in the dependency graph.
- Tasks MUST use stable identifiers: `TASK-<NNN>` with sequential numbering.
- Cross-references between tasks use the task ID
(e.g., "depends on TASK-003").
- Phases represent logical milestones — each phase should be
independently demonstrable or deployable where possible.
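
Most of these rules are mechanically checkable. As an illustration only (the in-memory task representation and the rule subset covered are assumptions, not part of the format), a parsed task list could be linted like this:

```python
# Sketch: lint a parsed task list against the formatting rules above.
# The dict-based task representation is a hypothetical choice.
import re

TASK_ID = re.compile(r"^TASK-\d{3}$")

def lint_tasks(tasks):
    """tasks: list of dicts with id, dependencies, acceptance, complexity."""
    errors = []
    ids = [t["id"] for t in tasks]
    for t in tasks:
        if not TASK_ID.match(t["id"]):
            errors.append(f'{t["id"]}: malformed task ID')
        for dep in t["dependencies"]:
            if dep not in ids:
                errors.append(f'{t["id"]}: unknown dependency {dep}')
            elif ids.index(dep) >= ids.index(t["id"]):
                # Dependency ordering rule: prerequisites come first.
                errors.append(f'{t["id"]}: listed before dependency {dep}')
        if not t.get("acceptance"):
            errors.append(f'{t["id"]}: missing acceptance criteria')
        if t.get("complexity") not in {"Small", "Medium", "Large"}:
            errors.append(f'{t["id"]}: invalid complexity')
    return errors

ok = lint_tasks([
    {"id": "TASK-001", "dependencies": [], "acceptance": "tests pass",
     "complexity": "Small"},
    {"id": "TASK-002", "dependencies": ["TASK-001"],
     "acceptance": "endpoint returns 200", "complexity": "Medium"},
])
print(ok)  # [] — no violations
```

A check of this shape could run as part of review, so that malformed IDs or out-of-order tasks are caught before the plan circulates.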
7 changes: 7 additions & 0 deletions templates/author-design-doc.md
@@ -73,3 +73,10 @@ requirements specified below.
- [ ] Security considerations section is populated
- [ ] Open questions are listed, not silently resolved
- [ ] No fabricated details — all unknowns marked with [UNKNOWN]

## Non-Goals

- Do NOT generate requirements — consume them as input.
- Do NOT implement the design — this is a specification document.
- Do NOT make technology choices the requirements do not mandate;
  record such choices as open questions instead.
8 changes: 8 additions & 0 deletions templates/author-requirements-doc.md
@@ -69,3 +69,11 @@ project or feature.
- [ ] Out-of-scope section is populated
- [ ] Assumptions are explicitly listed
- [ ] No fabricated details — all unknowns marked with [UNKNOWN]

## Non-Goals

- Do NOT produce a design document — focus on requirements only.
- Do NOT specify implementation approach or technology choices.
- Do NOT generate test cases — those belong in a validation plan.
- Do NOT resolve ambiguities silently — flag them in the
Pre-Authoring Analysis section for stakeholder review.
7 changes: 7 additions & 0 deletions templates/author-validation-plan.md
@@ -75,3 +75,10 @@ requirements are tested and verifiable.
- [ ] Every test case has a measurable expected result
- [ ] Coverage gaps are explicitly flagged
- [ ] Pass/fail criteria are defined at both test and aggregate level

## Non-Goals

- Do NOT implement or execute the tests — produce the plan only.
- Do NOT generate requirements — consume them as input.
- Do NOT test implementation details that are not tied to requirements.
- Do NOT expand scope beyond the provided requirements document.
12 changes: 12 additions & 0 deletions templates/investigate-bug.md
@@ -109,3 +109,15 @@ tailored to this specific investigation. The plan should:
5. **Report**: Produce the output according to the specified format.

This plan replaces ad-hoc exploration with systematic analysis.

## Quality Checklist

Before finalizing, verify:

- [ ] Every finding cites specific code evidence (file, line, function)
- [ ] Every finding has a severity rating with justification
- [ ] Root cause is identified, not just the proximate trigger
- [ ] Remediation recommendations are specific and actionable
- [ ] At least 3 findings have been re-verified against the source
- [ ] Coverage statement documents what was and was not examined
- [ ] No fabricated code paths or behaviors — unknowns marked with [UNKNOWN]
13 changes: 13 additions & 0 deletions templates/investigate-security.md
@@ -108,3 +108,16 @@ Before beginning analysis, produce a concrete step-by-step plan:
to each attack surface element.
4. **Rank**: Order findings by exploitability and impact.
5. **Report**: Produce the output according to the specified format.

## Quality Checklist

Before finalizing, verify:

- [ ] Every finding cites specific code evidence (file, line, function)
- [ ] Every finding has a severity rating with justification
- [ ] Confirmed vulnerabilities have concrete exploit scenarios
- [ ] Every finding rated High or Critical includes an attack scenario
- [ ] CWE identifiers are included where applicable
- [ ] At least 3 findings have been re-verified against the source
- [ ] Coverage statement documents what was and was not examined
- [ ] No fabricated vulnerabilities — unknowns marked with [UNKNOWN]
21 changes: 21 additions & 0 deletions templates/plan-implementation.md
@@ -107,3 +107,24 @@ down a project into actionable, ordered tasks.

6. **Flag risky tasks**: tasks with high uncertainty, external
dependencies, or novel technology that could cause delays.

## Non-Goals

- Do NOT implement any tasks — produce the plan only.
- Do NOT generate requirements or design — consume them as inputs.
- Do NOT estimate calendar time or assign tasks to specific people.
- Do NOT recommend technology choices unless directly relevant to
task decomposition.

## Quality Checklist

Before finalizing, verify:

- [ ] Every task has a unique TASK-ID
- [ ] Every task has acceptance criteria
- [ ] Every task has a complexity estimate (Small/Medium/Large)
- [ ] Dependencies between tasks are explicit (no implicit ordering)
- [ ] The critical path is identified
- [ ] Risk assessment covers at least the top 3 risks
- [ ] Requirements traceability is present (REQ-IDs mapped to tasks)
- [ ] No fabricated requirements — unknowns marked with [UNKNOWN]
20 changes: 20 additions & 0 deletions templates/plan-refactoring.md
@@ -101,3 +101,23 @@ existing code safely and incrementally.
6. **Prefer small, safe steps** over large, risky ones.
The ideal refactoring step changes structure without changing behavior
(or changes behavior without changing structure), never both at once.

## Non-Goals

- Do NOT perform the refactoring — produce the plan only.
- Do NOT redesign the architecture — focus on incremental improvement.
- Do NOT add new features as part of the refactoring plan.
- Do NOT assume callers, tests, or dependencies not shown in the
provided code.

## Quality Checklist

Before finalizing, verify:

- [ ] Every step is a self-contained, committable change
- [ ] Every step maintains existing behavior (unless explicitly stated)
- [ ] Every step has a verification method
- [ ] Every step has a rollback path
- [ ] Risks and mitigations are documented
- [ ] Current state analysis matches the provided code
- [ ] No fabricated code paths or behaviors — unknowns marked with [UNKNOWN]
12 changes: 12 additions & 0 deletions templates/review-code.md
@@ -103,3 +103,15 @@ following code.
directly called by or calls into the reviewed code.
- Do NOT comment on personal style preferences — focus on
correctness, safety, security, and maintainability.

## Quality Checklist

Before finalizing, verify:

- [ ] Every finding cites a specific code location
- [ ] Every finding has a severity rating (Critical/High/Medium/Low/Nit)
- [ ] Every finding includes a concrete fix suggestion
- [ ] Findings are ordered by severity
- [ ] At least 3 findings have been re-verified against the source
- [ ] Overall assessment (approve / approve with changes / request changes) is stated
- [ ] Top 3 most important items are identified in the summary