feat(strategies): convert rapid, bdd, discuss to vBRIEF-centric outputs (#363, #365, #366) #377
Conversation
Greptile Summary

Confidence Score: 5/5. Safe to merge: all findings are P2 test-quality suggestions with no correctness or data-integrity impact. All three strategy documents are consistently updated to vBRIEF-centric outputs, cross-references within the files are correct, and the 12 new content tests pass against the current state. The two flagged items are both P2: a broad string assertion in one test, and class-level eager file reads that would obscure errors at collection time. Neither blocks merge.

Important Files Changed

- tests/content/test_strategy_outputs.py — minor test quality improvements suggested (assertion specificity and import-time file reads)
Flowchart

```mermaid
%%{init: {'theme': 'neutral'}}%%
flowchart TD
    subgraph rapid ["rapid strategy"]
        R1["Step 1: State Goal\n→ vbrief/specification.vbrief.json"]
        R2["Step 2: Minimal Interview"]
        R3["Step 3: Generate specification.vbrief.json\n(draft status)"]
        R_RENDER["task spec:render\n→ SPECIFICATION.md (read-only export)"]
        R4["Step 4: Build"]
        R5["Step 5: Evaluate"]
        R1 --> R2 --> R3 --> R_RENDER --> R4 --> R5
    end
    subgraph bdd ["bdd strategy"]
        B1["Step 1: Identify Scenarios\n→ tests/ (standard dir)"]
        B2["Step 2: Write Failing Tests\n→ tests/"]
        B3["Step 3: Run Tests — Surface Ambiguity"]
        B4["Step 4: Lock Decisions\n→ vbrief/proposed/{feature}-bdd.vbrief.json\n Scenarios + LockedDecisions narratives"]
        B5["Step 5: Generate Spec"]
        B6["Step 6: Chain → interview.md"]
        B1 --> B2 --> B3 --> B4 --> B5 --> B6
    end
    subgraph discuss ["discuss strategy"]
        D1["Open / Explore / Challenge Vagueness"]
        D2["Lock Decisions\n→ vbrief/proposed/{scope}-context.vbrief.json\n LockedDecisions narrative (MUST)"]
        D3["Verify (Feynman check)"]
        D4["Chain → interview.md Chaining Gate"]
        D1 --> D2 --> D3 --> D4
    end
    B6 --> interview["interview.md Chaining Gate"]
    D4 --> interview
    R5 -->|"graduate"| interview
    style R_RENDER fill:#fef08a,stroke:#ca8a04,color:#000
    style B4 fill:#6ee7b7,stroke:#059669,color:#000
    style D2 fill:#6ee7b7,stroke:#059669,color:#000
    style R3 fill:#6ee7b7,stroke:#059669,color:#000
```
This is a comment left during a code review.
Path: tests/content/test_strategy_outputs.py
Line: 102-105
Comment:
**Overly broad assertion — passes on section headings, not the narrative key**
`assert "Scenarios" in self._text` matches the word wherever it appears (e.g. `"### Step 1: Identify User Scenarios"`), so the test would still pass even if the backtick-wrapped `` `Scenarios` `` vBRIEF narrative key was removed. A tighter check on the narrative definition prevents a false-passing regression.
```suggestion
    def test_scenarios_narrative(self) -> None:
        assert "`Scenarios`" in self._text, (
            "strategies/bdd.md must reference Scenarios narrative in vBRIEF"
        )
```
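The false pass the reviewer describes is easy to reproduce. A minimal sketch, using a hypothetical snippet standing in for strategies/bdd.md (the document text below is invented for illustration):

```python
# A hypothetical fragment of strategies/bdd.md where the backtick-wrapped
# `Scenarios` narrative key has been removed, but a heading still contains
# the bare word "Scenarios".
doc_without_key = "### Step 1: Identify User Scenarios\n"

# Broad check: passes even though the narrative key definition is gone.
assert "Scenarios" in doc_without_key

# Tighter check: correctly fails on the same text.
assert "`Scenarios`" not in doc_without_key

# A document that actually defines the narrative key satisfies the tight check.
doc_with_key = doc_without_key + "Write the `Scenarios` narrative to the vBRIEF file.\n"
assert "`Scenarios`" in doc_with_key
```

The backtick-wrapped form only matches inline-code references to the key, which is what the suggested assertion relies on.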
How can I resolve this? If you propose a fix, please make it concise.
---
This is a comment left during a code review.
Path: tests/content/test_strategy_outputs.py
Line: 31-34
Comment:
**Class-level reads fail at collection time, not test time**
`_text = _read("strategies/rapid.md")` (and the equivalent in the other two classes) runs at module import. If any strategy file is missing or renamed, pytest raises a `FileNotFoundError` during collection, reporting a single import error rather than three individual test failures — making it harder to tell which file caused the problem.
Consider deferring the read to a `@classmethod` setup or a module-level fixture:
```python
@pytest.fixture(scope="class", autouse=True)
def text(self, request):
    request.cls._text = _read("strategies/rapid.md")
```
Or keep it simple with a lazy property — the key is that a missing file surfaces as a named test failure rather than an opaque collection error.
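The lazy-property variant can be sketched as below. This is an assumption about shape, not the project's actual test code: the `StrategyText` class name is hypothetical, and the point is only that the read happens on first access rather than at import.

```python
import functools
from pathlib import Path


class StrategyText:
    """Sketch: defer the file read until a test actually touches .text.

    The class name and path handling here are hypothetical; the key property
    is that a missing file raises inside the first test that reads .text,
    not during pytest collection.
    """

    def __init__(self, relpath: str) -> None:
        self._relpath = relpath

    @functools.cached_property
    def text(self) -> str:
        # First access reads and caches; a FileNotFoundError surfaces here,
        # attributed to a named test, rather than at module import.
        return Path(self._relpath).read_text(encoding="utf-8")


# Constructing the object is always safe, even if the file is missing...
rapid = StrategyText("strategies/rapid.md")

# ...the error, if any, only appears when a test reads the property.
try:
    _ = rapid.text
except FileNotFoundError as exc:
    pass  # in a real test this would show up as that test's failure
```

`functools.cached_property` keeps the one-read-per-class behavior of the original class attribute while moving the failure point to test time.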
How can I resolve this? If you propose a fix, please make it concise.

Reviews (3): Last reviewed commit: "fix: address Greptile review findings (b..."
…ts (#363, #365, #366)

- rapid.md Step 3: write vbrief/specification.vbrief.json (Light path, draft status) instead of hand-authoring SPECIFICATION.md; run task spec:render for read-only export; update output artifacts to list vBRIEF as primary
- bdd.md Step 4: write locked decisions to vbrief/proposed/{feature}-bdd.vbrief.json with Scenarios and LockedDecisions narratives; Steps 1/2: direct test files to the project test directory, not specs/; eliminate {feature}-bdd-context.md and the specs/ folder as BDD outputs
- discuss.md: output to vbrief/proposed/{scope}-context.vbrief.json with LockedDecisions narrative; eliminate {scope}-context.md; promote SHOULD to MUST for vBRIEF persistence; update chaining-gate artifact registration
- Add 12 content tests in tests/content/test_strategy_outputs.py covering all three strategies' vBRIEF output requirements
- rapid.md Step 1: fix contradiction where the goal was recorded in SPECIFICATION.md directly; now references vbrief/specification.vbrief.json instead (P1)
- Add test_step1_does_not_direct_write_to_specification_md to catch Step 1 vs Step 3 contradiction regression
- MCP unavailable in this session -- used gh api fallback for review comments
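The content tests described above can be sketched as follows. The `_read` helper and the assertion strings are assumptions modeled on the review excerpts, not the merged test file, and the sample document is invented for illustration:

```python
from pathlib import Path


def _read(relpath: str) -> str:
    # Hypothetical helper mirroring the one referenced in the review comments.
    return Path(relpath).read_text(encoding="utf-8")


def check_rapid_writes_vbrief_spec(text: str) -> None:
    # Step 3 must name the vBRIEF file as the primary artifact...
    assert "vbrief/specification.vbrief.json" in text
    # ...and SPECIFICATION.md only as the rendered read-only export.
    assert "task spec:render" in text


# Invented stand-in for the relevant lines of strategies/rapid.md.
sample = (
    "### Step 3: Generate specification.vbrief.json\n"
    "Write vbrief/specification.vbrief.json (draft status), then run\n"
    "task spec:render to export a read-only SPECIFICATION.md.\n"
)
check_rapid_writes_vbrief_spec(sample)
```

String-containment checks like these are cheap regression guards for prose requirements, with the caveat flagged in the review: assert on the most specific form of the requirement, not a bare word.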
Force-pushed 37bf197 to 3c1f709

Rebase-only force-push (no logic changes): rebased onto updated phase2/vbrief-cutover after PR #376 merge.
Summary
Convert rapid, bdd, and discuss strategies to vBRIEF-centric outputs, eliminating hand-authored markdown artifacts in favor of structured vBRIEF JSON files.
Closes #363, Closes #365, Closes #366
Parent: #338
Changes
rapid.md (#363)
bdd.md (#365)
discuss.md (#366)
Tests
Checklist