Merged
912 changes: 406 additions & 506 deletions .claude/AGENTS.md

Large diffs are not rendered by default.

377 changes: 309 additions & 68 deletions .claude/agents/docs-vision.md

Large diffs are not rendered by default.

186 changes: 135 additions & 51 deletions .claude/agents/expert.md
@@ -19,19 +19,21 @@ Domain expert for SQLSpec implementation. Handles all technical work: core devel

## Implementation Workflow

Codex or Gemini CLI can emulate this workflow without the `/implement` command. When prompted to “run the implementation phase” for a workspace, either assistant must follow every step below, then continue with the Testing and Docs & Vision sequences described in their respective agent guides. Always read the active workspace in `specs/active/{requirement}/` (or `requirements/{requirement}/` if legacy) before making changes. Claude should rely on `/implement` unless explicitly directed to operate manually.
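
As a small illustration of the workspace convention above, the helper below sketches how an assistant might resolve the workspace path with the legacy fallback. It is a hypothetical utility, not part of SQLSpec or its tooling.

```python
# Hypothetical helper: resolve the workspace directory described above.
from pathlib import Path


def resolve_workspace(requirement: str) -> Path:
    """Prefer specs/active/{requirement}/, falling back to the legacy requirements/ layout."""
    primary = Path("specs/active") / requirement
    legacy = Path("requirements") / requirement
    if primary.exists():
        return primary
    if legacy.exists():
        return legacy
    raise FileNotFoundError(f"No workspace found for requirement {requirement!r}")
```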

### Step 1: Read the Plan

Always start by understanding the full scope:

```python
# Read PRD from workspace
Read("requirements/{requirement}/prd.md")
Read("specs/active/{requirement}/prd.md")

# Check tasks list
Read("requirements/{requirement}/tasks.md")
Read("specs/active/{requirement}/tasks.md")

# Review research findings
Read("requirements/{requirement}/research/plan.md")
Read("specs/active/{requirement}/research/plan.md")
```

### Step 2: Research Implementation Details
@@ -207,26 +209,105 @@ make lint
make fix
```

### Step 6: Update Workspace
### Step 6: Auto-Invoke Testing Agent (MANDATORY)

After implementation is complete, automatically invoke the Testing agent:

```python
# Invoke Testing agent as subagent
Task(
subagent_type="testing",
description="Create comprehensive test suite",
prompt=f"""
Create comprehensive tests for the implemented feature in specs/active/{requirement}.

Requirements:
1. Read specs/active/{requirement}/prd.md for acceptance criteria
2. Read specs/active/{requirement}/recovery.md for implementation details
3. Create unit tests for all new functionality
4. Create integration tests for all affected adapters
5. Test edge cases (empty results, errors, boundaries)
6. Achieve >80% coverage
7. Update specs/active/{requirement}/tasks.md marking test phase complete
8. Update specs/active/{requirement}/recovery.md with test results

All tests must pass before returning control to the Expert agent.
"""
)
```

### Step 7: Auto-Invoke Docs & Vision Agent (MANDATORY)

After tests pass, automatically invoke the Docs & Vision agent:

```python
# Invoke Docs & Vision agent as subagent
Task(
subagent_type="docs-vision",
description="Documentation, quality gate, knowledge capture, and archive",
prompt=f"""
Complete the documentation, quality gate, knowledge capture, and archival process for specs/active/{requirement}.

Phase 1 - Documentation:
1. Read specs/active/{requirement}/prd.md for feature details
2. Update project documentation (Sphinx)
3. Create/update guides in docs/guides/
4. Validate code examples work
5. Build documentation without errors

Phase 2 - Quality Gate:
1. Verify all PRD acceptance criteria met
2. Verify all tests passing
3. Check code standards compliance (AGENTS.md)
4. BLOCK if any criteria not met

Phase 3 - Knowledge Capture:
1. Analyze implementation for new patterns
2. Extract best practices and conventions
3. Update AGENTS.md with new patterns
4. Update relevant guides in docs/guides/
5. Document patterns with working examples

Phase 4 - Re-validation:
1. Re-run tests after documentation updates
2. Rebuild documentation to verify no errors
3. Check pattern consistency across project
4. Verify no breaking changes introduced
5. BLOCK if re-validation fails

Phase 5 - Cleanup & Archive:
1. Remove all tmp/ files
2. Move specs/active/{requirement} to specs/archive/
3. Generate completion report

Return comprehensive completion summary when done.
"""
)
```

### Step 8: Update Workspace

Track progress in `requirements/{requirement}/`:
Track progress in `specs/active/{requirement}/`:

```markdown
# In tasks.md, mark completed items:
- [x] 2. Core implementation
- [x] 3. Adapter-specific code
- [ ] 4. Testing ← UPDATE THIS
- [x] 4. Testing (via Testing agent)
- [x] 5. Documentation (via Docs & Vision agent)
- [x] 6. Knowledge Capture (via Docs & Vision agent)
- [x] 7. Archived (via Docs & Vision agent)
```

```markdown
# In recovery.md, update status:
## Current Status
Status: Testing
Last updated: 2025-10-09
Status: Complete - archived
Last updated: 2025-10-19

## Next Steps
- Complete integration tests for asyncpg
- Add test for edge case: empty result set
## Final Summary
Implementation, testing, documentation, and knowledge capture complete.
Spec archived to specs/archive/{requirement}/
```

## Database Adapter Implementation
@@ -339,45 +420,42 @@ mcp__zen__debug(
# Continue until root cause found...
```

## Handoff to Testing Agent

When implementation complete:

1. **Mark tasks complete:**

```markdown
- [x] 2. Core implementation
- [x] 3. Adapter-specific code
- [ ] 4. Testing ← HAND OFF TO TESTING AGENT
```

2. **Update recovery.md:**

```markdown
## Current Status
Status: Ready for testing
Files modified:
- sqlspec/adapters/asyncpg/driver.py
- sqlspec/core/result.py

## Next Steps
Testing agent should:
- Add unit tests for new methods
- Add integration tests for asyncpg
- Verify edge cases handled
```

3. **Notify user:**

```
Implementation complete!

Modified files:
- [sqlspec/adapters/asyncpg/driver.py](sqlspec/adapters/asyncpg/driver.py#L42-L67)
- [sqlspec/core/result.py](sqlspec/core/result.py#L123)

Next: Invoke Testing agent to create comprehensive tests.
```

## Automated Workflow

The Expert agent orchestrates a complete workflow:

```
┌─────────────────────────────────────────────────────────────┐
│ EXPERT AGENT │
│ │
│ 1. Read Plan & Research │
│ 2. Implement Feature │
│ 3. Self-Test & Verify │
│ 4. ──► Auto-Invoke Testing Agent (subagent) │
│ │ │
│ ├─► Create unit tests │
│ ├─► Create integration tests │
│ ├─► Test edge cases │
│ └─► Verify coverage & all tests pass │
│ 5. ──► Auto-Invoke Docs & Vision Agent (subagent) │
│ │ │
│ ├─► Update documentation │
│ ├─► Quality gate validation │
│ ├─► Update AGENTS.md with new patterns │
│ ├─► Update guides with new patterns │
│ ├─► Re-validate (tests, docs, consistency) │
│ ├─► Clean tmp/ and archive │
│ └─► Generate completion report │
│ 6. Return Complete Summary │
└─────────────────────────────────────────────────────────────┘
```

**IMPORTANT**: The Expert agent MUST NOT mark implementation complete until:

1. Testing agent confirms all tests pass
2. Docs & Vision agent confirms quality gate passed
3. Docs & Vision agent confirms knowledge captured in AGENTS.md and guides
4. Spec is properly archived to specs/archive/
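
The checklist above can be verified mechanically. As an illustration only, the sketch below assumes the workspace conventions described in this guide (tasks.md checkboxes, a final status line in recovery.md, and archival to specs/archive/); the helper itself is hypothetical, not an existing SQLSpec API.

```python
# Illustrative gate check; paths and conventions are assumptions taken from this guide.
from pathlib import Path


def implementation_gate_passed(requirement: str) -> bool:
    """Return True only when the archived workspace shows the full workflow finished."""
    active = Path("specs/active") / requirement
    archived = Path("specs/archive") / requirement

    # The spec must have been moved out of specs/active/ and into specs/archive/.
    if active.exists() or not archived.exists():
        return False

    # Every task in tasks.md must be checked off ("- [ ]" marks unfinished work).
    tasks = (archived / "tasks.md").read_text(encoding="utf-8")
    if "- [ ]" in tasks:
        return False

    # recovery.md should carry the final status written during archival.
    recovery = (archived / "recovery.md").read_text(encoding="utf-8")
    return "Status: Complete" in recovery
```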

## Tools Available

@@ -397,7 +475,7 @@
# User: "Implement connection pooling for asyncpg"

# 1. Read plan
Read("requirements/asyncpg-pooling/prd.md")
Read("specs/active/asyncpg-pooling/prd.md")

# 2. Research
Read("docs/guides/adapters/postgres.md")
@@ -413,11 +491,14 @@ Edit(
new_string="pool = await asyncpg.create_pool(**pool_config)"
)

# 4. Test
# 4. Test locally
Bash(command="uv run pytest tests/integration/test_adapters/test_asyncpg/ -v")

# 5. Update workspace
Edit(file_path="requirements/asyncpg-pooling/tasks.md", ...)
# 5. Auto-invoke Testing agent (creates comprehensive tests)
Task(subagent_type="testing", description="Create test suite", prompt=...)

# 6. Auto-invoke Docs & Vision agent (docs, QA, knowledge, archive)
Task(subagent_type="docs-vision", description="Complete workflow", prompt=...)
```

## Success Criteria
@@ -427,4 +508,7 @@ Edit(file_path="requirements/asyncpg-pooling/tasks.md", ...)
✅ **Tests pass** - `make lint` and `make test` pass
✅ **Performance considered** - SQLglot and mypyc patterns followed
✅ **Workspace updated** - tasks.md and recovery.md current
✅ **Clean handoff** - Next agent (Testing/Docs) can resume easily
✅ **Testing agent invoked** - Tests created and passing
✅ **Docs & Vision invoked** - Documentation, quality gate, knowledge capture, and archive complete
✅ **Spec archived** - Moved to specs/archive/
✅ **Knowledge captured** - AGENTS.md and guides updated with new patterns
14 changes: 8 additions & 6 deletions .claude/agents/planner.md
@@ -14,10 +14,12 @@ Strategic planning agent for SQLSpec development. Creates research-grounded, mul
1. **Research-Grounded Planning** - Consult guides, docs, and best practices before planning
2. **Multi-Session Planning** - Use zen planner for structured, resumable plans
3. **Consensus Verification** - Get multi-model agreement on complex decisions
4. **Session Continuity** - Produce detailed artifacts in `requirements/` workspace
4. **Session Continuity** - Produce detailed artifacts in `specs/active/` workspace

## Planning Workflow

Codex or Gemini CLI can mirror this workflow without using `/plan`. When either assistant is asked to “plan {feature}”, it must follow every step below, create or update the workspace at `specs/active/{requirement}/` (fallback `requirements/{requirement}/`), and generate the same artifacts the Planner agent would produce. Claude should continue to rely on the `/plan` command unless instructed otherwise.

### Step 1: Understand Requirements

```python
@@ -111,10 +113,10 @@ mcp__zen__consensus(

### Step 5: Create Workspace Artifacts

Create requirement folder in `requirements/`:
Create requirement folder in `specs/active/`:

```bash
mkdir -p requirements/{requirement-slug}/{research,tmp}
mkdir -p specs/active/{requirement-slug}/{research,tmp}
```

**Required files:**
@@ -230,14 +232,14 @@ After planning complete:
1. **Verify workspace created**:

```bash
ls -la requirements/{requirement-slug}/
ls -la specs/active/{requirement-slug}/
# Should show: prd.md, tasks.md, research/, tmp/, recovery.md
```

2. **Notify user**:

```
Planning complete! Workspace created at `requirements/{requirement-slug}/`.
Planning complete! Workspace created at `specs/active/{requirement-slug}/`.

Next: Invoke Expert agent to begin implementation.
```
@@ -290,6 +292,6 @@ mcp__zen__planner(
✅ **Research complete** - All relevant guides consulted
✅ **Plan structured** - Zen planner workflow used
✅ **Decisions verified** - Consensus on complex choices
✅ **Workspace created** - `requirements/{requirement}/` fully populated
✅ **Workspace created** - `specs/active/{requirement}/` fully populated
✅ **Resumable** - recovery.md enables session continuity
✅ **Standards followed** - CLAUDE.md patterns enforced
31 changes: 19 additions & 12 deletions .claude/agents/testing.md
@@ -19,14 +19,16 @@ Comprehensive testing specialist for SQLSpec. Creates pytest-based unit and inte

## Testing Workflow

Codex or Gemini CLI can execute this workflow directly. When prompted to “perform the testing phase” for a workspace, either assistant must read the existing plan, follow every step below, and produce the same artifacts and coverage validation that the Testing agent would return. Claude should continue to use `/test` unless instructed otherwise.

### Step 1: Read Implementation

Understand what needs testing:

```python
# Read workspace
Read("requirements/{requirement}/prd.md")
Read("requirements/{requirement}/tasks.md")
Read("specs/active/{requirement}/prd.md")
Read("specs/active/{requirement}/tasks.md")

# Read implementation
Read("sqlspec/adapters/asyncpg/driver.py")
@@ -290,7 +292,7 @@ uv run pytest -n 2 --dist=loadgroup
**Mark testing complete:**

```markdown
# In requirements/{requirement}/tasks.md:
# In specs/active/{requirement}/tasks.md:
- [x] 3. Adapter-specific code
- [x] 4. Testing ← JUST COMPLETED
- [ ] 5. Documentation ← HAND OFF TO DOCS & VISION
Expand All @@ -308,10 +310,12 @@ Tests added:
All tests passing ✅

## Next Steps
Documentation agent should:
Docs & Vision agent (auto-invoked by Expert) should:
- Update adapter documentation
- Add usage examples
- Update API reference
- Run quality gate
- Capture new testing patterns in AGENTS.md
- Update docs/guides/ with patterns
- Re-validate and archive
```

## Test Organization
@@ -404,7 +408,7 @@ open htmlcov/index.html
assert result is not None
```

## Handoff to Docs & Vision
## Return to Expert Agent

When testing complete:

Expand All @@ -418,25 +422,28 @@ When testing complete:
2. **Update workspace:**

```markdown
# In specs/active/{requirement}/tasks.md:
- [x] 4. Testing
- [ ] 5. Documentation ← HAND OFF TO DOCS & VISION
- [ ] 5. Documentation ← EXPERT WILL AUTO-INVOKE DOCS & VISION
```

3. **Notify user:**
3. **Return to Expert with summary:**

```
Testing complete! ✅

Tests added:
- [tests/integration/test_adapters/test_asyncpg/test_connection.py](tests/integration/test_adapters/test_asyncpg/test_connection.py)
- [tests/unit/test_core/test_statement.py](tests/unit/test_core/test_statement.py)
- [tests/integration/test_adapters/test_asyncpg/test_connection.py](tests/integration/test_adapters/test_asyncpg/test_connection.py) (5 tests)
- [tests/unit/test_core/test_statement.py](tests/unit/test_core/test_statement.py) (3 tests)

Coverage: 87% (target: 80%+)
All tests passing: ✅

Next: Invoke Docs & Vision agent for documentation and final review.
Expert agent will now auto-invoke Docs & Vision for documentation, quality gate, knowledge capture, and archival.
```

**Note**: This agent is typically invoked automatically by the Expert agent. It returns control to the Expert agent, which then auto-invokes the Docs & Vision agent.

## Tools Available

- **Context7** - Library documentation (pytest, pytest-asyncio, pytest-databases)