stevenfitz008/ai-development-framework

AI Development Framework Guide

A comprehensive guide to AI-driven software development using Claude Code, agents, skills, and the PIV (Prime, Implement, Validate) methodology.

This framework demonstrates how to leverage AI agents for planning, implementing, testing, and validating software projects—transforming how you build applications from requirements to deployment.


Table of Contents

📘 CLAUDE.md - Complete Framework Guide (Start Here!)

  1. Overview
  2. Quick Start
  3. Framework Hierarchy
  4. Framework Components
  5. The PIV Loop
  6. Getting Started
  7. Project Workflow
  8. Skills (Commands)
  9. DevSecOps
  10. Agents & Subagents
  11. Configuration Settings
  12. Reference Documentation
  13. MCP Servers
  14. Testing Strategy
  15. Best Practices
  16. Example Project

Overview

This AI Development Framework enables you to:

  • Plan comprehensively with AI-assisted feature design and architecture
  • Implement systematically using step-by-step execution plans
  • Validate rigorously with automated testing, linting, and code review
  • Secure proactively with integrated DevSecOps scanning and compliance checks
  • Test thoroughly with AI-driven UI/UX testing using agent-browser
  • Iterate intelligently with root cause analysis and systematic fixes
  • Document automatically with brownfield codebase analysis and PRD generation

Core Philosophy

Human + AI Collaboration: You provide direction and review; AI handles implementation details, testing, validation, and security. The result is faster development with higher quality and built-in security.

Framework Highlights

  • 23 Skills across 6 categories (PIV Loop, Validation, Bug Fixing, Testing, DevSecOps, Utilities)
  • Brownfield Support - Reverse-engineer documentation from existing codebases
  • DevSecOps Suite - SAST, dependency scanning, secrets detection, container scanning, compliance
  • OpenShift Ready - Production deployment with comprehensive configs
  • GitLab Integration - MCP server for issues, MRs, CI/CD
  • UI/UX Review - Accessibility and design pattern validation
  • Agent-Browser - AI-driven UI testing (no Selenium/Playwright code)
  • Comprehensive Settings - 604 lines of configuration for complete customization

Quick Start

New Users: Start Here! 🚀

📘 Primary Guide: CLAUDE.md

Complete guide to using Claude Code with this framework:

  • Framework overview & architecture
  • Quick start guide (5 minutes)
  • All 23 skills documented
  • 5 complete workflows
  • Technology stack reference
  • Best practices & performance metrics
  • Troubleshooting & tips

📅 Latest: Claude Code Best Practices (January 2026)

🆕 .claude/docs/CLAUDE_BEST_PRACTICES_2026.md

Current best practices as of January 29, 2026:

  • What's new in 2026 (Sonnet 4.5, Opus 4.5, brownfield analysis)
  • Modern workflows (85-92% faster development)
  • Security-first approach (95% of vulnerabilities caught pre-commit)
  • Model selection guide (40-60% cost savings)
  • Migration guide from 2024-2025 practices

📖 Complete Commands Reference

.claude/docs/CLAUDE_COMMANDS.md

This document includes:

  • All 23 framework skills with examples
  • Common workflows (new feature, bug fix, deployment)
  • Tips & tricks for efficient development
  • Troubleshooting guide
  • MCP server setup
  • Quick reference card

Common Commands Cheat Sheet

# 🧠 Load project context
/core_piv_loop:prime

# 📋 Plan a new feature
/core_piv_loop:plan-feature "Add email notifications"

# ⚙️ Execute the plan
/core_piv_loop:execute

# ✅ Validate code
/validation:validate

# 🔒 Security scan
/devsecops:sast

# 📦 Commit changes
/commit

# 🏗️ Analyze existing codebase (brownfield)
/brownfield:analyze

See CLAUDE_COMMANDS.md for detailed documentation of all skills.


What is PIV?

PIV stands for Prime → Implement → Validate - the core methodology for systematic software development with AI assistance.

PIV Loop Diagram

┌────────┐    ┌───────────┐    ┌──────────┐
│ PRIME  │ -> │ IMPLEMENT │ -> │ VALIDATE │
└────────┘    └───────────┘    └──────────┘
    ^                                |
    └────────────────────────────────┘
              (Loop / Iterate)

1. Prime 🧠

Load context and understand what you're building

  • Read project structure and conventions
  • Load reference documentation
  • Understand tech stack and patterns
  • Identify critical files and dependencies

Command: /core_piv_loop:prime

What Prime Does:

  1. Detects project state (new, existing, or primed)
  2. Creates documentation (PRD, CLAUDE.md if missing)
  3. Analyzes structure using git and file system commands
  4. Reads core files:
    • CLAUDE.md (project instructions)
    • .claude/docs/PRD.md (requirements)
    • README.md (overview)
    • Reference docs in .claude/reference/ (backend/, frontend/, devops/, gitlab/, testing/)
  5. Identifies key files (entry points, configs, models)
  6. Checks current state (git status, recent commits, existing plans)
  7. Recommends next steps

Tools Used (No separate agents spawned):

  • Bash: Git commands, file listing, directory tree
  • Read: Documentation and source files
  • Glob: Pattern-based file discovery
  • Executed directly by Claude (not via subagents)

Example:

/core_piv_loop:prime
# Agent: Analyzing project structure...
#        ✓ Found CLAUDE.md and PRD.md
#        ✓ Read 5 reference documents
#        ✓ Analyzed 42 source files
#        ✓ Reviewed recent commits
#        Context loaded ✓
#        Ready to work on: Template Project
#
#        Next steps:
#        1. Continue implementing auth feature from .claude/plans/active/auth.md
#        2. Or run /plan-feature for new feature

2. Implement ⚙️

Execute plans systematically, step by step

Has two sub-phases:

  • Plan: Design the implementation approach (before coding)
  • Execute: Implement the plan step-by-step

Commands:

  • /core_piv_loop:plan-feature "Feature description"
  • /core_piv_loop:execute

2a. Plan Sub-Phase

What Plan Does:

  1. Understands the feature (problem, value, complexity)
  2. Gathers codebase intelligence:
    • Analyzes project structure and patterns
    • Searches for similar implementations
    • Identifies dependencies and integration points
    • Reviews testing patterns
  3. Researches externally (documentation, best practices)
  4. Designs implementation (step-by-step plan)
  5. Creates validation strategy (tests, checks)
  6. Writes plan document to .claude/plans/
  7. Requests your approval before execution

Agents Used:

  • Plan Agent (main): Read-only analysis, no code changes
  • Explore Agent (spawned): Fast codebase pattern searches
  • General-Purpose Agent (spawned): External research, documentation
  • Can spawn multiple agents in parallel for comprehensive analysis

2b. Execute Sub-Phase

What Execute Does:

  1. Reads approved plan from .claude/plans/
  2. Implements step-by-step:
    • Creates/modifies files per plan
    • Follows documented patterns
    • Adds tests as specified
  3. Validates each step before proceeding
  4. Handles errors and rollbacks if needed
  5. Creates git commits for logical chunks
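
The step-validate-rollback protocol above can be sketched as a small driver loop. The apply, validate, and rollback callables are hypothetical stand-ins for the agent's real tools (file edits, test runs, git revert):

```python
def execute_plan(steps, apply, validate, rollback):
    """Run plan steps in order, validating after each one.

    apply/validate/rollback are stand-ins for the agent's real
    tools. Returns the list of steps that completed successfully.
    """
    completed = []
    for step in steps:
        apply(step)
        if not validate(step):
            rollback(step)      # undo the failing step
            break               # stop; later steps depend on this one
        completed.append(step)  # step done, move to the next
    return completed
```
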

Agents Used:

  • General-Purpose Agent (main): Full toolkit access (Read, Write, Edit)
  • Bash Agent (spawned): Running tests, git operations
  • Executes sequentially, validating each step

Example:

# Plan first (spawns multiple agents for analysis)
/core_piv_loop:plan-feature "Add user authentication"
# Agent: Spawning Explore agent for pattern analysis...
#        Spawning research agent for JWT best practices...
#        Analyzing 23 relevant files...
#        Plan created → .claude/plans/auth.md
#
#        Please review and approve before execution.

# Review plan, then execute
/core_piv_loop:execute
# Agent: Reading plan: auth.md
#        Step 1/8: Create User model... ✓
#        Step 2/8: Implement JWT utilities... ✓
#        Running tests after step 2... ✓

3. Validate ✅

Ensure code quality, correctness, and no regressions

  • Linting and formatting
  • Type checking
  • Unit tests
  • Integration tests
  • UI tests (with agent-browser)
  • Build verification

Command: /validation:validate

What Validate Does:

  1. Linting: Runs ruff, eslint, prettier
  2. Type checking: mypy (Python), tsc (TypeScript)
  3. Unit tests: pytest, vitest with coverage reports
  4. Integration tests: API endpoints, database operations
  5. UI tests: agent-browser scenarios (if configured)
  6. Build: Frontend and backend compilation
  7. Generates report: Summary of all checks
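
Conceptually, the validate skill is a loop over named shell checks that fails the suite if any command exits non-zero. A minimal Python sketch of that aggregation (the commands below are trivial placeholders; a real suite would invoke ruff, mypy, pytest, and so on):

```python
import subprocess
import sys

def run_checks(checks):
    """Run (name, command) pairs and return {name: passed}.

    A real validate skill would use commands like
    ["ruff", "check", "."] or ["pytest", "--cov"]; the placeholders
    in `checks` below just demonstrate pass/fail aggregation.
    """
    results = {}
    for name, cmd in checks:
        proc = subprocess.run(cmd, capture_output=True)
        results[name] = proc.returncode == 0
    return results

# Placeholder commands standing in for real linters/test runners:
checks = [
    ("lint", [sys.executable, "-c", "pass"]),                       # exits 0
    ("unit tests", [sys.executable, "-c", "raise SystemExit(1)"]),  # exits 1
]
report = run_checks(checks)
# report == {"lint": True, "unit tests": False}
```
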

Agents Used:

  • Bash Agent (main): Executes test commands, linters, builds
  • General-Purpose Agent: Analyzes results, generates report
  • Can run multiple test suites in parallel for speed

Related Validation Skills:

  • /validation:code-review - AI code review (uses General-Purpose Agent)
  • /validation:code-review-fix - Auto-fix issues (uses General-Purpose Agent + Edit tool)
  • /ui-test - UI testing only (uses Bash Agent for agent-browser)
  • /uat-test - User acceptance tests (uses Bash Agent for agent-browser)
  • /regression-test - Full regression suite (spawns multiple Bash Agents in parallel)

Example:

/validation:validate
# Agent: Running validation suite...
#        [Bash] Running ruff linter... ✓
#        [Bash] Running pytest (parallel)... ✓
#        [Bash] Running vitest (parallel)... ✓
#        [Bash] Building frontend... ✓
#
#        ✓ Linting passed (0 errors)
#        ✓ Type checking passed
#        ✓ Unit tests: 52/52 passed (95% coverage)
#        ✓ Integration tests: 15/15 passed
#        ✓ Build successful
#
#        All validation checks passed! ✓

Why PIV?

Traditional Approach ❌:

Code → Test → Debug → More Code → Test → Debug → ...
(Chaotic, reactive, error-prone)

PIV Approach ✅:

Prime → Plan → Execute → Validate → (Iterate)
(Systematic, proactive, quality-focused)

Benefits

  • Fewer Bugs: Validation catches issues early
  • Better Design: Planning phase prevents rushed implementations
  • Faster Development: AI handles implementation details while you guide
  • Higher Quality: Systematic validation ensures standards
  • Easier Debugging: Clear context from Prime phase
  • Better Documentation: Plans serve as implementation docs

Complete PIV Workflow Example

# PRIME: Load project context
/core_piv_loop:prime

# IMPLEMENT: Plan feature
/core_piv_loop:plan-feature "Add email notifications"
# Review plan → Approve

# IMPLEMENT: Execute
/core_piv_loop:execute

# VALIDATE: Run full validation
/validation:validate

# VALIDATE: UI testing
/ui-test

# If issues found:
/validation:code-review-fix

# Commit when all pass
/commit

The Loop Concept

PIV is a loop, not a linear process:

  1. Prime → Understand current state
  2. Implement → Make changes
  3. Validate → Verify changes
  4. Loop back to Prime → Update context for next change

Each iteration builds on the previous, maintaining context and quality throughout development.


Agents Used in PIV Loop

Here's a quick reference showing which agents are used in each phase:

  • Prime: primary agent Claude (direct), no spawned agents. Tools: Bash (git, ls, tree), Read (docs, code), Glob (file patterns)
  • Plan: primary Plan Agent; spawns Explore Agent and General-Purpose Agent (parallel execution). Tools: Read (codebase analysis), Grep (pattern search), WebFetch (research)
  • Execute: primary General-Purpose Agent; spawns Bash Agent. Tools: Read, Write, Edit, Bash (tests, git)
  • Validate: primary Bash Agent; spawns multiple Bash Agents (parallel testing). Tools: Bash (test runners), Read (reports)

Key Points:

  • Prime runs directly without spawning subagents (fastest)
  • Plan spawns multiple agents in parallel for comprehensive analysis
  • Execute runs sequentially with validation at each step
  • Validate can run test suites in parallel for speed

Agent Capabilities Summary:

  • General-Purpose: Read ✅, Write ✅, Execute ✅. Best for implementation and complex tasks
  • Plan: Read ✅, Write ❌, Execute ❌. Best for read-only analysis and planning
  • Explore: Read ✅ (fast), Write ❌, Execute ❌. Best for fast codebase searches
  • Bash: Read (limited), Write ❌, Execute ✅. Best for commands, tests, and git operations

Framework Hierarchy

This section provides a complete hierarchy of processes, commands, skills, references, agents, and when to use each component.

Master Decision Tree

┌─────────────────────────────────────────────────────────┐
│                   AI DEVELOPMENT FRAMEWORK              │
│                                                         │
│  Start Here → What do I need to do?                    │
└─────────────────────────────────────────────────────────┘
                          |
        ┌─────────────────┼─────────────────┐
        |                 |                 |
    ┌───▼───┐         ┌───▼─────┐     ┌────▼─────┐
    │ PRIME │         │IMPLEMENT│     │ VALIDATE │
    └───┬───┘         └───┬─────┘     └────┬─────┘
        |                 |                 |
        └─────────────────┴─────────────────┘
                    PIV LOOP

Complete Framework Hierarchy

📋 AI DEVELOPMENT FRAMEWORK
│
├── 🔄 PROCESSES (PIV Loop)
│   │
│   ├── 1️⃣ PRIME
│   │   ├── Purpose: Load context & understand project
│   │   ├── Command: /core_piv_loop:prime
│   │   ├── Agents: Claude (direct), No subagents
│   │   ├── Tools: Bash, Read, Glob
│   │   ├── References Read:
│   │   │   ├── CLAUDE.md (project instructions)
│   │   │   ├── .claude/docs/PRD.md (requirements)
│   │   │   ├── README.md (overview)
│   │   │   └── .claude/reference/* (all best practices)
│   │   └── When to Use:
│   │       ├── Starting work on project
│   │       ├── After being away from project
│   │       ├── After context window cleared
│   │       └── Before planning new features
│   │
│   ├── 2️⃣ IMPLEMENT
│   │   │
│   │   ├── 2a. PLAN Phase
│   │   │   ├── Purpose: Design implementation approach
│   │   │   ├── Command: /core_piv_loop:plan-feature "description"
│   │   │   ├── Agents:
│   │   │   │   ├── Plan Agent (primary, read-only)
│   │   │   │   ├── Explore Agent (spawned, parallel)
│   │   │   │   └── General-Purpose Agent (spawned, parallel)
│   │   │   ├── Tools: Read, Grep, Glob, WebFetch
│   │   │   ├── References Read:
│   │   │   │   ├── CLAUDE.md (conventions)
│   │   │   │   ├── Similar code in codebase
│   │   │   │   ├── .claude/reference/* (tech-specific patterns)
│   │   │   │   └── External docs (via WebFetch)
│   │   │   ├── Output: .claude/plans/feature-name.md
│   │   │   └── When to Use:
│   │   │       ├── Before any non-trivial feature
│   │   │       ├── When multiple approaches exist
│   │   │       ├── When architecture decisions needed
│   │   │       └── After Prime, before Execute
│   │   │
│   │   └── 2b. EXECUTE Phase
│   │       ├── Purpose: Implement approved plan
│   │       ├── Command: /core_piv_loop:execute
│   │       ├── Agents:
│   │       │   ├── General-Purpose Agent (primary)
│   │       │   └── Bash Agent (spawned for tests/git)
│   │       ├── Tools: Read, Write, Edit, Bash
│   │       ├── References Read:
│   │       │   ├── .claude/plans/feature-name.md (the plan)
│   │       │   └── .claude/reference/* (as needed)
│   │       └── When to Use:
│   │           ├── After plan approval
│   │           └── Ready to write code
│   │
│   └── 3️⃣ VALIDATE
│       ├── Purpose: Ensure quality & no regressions
│       ├── Commands:
│       │   ├── /validation:validate (full suite)
│       │   ├── /validation:code-review (AI review)
│       │   ├── /validation:code-review-fix (auto-fix)
│       │   ├── /ui-test (UI testing)
│       │   ├── /uat-test (user acceptance)
│       │   └── /regression-test (full regression)
│       ├── Agents:
│       │   ├── Bash Agent (primary, for tests)
│       │   ├── Multiple Bash Agents (parallel test suites)
│       │   └── General-Purpose (for code review)
│       ├── Tools: Bash (test runners), Read (reports)
│       ├── References Used:
│       │   ├── Test results
│       │   ├── Linter configs
│       │   └── Coverage reports
│       └── When to Use:
│           ├── After implementing features
│           ├── Before creating PRs
│           ├── After fixing bugs
│           └── Before deployment
│
├── 🎯 SKILLS / COMMANDS
│   │
│   ├── Core PIV Loop
│   │   ├── /core_piv_loop:prime
│   │   ├── /core_piv_loop:plan-feature
│   │   └── /core_piv_loop:execute
│   │
│   ├── Validation
│   │   ├── /validation:validate
│   │   ├── /validation:code-review
│   │   └── /validation:code-review-fix
│   │
│   ├── Testing
│   │   ├── /ui-test (agent-browser UI tests)
│   │   ├── /uat-test (user acceptance tests)
│   │   └── /regression-test (full suite)
│   │
│   ├── Bug Fixing
│   │   ├── /gitlab_bug_fix:rca (root cause analysis)
│   │   └── /gitlab_bug_fix:implement-fix
│   │
│   └── Utilities
│       ├── /commit (atomic git commits)
│       ├── /init-project (setup & start)
│       └── /create-prd (generate PRD)
│
├── 🤖 AGENTS
│   │
│   ├── General-Purpose Agent
│   │   ├── Capabilities: Read ✅, Write ✅, Execute ✅
│   │   ├── Used In: Execute phase, code review fixes
│   │   ├── When to Use: Implementation, complex multi-step tasks
│   │   └── Can Spawn: Any other agent
│   │
│   ├── Plan Agent
│   │   ├── Capabilities: Read ✅, Write ❌, Execute ❌
│   │   ├── Used In: Plan phase
│   │   ├── When to Use: Read-only analysis, architecture design
│   │   └── Can Spawn: Explore, General-Purpose (for research)
│   │
│   ├── Explore Agent
│   │   ├── Capabilities: Read ✅ (fast), Write ❌, Execute ❌
│   │   ├── Used In: Plan phase (pattern searching)
│   │   ├── When to Use: Fast codebase searches, pattern finding
│   │   ├── Thoroughness Levels:
│   │   │   ├── quick (basic search)
│   │   │   ├── medium (moderate exploration)
│   │   │   └── very thorough (comprehensive)
│   │   └── Can Spawn: None
│   │
│   └── Bash Agent
│       ├── Capabilities: Read (limited), Write ❌, Execute ✅
│       ├── Used In: Validate phase, Execute phase (for tests/git)
│       ├── When to Use: Running commands, tests, git operations
│       └── Can Spawn: None (pure command execution)
│
├── 📚 REFERENCES
│   │
│   ├── Project Documentation
│   │   ├── CLAUDE.md
│   │   │   ├── Purpose: AI instructions, conventions, commands
│   │   │   ├── Read By: Prime, Plan agents
│   │   │   └── When Created: During prime if missing
│   │   │
│   │   ├── .claude/docs/PRD.md
│   │   │   ├── Purpose: Product requirements, features, acceptance criteria
│   │   │   ├── Read By: Prime, Plan agents
│   │   │   └── When Created: /create-prd or during prime
│   │   │
│   │   └── README.md
│   │       ├── Purpose: Project overview, setup instructions
│   │       └── Read By: Prime agent
│   │
│   ├── Best Practices (.claude/reference/)
│   │   ├── reference/backend/fastapi-best-practices.md
│   │   │   ├── Purpose: API patterns, routing, schemas
│   │   │   ├── Read By: Plan, Execute agents
│   │   │   └── When to Use: Building FastAPI endpoints
│   │   │
│   │   ├── postgres-best-practices.md
│   │   │   ├── Purpose: Database setup, pooling, queries
│   │   │   ├── Read By: Plan, Execute agents
│   │   │   └── When to Use: Database operations
│   │   │
│   │   ├── react-frontend-best-practices.md
│   │   │   ├── Purpose: Components, hooks, state management
│   │   │   ├── Read By: Plan, Execute agents
│   │   │   └── When to Use: Building React web components
│   │   │
│   │   ├── flutter-best-practices.md
│   │   │   ├── Purpose: Widgets, Riverpod state, Clean Architecture
│   │   │   ├── Read By: Plan, Execute agents
│   │   │   └── When to Use: Building Flutter mobile/cross-platform apps
│   │   │
│   │   ├── testing-and-logging.md
│   │   │   ├── Purpose: Test patterns, structlog setup
│   │   │   ├── Read By: Plan, Execute agents
│   │   │   └── When to Use: Writing tests, logging
│   │   │
│   │   └── deployment-best-practices.md
│   │       ├── Purpose: Docker, CI/CD, production setup
│   │       ├── Read By: Plan agent
│   │       └── When to Use: Deployment tasks
│   │
│   └── Implementation Plans (.claude/plans/)
│       ├── Purpose: Step-by-step implementation guides
│       ├── Created By: Plan agent
│       ├── Read By: Execute agent
│       └── Format: Markdown with phases, tasks, validation
│
└── 🔧 MCP SERVERS (External Capabilities)
    │
    ├── Playwright MCP
    │   ├── Purpose: Browser automation, E2E testing
    │   ├── Used By: Validate phase (/ui-test, /uat-test)
    │   └── When to Use: Visual testing, user flows
    │
    ├── GitLab MCP
    │   ├── Purpose: Issue tracking, MR management
    │   ├── Used By: Bug fixing skills
    │   └── When to Use: /gitlab_bug_fix:rca, issue management
    │
    ├── PostgreSQL MCP
    │   ├── Purpose: Database queries, schema inspection
    │   ├── Used By: Plan, Execute agents
    │   └── When to Use: Database analysis, migrations
    │
    └── agent-browser
        ├── Purpose: AI-driven UI testing
        ├── Used By: /ui-test, /uat-test, /regression-test
        └── When to Use: Natural language test scenarios

Decision Tree: What Should I Use?

┌─────────────────────────────────────────┐
│ What do I need to do?                   │
└───────────────┬─────────────────────────┘
                |
    ┌───────────┼───────────┬─────────────┬───────────┐
    |           |           |             |           |
    v           v           v             v           v
┌────────┐  ┌─────────┐  ┌────────┐  ┌────────┐  ┌────────┐
│Starting│  │Implement│  │ Fix a  │  │  Test  │  │ Commit │
│Project │  │ Feature │  │  Bug   │  │Changes │  │Changes │
└───┬────┘  └────┬────┘  └──┬─────┘  └───┬────┘  └───┬────┘
    |           |           |            |           |
    v           v           v            v           v
/prime      /plan-      /gitlab_     /validation   /commit
            feature     bug_fix:rca   :validate
                |           |
                v           v
            /execute    /gitlab_bug_
                        fix:implement
                            |
                            v
                        /validation
                        :validate

When to Use Each Component

Use /core_piv_loop:prime when:

  • ✅ Starting work on the project
  • ✅ Returning after being away
  • ✅ Context window has been cleared
  • ✅ Before planning any feature
  • ✅ After major git pull/merge

Use /core_piv_loop:plan-feature when:

  • ✅ Before implementing non-trivial features
  • ✅ Multiple implementation approaches exist
  • ✅ Architectural decisions are needed
  • ✅ You need a roadmap before coding
  • ❌ NOT for simple bug fixes (use RCA instead)

Use /core_piv_loop:execute when:

  • ✅ After reviewing and approving plan
  • ✅ Ready to write code
  • ✅ Plan document exists in .claude/plans/
  • ❌ NOT before creating a plan

Use /validation:validate when:

  • ✅ After implementing features
  • ✅ Before creating pull/merge requests
  • ✅ After major refactoring
  • ✅ Before deployment
  • ✅ When in doubt about code quality

Use /validation:code-review when:

  • ✅ After implementing complex logic
  • ✅ Before committing important changes
  • ✅ Want AI second opinion on code
  • ✅ Checking for security issues

Use /ui-test, /uat-test, /regression-test when:

  • ✅ After UI changes
  • ✅ Before releases
  • ✅ After major refactoring
  • ✅ Weekly/nightly in CI/CD

Use /gitlab_bug_fix:rca when:

  • ✅ You have a bug to investigate
  • ✅ Before implementing the fix
  • ✅ Root cause is unclear
  • ❌ NOT for simple typo fixes

Use /gitlab_bug_fix:implement-fix when:

  • ✅ After reviewing RCA document
  • ✅ Root cause is understood
  • ✅ Ready to implement the fix

Use /commit when:

  • ✅ Code is in committable state
  • ✅ After validation passes
  • ✅ Logical chunk of work is complete
  • ✅ Want AI to generate commit message

Parallel vs Sequential Execution

Parallel Execution (faster):

Plan Phase:
├── Explore Agent ─────┐
├── General Agent ─────┤ → Run simultaneously
└── Pattern Search ────┘

Validate Phase:
├── Unit Tests ────────┐
├── Integration Tests ─┤ → Run simultaneously
└── Linting ───────────┘

Sequential Execution (ordered):

Execute Phase:
Step 1 → Validate → Step 2 → Validate → Step 3
(Can't proceed to step 2 until step 1 validates)

Prime Phase:
Detect → Read Docs → Analyze → Report
(Each phase depends on previous)
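
The two modes above can be sketched with Python's concurrent.futures: independent validation checks fan out to a thread pool, while execute-style steps run in a plain ordered loop. The check functions here are trivial placeholders:

```python
from concurrent.futures import ThreadPoolExecutor

def run_parallel(checks):
    """Run independent checks simultaneously (Validate-style)."""
    with ThreadPoolExecutor() as pool:
        return list(pool.map(lambda check: check(), checks))

def run_sequential(steps):
    """Run dependent steps in order, stopping on failure (Execute-style)."""
    results = []
    for step in steps:
        ok = step()
        results.append(ok)
        if not ok:
            break  # later steps depend on this one; don't continue
    return results

# Placeholders standing in for unit tests, integration tests, linting:
parallel_results = run_parallel([lambda: True, lambda: True, lambda: True])
sequential_results = run_sequential([lambda: True, lambda: False, lambda: True])
# parallel runs all three; sequential stops after the failing step
```
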

Agent Spawning Hierarchy

┌──────────────────────────────────────┐
│ Claude (You interact here)           │
└────────────────┬─────────────────────┘
                 |
    ┌────────────┼────────────┬─────────────┐
    v            v            v             v
┌────────┐  ┌────────┐  ┌──────────┐  ┌──────┐
│ Prime  │  │  Plan  │  │ Execute  │  │Valid-│
│(Direct)│  │ Agent  │  │  Agent   │  │ ate  │
└────────┘  └───┬────┘  └────┬─────┘  └──┬───┘
                |            |           |
        ┌───────┴────┐       v           v
        v            v       v           v
   ┌─────────┐  ┌────────┐ ┌──────┐ ┌───────────────┐
   │ Explore │  │General │ │ Bash │ │ Multiple Bash │
   │  Agent  │  │ Agent  │ │Agent │ │    Agents     │
   └─────────┘  └────────┘ └──────┘ └───────────────┘

Key Rules:

  1. Prime = Direct execution (no spawning)
  2. Plan = Can spawn Explore + General (parallel)
  3. Execute = Spawns Bash for tests/git (sequential)
  4. Validate = Spawns multiple Bash (parallel tests)

Framework Components

1. Agents

Specialized AI assistants with specific capabilities:

  • General-Purpose: multi-step tasks and complex queries. Use for open-ended exploration and research
  • Plan: architecture design and implementation planning. Use before writing code for new features
  • Explore: fast codebase exploration. Use for finding patterns and understanding structure
  • Bash: command execution specialist. Use for git operations and running scripts

Example:

# The AI will automatically spawn agents when needed
"Find all API endpoints in the codebase and explain their purpose"
# → Spawns Explore agent for fast codebase search

2. Skills (Slash Commands)

Pre-configured workflows you can invoke with /command:

  • Planning: /core_piv_loop:prime, /core_piv_loop:plan-feature, /core_piv_loop:execute (context loading, feature planning, execution)
  • Validation: /validation:validate, /validation:code-review, /validation:code-review-fix (testing, linting, code quality)
  • Bug Fixing: /gitlab_bug_fix:rca, /gitlab_bug_fix:implement-fix (root cause analysis, systematic fixes)
  • Utilities: /commit, /init-project, /create-prd (git operations, setup, documentation)
  • Testing: /ui-test, /uat-test, /regression-test (UI, UAT, regression testing with agent-browser)

3. Reference Documentation

Structured knowledge stored in .claude/reference/:

.claude/reference/
├── backend/
│   └── fastapi-best-practices.md       # API patterns, routing, schemas
├── postgres-best-practices.md          # Database setup, pooling, queries
├── frontend/
│   ├── react-frontend-best-practices.md # React: Components, hooks, state
│   └── flutter-best-practices.md       # Flutter: Widgets, Riverpod, Clean Arch
├── testing-and-logging.md              # Test patterns, structlog
└── deployment-best-practices.md        # OpenShift, production, CI/CD

Purpose: Agents read these to understand your project's conventions and patterns before implementing.

4. Settings

Configure Claude Code behavior in .claude/settings.json:

{
  "agents": {
    "explore": { "max_turns": 20 },
    "plan": { "model": "opus" }
  },
  "skills": {
    "validation:validate": { "auto_fix": true }
  },
  "hooks": {
    "pre_commit": "npm run lint && npm run test"
  }
}

5. MCP (Model Context Protocol) Servers

External tools that extend Claude's capabilities:

  • Playwright: browser automation, E2E testing. Install: claude mcp add playwright npx @playwright/mcp@latest
  • GitLab: issue tracking, MR management. Install: claude mcp add gitlab npx @modelcontextprotocol/server-gitlab
  • PostgreSQL: database queries, schema inspection. Install: claude mcp add postgres npx @modelcontextprotocol/server-postgres
  • Filesystem: advanced file operations. Built-in

6. agent-browser

Automated UI testing tool for UAT, regression, and visual testing:

Installation

npm install -g agent-browser

Usage

# Define test scenarios in natural language
agent-browser test "
  1. Navigate to localhost:5173
  2. Click 'Add Item' button
  3. Fill in 'Task Name' as item name
  4. Click 'Save'
  5. Verify item appears in list
"

Integration as Skill

The framework includes agent-browser skills for automated testing:

  • /ui-test: Run UI functionality tests
  • /uat-test: User acceptance testing scenarios
  • /regression-test: Full regression suite across features

The PIV Loop

Phase 1: Prime

Goal: Load context and understand what you're building.

When to use:

  • Starting a new feature
  • After being away from the project
  • When context window is cleared

Command: /core_piv_loop:prime

What happens:

  1. Reads project structure and conventions
  2. Loads relevant reference documentation
  3. Understands tech stack and patterns
  4. Identifies critical files and dependencies

Output: Agent confirms understanding of project context.


Phase 2: Plan

Goal: Create a comprehensive implementation plan before writing code.

When to use:

  • Before implementing any non-trivial feature
  • When multiple approaches are possible
  • When architectural decisions are needed

Command: /core_piv_loop:plan-feature "Add user authentication"

What happens:

  1. Analyzes requirements and existing codebase
  2. Identifies affected files and dependencies
  3. Researches best practices from reference docs
  4. Creates step-by-step implementation plan
  5. Requests your approval before proceeding

Output: Detailed plan in .claude/plans/feature-name.md

Example Plan Structure:

# Feature: User Authentication

## Overview
Add JWT-based authentication with login/logout endpoints.

## Critical Files
- backend/app/auth.py (new)
- backend/app/models.py (modify)
- frontend/src/features/auth/ (new)

## Implementation Steps
### Step 1: Create User model
- Add User table with SQLAlchemy
- Include password hashing with bcrypt
...

### Step 2: Implement JWT utilities
- Create token generation/validation
- Add middleware for protected routes
...

Phase 3: Implement

Goal: Execute the plan systematically, step by step.

When to use:

  • After plan approval
  • When you're ready to write code

Command: /core_piv_loop:execute

What happens:

  1. Reads the approved plan
  2. Implements each step in order
  3. Validates each step before moving to the next
  4. Handles errors and rollbacks if needed
  5. Creates atomic git commits per logical chunk

Best Practices:

  • Review changes as they happen
  • Ask questions if implementation deviates from plan
  • Test incrementally rather than at the end

Phase 4: Validate

Goal: Ensure code quality, correctness, and no regressions.

When to use:

  • After implementing a feature
  • Before creating a PR
  • After fixing bugs

Command: /validation:validate

What happens:

  1. Linting: Runs code formatters and linters
  2. Type Checking: Verifies type correctness
  3. Unit Tests: Runs test suite with coverage
  4. Integration Tests: Tests API endpoints
  5. UI Tests: Runs agent-browser automation
  6. Build: Compiles frontend and backend
  7. Report: Generates validation summary

Example Output:

✓ Linting passed (0 errors)
✓ Type checking passed
✓ Unit tests: 45/45 passed (100% coverage)
✓ Integration tests: 12/12 passed
✓ UI tests: 8/8 scenarios passed
✓ Frontend build successful
✓ Backend build successful

All validation checks passed! ✓

Getting Started

Prerequisites

# Required
- Python 3.11+
- Node.js 18+
- uv package manager: curl -LsSf https://astral.sh/uv/install.sh | sh
- Claude Code CLI: npm install -g @anthropic-ai/claude-code

# Optional but recommended
- Docker Desktop
- PostgreSQL 16+
- agent-browser: npm install -g agent-browser

Initial Setup

# 1. Install Claude Code
npm install -g @anthropic-ai/claude-code

# 2. Clone your project
git clone <your-repo>
cd <your-project>

# 3. Initialize with Claude
claude init

# 4. Install agent-browser for UI testing
npm install -g agent-browser

# 5. Add MCP servers
claude mcp add playwright npx @playwright/mcp@latest
claude mcp add gitlab npx @modelcontextprotocol/server-gitlab

# 6. Prime the agent with your project
/core_piv_loop:prime

Project Workflow

Starting a New Project

# 1. Create project structure
mkdir my-project && cd my-project
claude init

# 2. Create PRD (Product Requirements Document)
/create-prd
# Then describe your project in conversation

# 3. Initialize project structure
/init-project

# 4. Create reference documentation
mkdir -p .claude/reference
# Add your tech stack best practices docs

Working with Existing Projects (Brownfield)

# 1. Clone existing project
git clone <existing-repo>
cd <existing-project>

# 2. Copy .claude/ template
cp -r /path/to/template/.claude .

# 3. Analyze existing codebase and generate documentation
/brownfield:analyze
# This will generate:
# - .claude/docs/PRD.md (what the app does)
# - .claude/docs/CLAUDE.md (AI instructions)
# - .claude/docs/ARCHITECTURE.md (system design)
# - .claude/docs/API.md (endpoints)
# - .claude/docs/FEATURES.md (feature list)

# 4. Review and refine generated documentation
# AI provides ~90-95% accuracy, verify and add business context

# 5. Prime Claude with the analyzed project
/core_piv_loop:prime
# Claude now fully understands your existing codebase

# 6. Continue with normal workflow
/core_piv_loop:plan-feature "Add new feature"

What Gets Analyzed:

  • ✅ All source code (backend, frontend, infrastructure)
  • ✅ API endpoints and routes
  • ✅ Database models and relationships
  • ✅ Component hierarchy
  • ✅ Configuration files
  • ✅ CI/CD pipelines
  • ✅ Dependencies and tech stack
  • ✅ Security implementations
  • ✅ Git history (recent commits)

Generated Documentation Includes:

  • Complete feature inventory
  • API specification
  • Database schema
  • Architecture diagrams (textual)
  • Coding conventions detected
  • Development setup instructions
  • Testing strategy
  • Deployment configuration

Adding a New Feature

# 1. Prime (if needed)
/core_piv_loop:prime

# 2. Plan the feature
/core_piv_loop:plan-feature "Feature description"
# Review the plan in .claude/plans/

# 3. Approve and execute
/core_piv_loop:execute

# 4. Validate
/validation:validate

# 5. UI Testing
/ui-test

# 6. Commit
/commit

Fixing a Bug

# 1. Create GitLab issue (if not exists)
# 2. Run root cause analysis
/gitlab_bug_fix:rca <issue-number>

# 3. Review RCA document
# 4. Implement fix
/gitlab_bug_fix:implement-fix <issue-number>

# 5. Validate
/validation:validate

# 6. Regression test
/regression-test

# 7. Commit and close issue
/commit

Refactoring

# 1. Prime with current context
/core_piv_loop:prime

# 2. Plan refactoring
/core_piv_loop:plan-feature "Refactor auth module to use dependency injection"

# 3. Execute with caution
/core_piv_loop:execute

# 4. Comprehensive validation
/validation:validate
/regression-test

# 5. Code review
/validation:code-review

# 6. Fix any issues
/validation:code-review-fix

# 7. Commit
/commit

Skills (Commands)

Core PIV Loop

/core_piv_loop:prime

Purpose: Load project context into agent's working memory.

When to use:

  • Starting work on the project
  • After context reset
  • Before planning a feature

What it does:

  • Reads .claude/docs/PRD.md
  • Loads reference documentation
  • Analyzes project structure
  • Understands tech stack

Example:

You: /core_piv_loop:prime
Agent: Loading project context...
       ✓ Read PRD (598 lines)
       ✓ Loaded 5 reference documents
       ✓ Analyzed project structure
       Ready to work on: Template Project

/core_piv_loop:plan-feature

Purpose: Create comprehensive implementation plan.

When to use:

  • Before any non-trivial implementation
  • When multiple approaches exist
  • When you need architectural guidance

Parameters: Feature description (string)

What it does:

  1. Analyzes requirements
  2. Explores codebase for relevant patterns
  3. Identifies affected files
  4. Creates step-by-step plan
  5. Requests approval

Example:

You: /core_piv_loop:plan-feature "Add email notifications for user milestones"
Agent: Planning feature...
       - Analyzed 15 related files
       - Reviewed email best practices
       - Created 8-step implementation plan
       Plan saved: .claude/plans/email-notifications.md
       Ready for your review and approval.

/core_piv_loop:execute

Purpose: Execute approved implementation plan.

When to use:

  • After approving a plan
  • Ready to implement

What it does:

  1. Reads plan from .claude/plans/
  2. Implements step by step
  3. Validates each step
  4. Creates git commits
  5. Handles errors gracefully

Example:

You: /core_piv_loop:execute
Agent: Executing plan: email-notifications.md
       Step 1/8: Create EmailService class... ✓
       Step 2/8: Add notification preferences to User model... ✓
       Step 3/8: Implement streak check logic... ✓
       ...
       All steps completed successfully!

Validation Skills

/validation:validate

Purpose: Run full validation suite.

When to use:

  • After implementing features
  • Before creating PRs
  • Before deployment

What it does:

1. Linting (ruff, eslint)
2. Type checking (mypy, TypeScript)
3. Unit tests (pytest, vitest)
4. Integration tests
5. Coverage report
6. Frontend build
7. Backend build

Example:

You: /validation:validate
Agent: Running comprehensive validation...
       ✓ Linting: 0 errors
       ✓ Type checking: passed
       ✓ Unit tests: 52/52 passed (95% coverage)
       ✓ Integration tests: 15/15 passed
       ✓ Frontend build: 2.3s
       ✓ Backend build: 1.1s

       All checks passed! ✓

/validation:code-review

Purpose: Technical code review on changed files.

When to use:

  • Before committing
  • After implementing complex logic
  • When you want a second opinion

What it does:

  • Analyzes changed files
  • Checks for bugs, security issues
  • Reviews code style and patterns
  • Suggests improvements

Example:

You: /validation:code-review
Agent: Reviewing 3 changed files...

       app/routers/items.py:
       ✓ No issues found

       app/services/email.py:
       ⚠ Warning: Missing error handling for SMTP failures (line 45)
       ⚠ Warning: Hardcoded email template (line 78)

       tests/test_email.py:
       ✓ No issues found

       2 warnings found.

/validation:code-review-fix

Purpose: Fix issues found in code review.

When to use:

  • After /validation:code-review finds issues
  • When you want automated fixes

What it does:

  • Reads code review output
  • Implements fixes for each issue
  • Re-runs validation
  • Confirms fixes resolved issues

Bug Fixing Skills

/gitlab_bug_fix:rca

Purpose: Create root cause analysis document.

Parameters: GitLab issue number

When to use:

  • When you have a bug to fix
  • Before implementing the fix

What it does:

  1. Reads GitLab issue
  2. Explores codebase to find root cause
  3. Analyzes related code
  4. Creates RCA document with:
    • Problem description
    • Root cause analysis
    • Proposed solution
    • Testing strategy

Example:

You: /gitlab_bug_fix:rca 42
Agent: Analyzing issue #42: "Calculation error in date range processing"

       Exploring codebase...
       Found calculation logic in: app/routers/items.py:850
       Analyzing logic...

       Root cause: calculate_range() doesn't account for excluded dates
       in the date range between entries.

       RCA document created: .agents/rca/issue-42.md

/gitlab_bug_fix:implement-fix

Purpose: Implement fix based on RCA document.

Parameters: GitLab issue number

When to use:

  • After reviewing RCA document
  • Ready to fix the bug

What it does:

  1. Reads RCA document
  2. Implements proposed solution
  3. Adds/updates tests
  4. Validates fix
  5. Updates issue with fix details

Testing Skills

/ui-test

Purpose: Run UI functionality tests with agent-browser.

When to use:

  • After UI changes
  • Before releasing features
  • For regression testing

What it does:

  1. Starts local server
  2. Runs agent-browser test scenarios
  3. Captures screenshots
  4. Reports results

Example:

You: /ui-test
Agent: Running UI tests with agent-browser...

       ✓ Login flow (2.3s)
       ✓ Create item (1.8s)
       ✓ Update item (1.2s)
       ✓ View dashboard (2.1s)
       ✗ Delete item (failed: button not found)

       4/5 tests passed
       Screenshots saved: tests/screenshots/

/uat-test

Purpose: User acceptance testing scenarios.

When to use:

  • Before major releases
  • After implementing user-facing features
  • For stakeholder demos

What it does:

  1. Reads UAT scenarios from tests/uat/scenarios.yml
  2. Executes with agent-browser
  3. Generates report with screenshots
  4. Records video walkthroughs

Example scenarios file:

# tests/uat/scenarios.yml
scenarios:
  - name: "New user onboarding"
    steps:
      - "Navigate to localhost:5173"
      - "Verify welcome message displays"
      - "Click 'Get Started'"
      - "Create first item 'Sample Task'"
      - "Verify item appears in dashboard"

  - name: "Complete workflow over 7 days"
    steps:
      - "Login as test user"
      - "For each day 1-7: mark 'Sample Task' complete"
      - "Verify counter shows '7 days'"
      - "Verify completion rate is 100%"

/regression-test

Purpose: Full regression suite across all features.

When to use:

  • Before releases
  • After major refactoring
  • Weekly/nightly runs

What it does:

  1. Runs all unit tests
  2. Runs all integration tests
  3. Runs all UI tests with agent-browser
  4. Compares results with baseline
  5. Highlights any regressions

Example:

You: /regression-test
Agent: Running full regression suite...

       Unit Tests:
       ✓ 52/52 passed

       Integration Tests:
       ✓ 15/15 passed

       UI Tests (agent-browser):
       ✓ 18/20 passed
       ✗ Calendar view: Date picker regression
       ✗ Export feature: CSV download failed

       Regression detected: 2 previously passing tests now fail
       Details: tests/regression-report.md

Utility Skills

/commit

Purpose: Create atomic git commit with appropriate tag.

When to use:

  • After implementing a feature
  • After fixing a bug
  • When code is in a committable state

What it does:

  1. Runs git status and git diff
  2. Analyzes changes
  3. Generates commit message with conventional commit format
  4. Adds co-author attribution
  5. Creates commit

Example:

You: /commit
Agent: Analyzing changes...
       Files changed: 3 (app/routers/items.py, tests/test_items.py, README.md)

       Proposed commit:
       ────────────────
       feat(items): add archive functionality

       - Add PATCH /items/{id}/archive endpoint
       - Add archived_at timestamp field
       - Filter archived items by default in list endpoint
       - Add tests for archive functionality

       Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
       ────────────────

       Proceed with commit? [Y/n]

/init-project

Purpose: Install dependencies and start servers.

When to use:

  • First time setup
  • After pulling changes
  • After adding dependencies

What it does:

# Backend
cd backend && uv sync

# Frontend
cd frontend && npm install

# Start servers (in separate processes)
uvicorn app.main:app --reload --port 8000 &
npm run dev &

/create-prd

Purpose: Generate Product Requirements Document from conversation.

When to use:

  • Starting a new project
  • Planning major features
  • Documenting requirements

What it does:

  1. Analyzes conversation for requirements
  2. Extracts features, user stories, acceptance criteria
  3. Creates structured PRD in .claude/docs/PRD.md
  4. Includes API specs, database schema, tech stack

DevSecOps

The framework includes comprehensive security scanning and compliance checking integrated into your development workflow.

Security Scanning Skills

/devsecops:sast

Purpose: Static Application Security Testing - scan source code for vulnerabilities.

What it scans:

  • SQL Injection vulnerabilities
  • XSS (Cross-Site Scripting)
  • Command Injection
  • Hardcoded secrets (complementary to secrets-scan)
  • Insecure cryptography
  • Authentication/authorization flaws

Tools used:

  • Semgrep (primary) - Fast, accurate, language-agnostic
  • Bandit (Python-specific)
  • ESLint with security plugins (JavaScript/TypeScript)

Example:

/devsecops:sast

# Output:
✓ Scanned 342 files
⚠️ Found 3 issues:
  - HIGH: SQL injection risk in backend/app/queries.py:42
  - MEDIUM: Missing input validation in frontend/src/api/items.ts:18
  - LOW: Weak hash algorithm in backend/app/crypto.py:15

Report: .claude/reports/security/sast-report.json

/devsecops:dependency-scan

Purpose: Scan dependencies for known vulnerabilities (CVEs).

What it scans:

  • Python packages (requirements.txt, pyproject.toml)
  • Node.js packages (package.json, package-lock.json)
  • Known CVEs in dependencies
  • Outdated packages with security fixes

Tools used:

  • Safety (Python)
  • pip-audit (Python)
  • npm audit (Node.js)
  • Snyk (optional, comprehensive)

Example:

/devsecops:dependency-scan

# Output:
✓ Scanned 156 packages
⚠️ Found 4 vulnerable dependencies:
  - CRITICAL: requests@2.28.0 → CVE-2023-32681 (upgrade to 2.31.0)
  - HIGH: axios@1.4.0 → CVE-2023-45857 (upgrade to 1.6.0)
  - MEDIUM: pillow@9.5.0 → CVE-2023-50447 (upgrade to 10.2.0)

Auto-fix available: npm audit fix
Report: .claude/reports/security/dependency-scan.json
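Each upgrade recommendation in the report reduces to a version comparison against the release that contains the fix. A minimal sketch of that check (real scanners like pip-audit consult CVE databases, and real version schemes with pre-releases need `packaging.version` rather than this naive parse):

```python
def parse_version(v):
    """Parse a simple dotted version string like '2.28.0' into a tuple."""
    return tuple(int(part) for part in v.split("."))

def needs_upgrade(installed, fixed_in):
    """True if the installed version predates the release containing the fix."""
    return parse_version(installed) < parse_version(fixed_in)
```

For example, requests 2.28.0 predates the 2.31.0 fix for CVE-2023-32681, so it would be flagged.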

/devsecops:secrets-scan

Purpose: Detect hardcoded secrets, API keys, passwords, and tokens.

What it detects:

  • AWS access keys
  • Database passwords
  • API keys (Stripe, GitHub, GitLab, etc.)
  • Private SSH keys
  • JWT tokens
  • OAuth tokens
  • Encryption keys

Tools used:

  • Gitleaks (primary, recommended)
  • TruffleHog (deep git history scanning)
  • GitGuardian (commercial option)

Example:

/devsecops:secrets-scan

# Output:
✓ Scanned 342 files + git history (1,523 commits)
❌ Found 2 secrets:
  - CRITICAL: AWS Access Key in backend/config.py:12
  - HIGH: Database password in .env.example:8 (should use placeholder)

⚠️ Action Required:
  1. Rotate compromised secrets immediately
  2. Remove from git history (use BFG Repo-Cleaner)
  3. Add to .gitignore

Report: .claude/reports/security/secrets-scan.json
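At their core, these scanners pattern-match file contents and git history against known secret formats. A toy illustration of that idea (the two regexes below are simplified; real tools like Gitleaks ship hundreds of tuned, entropy-aware rules and should always be used instead):

```python
import re

# Simplified patterns for demonstration only.
PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "gitlab_pat": re.compile(r"glpat-[0-9A-Za-z_\-]{20}"),
}

def scan_text(text):
    """Return the names of all secret patterns found in the given text."""
    return [name for name, pattern in PATTERNS.items() if pattern.search(text)]
```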

/devsecops:container-scan

Purpose: Scan container images for vulnerabilities and misconfigurations.

What it scans:

  • OS package vulnerabilities
  • Application dependencies in images
  • Dockerfile misconfigurations
  • CIS Docker Benchmark compliance
  • Image layers and size optimization

Tools used:

  • Trivy (recommended, comprehensive)
  • Grype (alternative)
  • Docker Bench (CIS benchmarks)

Example:

/devsecops:container-scan

# Output:
✓ Scanned image: myapp-backend:latest
⚠️ Found 8 vulnerabilities:
  - CRITICAL: openssl@3.0.10 → CVE-2023-12345 (upgrade to 3.0.11)
  - HIGH: Running as root user (use non-root USER directive)
  - MEDIUM: No health check defined

✓ Scanned image: myapp-frontend:latest
✓ No vulnerabilities found

Report: .claude/reports/security/container-scan.json

/devsecops:compliance-check

Purpose: Validate against security compliance standards.

Standards supported:

  • OWASP Top 10 - Web application security
  • CIS Docker Benchmark - Container security
  • CIS Kubernetes Benchmark - OpenShift/K8s security
  • PCI-DSS - Payment card industry (if applicable)
  • GDPR - Data privacy (checklist)
  • SOC 2 - Trust service criteria

Tools used:

  • OWASP ZAP (web application scanning)
  • Checkov (infrastructure as code)
  • InSpec (custom compliance policies)
  • kube-bench (Kubernetes CIS)

Example:

/devsecops:compliance-check

# Output:
✓ OWASP Top 10: 10/10 passed ✓
✓ CIS Docker: 11/12 passed (92%) ⚠️
  - WARNING: Content trust not enabled
✓ CIS Kubernetes: 8/8 passed ✓
✓ PCI-DSS: 12/12 passed ✓
✓ GDPR: 8/8 passed ✓

Overall Compliance Score: 95%

Report: .claude/reports/compliance/compliance-report.md

Security in CI/CD Pipeline

All DevSecOps scans integrate with GitLab CI:

# .gitlab-ci.yml
security:
  stage: security
  script:
    - /devsecops:sast
    - /devsecops:dependency-scan
    - /devsecops:secrets-scan
    - /devsecops:container-scan
    - /devsecops:compliance-check
  artifacts:
    reports:
      sast: gl-sast-report.sarif
      dependency_scanning: gl-dependency-scan.json
      secret_detection: gl-secrets-report.sarif
      container_scanning: gl-container-scan.sarif
  allow_failure: false  # Block merge if security issues found

Security Workflow

Before Commit (Pre-commit Hook)

# Automatically runs:
/devsecops:secrets-scan  # Prevent committing secrets

Before Merge Request

# Manual or CI-triggered:
/devsecops:sast
/devsecops:dependency-scan
/devsecops:secrets-scan

Before Deployment

# Full security validation:
/devsecops:container-scan  # Scan images
/devsecops:compliance-check  # Validate standards

Monthly Security Audit

# Comprehensive review:
/devsecops:sast
/devsecops:dependency-scan
/devsecops:secrets-scan
/devsecops:container-scan
/devsecops:compliance-check

# Generate executive summary

Agents & Subagents

When to Use Agents

The AI automatically spawns agents when appropriate, but you can manually request them:

"Use the Explore agent to find all database queries in the codebase"
"Spawn a Plan agent to design the authentication system"
"Run a Bash agent to set up git hooks"

Agent Types

General-Purpose Agent

Capabilities: Full toolkit access, multi-step reasoning, complex tasks

Use for:

  • Open-ended questions
  • Multi-file changes
  • Research and exploration
  • Complex refactoring

Example:

"Refactor the authentication module to use dependency injection"
→ Spawns general-purpose agent with full access to Read, Edit, Write, Bash

Plan Agent

Capabilities: All tools except Edit/Write (read-only), deep analysis

Use for:

  • Feature planning
  • Architecture decisions
  • Research before implementation

Example:

/core_piv_loop:plan-feature "Add WebSocket support for real-time updates"
→ Spawns Plan agent to explore codebase and create implementation plan

Explore Agent

Capabilities: Fast codebase exploration (Glob, Grep, Read)

Use for:

  • Finding patterns
  • Understanding code structure
  • Quick searches

Thoroughness levels:

  • quick: Basic search
  • medium: Moderate exploration
  • very thorough: Comprehensive analysis

Example:

"Use Explore agent to find all API endpoints and their authentication requirements"
→ Spawns Explore agent for fast pattern matching

Bash Agent

Capabilities: Command execution specialist

Use for:

  • Git operations
  • Running scripts
  • Package management
  • Build processes

Example:

"Use Bash agent to create a pre-commit hook that runs linting"
→ Spawns Bash agent for git hook setup

Subagent Patterns

Sequential Agents (one after another):

1. Explore agent finds relevant files
2. Plan agent designs solution
3. General agent implements
4. Bash agent runs tests

Parallel Agents (multiple at once):

"Run tests on backend AND frontend in parallel"
→ Spawns 2 Bash agents concurrently

Configuration Settings

The framework includes comprehensive configuration in .claude/config/settings.json (604 lines) covering all aspects of development, security, and deployment.

Settings Overview

{
  "permissions": { /* Tool access control */ },
  "hooks": { /* Automation triggers */ },
  "mcpServers": { /* GitLab, Playwright, PostgreSQL */ },
  "env": { /* Environment variables */ },
  "piv": { /* PIV workflow settings */ },
  "validation": { /* Linting, testing, build */ },
  "code_review": { /* Quality gates */ },
  "gitlab": { /* MR/issue templates, CI/CD */ },
  "openshift": { /* Cluster configs, resources */ },
  "testing_framework": { /* agent-browser, thresholds */ },
  "devsecops": { /* Security scanning */ },
  "brownfield_analysis": { /* Codebase analysis */ },
  "reporting": { /* Report generation */ },
  "notifications": { /* Slack, email alerts */ },
  "custom_commands": { /* Shortcuts */ },
  "advanced": { /* Performance, caching */ }
}

Key Configuration Sections

Permissions

Control which tools and commands Claude can use:

{
  "permissions": {
    "allow": [
      "WebSearch",
      "Read", "Write", "Edit", "Glob", "Grep",
      "Bash(git *)", "Bash(npm *)", "Bash(oc *)"
    ],
    "deny": [
      "Bash(rm -rf /)",
      "Bash(git push --force*)"
    ]
  }
}
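Patterns like `Bash(git *)` behave like shell-style globs. The sketch below shows one plausible evaluation order, with deny rules taking precedence; the authoritative semantics are defined by Claude Code itself, so treat this purely as a mental model:

```python
from fnmatch import fnmatchcase

def is_allowed(tool_call, allow, deny):
    """Evaluate a tool call against allow/deny glob patterns; deny rules win."""
    if any(fnmatchcase(tool_call, pattern) for pattern in deny):
        return False
    return any(fnmatchcase(tool_call, pattern) for pattern in allow)
```

Under these rules, `Bash(git status)` is allowed, while `Bash(git push --force)` is blocked even though it also matches the `Bash(git *)` allow rule.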

Hooks

Automate actions before/after tool usage:

{
  "hooks": {
    "post_write": "echo 'File modified: {file_path}'",
    "post_commit": "echo 'Committed: {commit_message}'"
  }
}

PIV Workflow

Customize the Prime → Implement → Validate workflow:

{
  "piv": {
    "prime": {
      "auto_read_docs": true,
      "docs_to_read": [".claude/docs/PRD.md", "README.md"]
    },
    "plan": {
      "output_dir": ".claude/plans/active",
      "require_user_approval": true
    },
    "execute": {
      "auto_validate_after": true,
      "move_to_completed": true
    },
    "validate": {
      "run_linting": true,
      "run_tests": true,
      "generate_report": true
    }
  }
}

Validation Settings

Configure linting, testing, and build validation:

{
  "validation": {
    "linting": {
      "backend": {
        "command": "cd backend && uv run ruff check ."
      },
      "frontend": {
        "command": "cd frontend && npm run lint"
      }
    },
    "testing": {
      "backend": {
        "unit": {
          "command": "cd backend && uv run pytest tests/unit -v",
          "coverage_threshold": 80
        }
      }
    }
  }
}

OpenShift Configuration

Multi-environment deployment configuration:

{
  "openshift": {
    "clusters": {
      "dev": {
        "url": "https://api.dev-cluster.openshift.com:6443",
        "project": "myapp-dev"
      },
      "prod": {
        "url": "https://api.prod-cluster.openshift.com:6443",
        "project": "myapp-prod",
        "require_approval": true
      }
    },
    "resources": {
      "backend": {
        "requests": { "memory": "256Mi", "cpu": "100m" },
        "limits": { "memory": "512Mi", "cpu": "500m" }
      }
    }
  }
}

DevSecOps Settings

Security scanning configuration:

{
  "devsecops": {
    "scanning": {
      "sast": {
        "enabled": true,
        "tool": "semgrep",
        "fail_on": ["high", "critical"]
      },
      "dependency_scan": {
        "enabled": true,
        "fail_on": ["high", "critical"]
      },
      "secrets_scan": {
        "enabled": true,
        "fail_on_detection": true
      }
    }
  }
}

Brownfield Analysis

Configure codebase analysis for existing projects:

{
  "brownfield_analysis": {
    "enabled": true,
    "scan_paths": ["backend/", "frontend/", "openshift/"],
    "exclude_paths": ["node_modules/", "__pycache__/"],
    "generate_docs": ["PRD", "CLAUDE", "ARCHITECTURE", "API"],
    "use_ai_inference": true,
    "analyze_git_history": true
  }
}

Custom Commands

Define shortcuts for common operations:

{
  "custom_commands": {
    "start_backend": "cd backend && uv run uvicorn app.main:app --reload",
    "start_frontend": "cd frontend && npm run dev",
    "deploy_dev": "oc apply -k openshift/overlays/dev"
  }
}

Customizing for Your Project

  1. Copy template settings:
cp .claude/config/settings.json .claude/config/settings.local.json
  2. Update environment variables:
{
  "env": {
    "PROJECT_ROOT": ".",
    "GITLAB_TOKEN": "${GITLAB_PERSONAL_ACCESS_TOKEN}",
    "DATABASE_URL": "postgresql://localhost:5432/myapp"
  }
}
  3. Configure your tech stack:
{
  "validation": {
    "linting": {
      "backend": {
        "command": "cd backend && poetry run pylint src/"
      }
    }
  }
}
  4. Set security thresholds:
{
  "devsecops": {
    "scanning": {
      "dependency_scan": {
        "fail_on": ["critical"]  // Only block on critical
      }
    }
  }
}

Settings Best Practices

  1. Use environment variables for secrets:
{
  "env": {
    "GITLAB_TOKEN": "${GITLAB_PERSONAL_ACCESS_TOKEN}"
  }
}
  2. Configure per environment:

    • settings.json - Template/defaults
    • settings.local.json - Local overrides (gitignored)
    • settings.prod.json - Production config
  3. Set appropriate thresholds:

    • Coverage: 80% for unit, 70% for integration
    • Security: Fail on critical/high only
    • Performance: Adjust timeouts for CI/local
  4. Enable only what you need:

{
  "devsecops": {
    "enabled": true
  },
  "notifications": {
    "enabled": false  // Disable if not using
  }
}

Reference Documentation

Structure

.claude/
├── PRD.md                              # Product requirements
├── settings.json                       # Claude configuration
└── reference/                          # Best practices docs
    ├── backend/
    │   └── fastapi-best-practices.md  # Backend patterns
    ├── postgres-best-practices.md     # Database
    ├── frontend/
    │   ├── react-frontend-best-practices.md  # React + TypeScript
    │   └── flutter-best-practices.md         # Flutter + Dart
    ├── security/
    │   └── cyberark-iam-guide.md      # CyberArk IAM for Givaudan
    ├── testing-and-logging.md         # Testing
    └── deployment-best-practices.md   # Deployment

Creating Reference Docs

Purpose: Teach the AI your project's patterns and conventions.

When to create:

  • Starting a new project
  • Onboarding to an existing codebase
  • After establishing patterns

What to include:

  1. Technology Overview

    # FastAPI Best Practices
    
    ## When to Use
    - High-performance async APIs
    - Type-safe Python
    - Auto-generated OpenAPI docs
  2. Code Patterns

    ## Router Pattern
    
    ```python
    from fastapi import APIRouter, Depends

    router = APIRouter(prefix="/items", tags=["items"])

    @router.get("/", response_model=ItemListResponse)
    def list_items(db: Session = Depends(get_db)):
        ...
    ```

  3. Common Gotchas

    ## Gotchas
    
    - Always use `Depends(get_db)` for database sessions
    - Enable foreign keys in SQLite: `PRAGMA foreign_keys=ON`
    - Use `Mapped[type]` for SQLAlchemy 2.0 models
  4. Examples

    ## Examples
    
    ### Creating an endpoint
    ...
    
    ### Adding validation
    ...
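The SQLite gotcha in item 3 is easy to demonstrate with the standard library: foreign key enforcement is off by default and must be enabled per connection, otherwise inserts that violate referential integrity succeed silently.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite defaults to OFF per connection
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY)")
conn.execute(
    "CREATE TABLE items (id INTEGER PRIMARY KEY, "
    "user_id INTEGER REFERENCES users(id))"
)
try:
    conn.execute("INSERT INTO items (user_id) VALUES (999)")  # no such user
    fk_enforced = False  # would happen if the PRAGMA were omitted
except sqlite3.IntegrityError:
    fk_enforced = True
```

With SQLAlchemy, the same PRAGMA is typically issued in a `connect` event listener so every pooled connection gets it.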

MCP Servers

What are MCP Servers?

Model Context Protocol servers extend Claude's capabilities by providing access to external tools and data sources.

Available MCP Servers

Playwright MCP

Purpose: Browser automation and E2E testing

Installation:

claude mcp add playwright npx @playwright/mcp@latest

Usage:

"Use Playwright to test the login flow"
→ AI uses Playwright MCP to control browser

Capabilities:

  • Navigate pages
  • Click elements
  • Fill forms
  • Take screenshots
  • Run assertions

GitLab MCP

Purpose: Issue tracking, MR (Merge Request) management, repository operations

Installation:

claude mcp add gitlab npx @modelcontextprotocol/server-gitlab

Setup:

export GITLAB_TOKEN=glpat-your_token_here
export GITLAB_URL=https://gitlab.com  # or your self-hosted GitLab instance

Usage:

"Create a GitLab issue for the authentication bug"
"List all open merge requests"
"Close issue #42 with a comment"
"Assign MR !15 to @username"

PostgreSQL MCP

Purpose: Database queries, schema inspection, migrations

Installation:

claude mcp add postgres npx @modelcontextprotocol/server-postgres

Setup:

export DATABASE_URL=postgresql://user:password@localhost:5432/mydb

Usage:

"Show me the schema for the items table"
"Run a query to find items with counts > 30"
"Generate a migration to add email column to users"

Custom MCP Servers

You can create custom MCP servers for your specific needs:

// custom-mcp-server.ts
// Illustrative sketch — check the MCP SDK documentation for current imports and APIs
import { MCPServer } from '@modelcontextprotocol/sdk';

const server = new MCPServer({
  name: 'custom-tools',
  version: '1.0.0',
  tools: [
    {
      name: 'deploy_to_production',
      description: 'Deploy application to production',
      inputSchema: { /* ... */ },
      handler: async (params) => {
        // Deployment logic
      }
    }
  ]
});

Testing Strategy

Testing Pyramid

         /\
        /  \  ← 10% E2E Tests (agent-browser)
       /────\
      /      \  ← 20% Integration Tests
     /────────\
    /          \  ← 70% Unit Tests
   /────────────\

Unit Tests

Purpose: Test individual functions and components

Tools: pytest (Python), vitest (JavaScript)

Example:

# tests/unit/test_streak.py
def test_calculate_streak_consecutive_days():
    completions = [
        Completion(completed_date="2025-01-15", status="completed"),
        Completion(completed_date="2025-01-14", status="completed"),
        Completion(completed_date="2025-01-13", status="completed"),
    ]
    assert calculate_streak(completions, date(2025, 1, 15)) == 3

When to use:

  • Testing business logic
  • Testing utilities and helpers
  • TDD (test-driven development)
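A matching implementation sketch for the function under test (the `Completion` shape and `calculate_streak` signature are assumed from the test fixture above):

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class Completion:
    completed_date: str  # ISO date string, as in the test fixtures
    status: str

def calculate_streak(completions, today):
    """Count consecutive 'completed' days ending at `today`."""
    done = {
        date.fromisoformat(c.completed_date)
        for c in completions
        if c.status == "completed"
    }
    streak = 0
    day = today
    while day in done:
        streak += 1
        day -= timedelta(days=1)
    return streak
```

Walking backwards from `today` stops at the first missing day, which is exactly the behavior the unit test pins down.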

Integration Tests

Purpose: Test API endpoints with real database

Tools: pytest + TestClient, vitest + fetch

Example:

# tests/integration/test_api_items.py
def test_create_item_returns_201(client):
    response = client.post("/api/items", json={"name": "Sample Task"})
    assert response.status_code == 201
    assert response.json()["name"] == "Sample Task"

When to use:

  • Testing API contracts
  • Testing database operations
  • Testing error handling

E2E Tests (agent-browser)

Purpose: Test full user journeys

Tools: agent-browser

Installation:

npm install -g agent-browser

Example:

// tests/e2e/item-flow.test.js
describe('Item Creation Flow', () => {
  test('User can create and complete an item', async () => {
    await agentBrowser.test(`
      1. Navigate to localhost:5173
      2. Click "Add Item" button
      3. Fill in "Sample Task" as item name
      4. Fill in "Complete sample workflow" as description
      5. Select green color
      6. Click "Save"
      7. Verify "Sample Task" appears in item list
      8. Click "Complete" button next to "Sample Task"
      9. Verify item shows "Completed today"
      10. Verify counter shows "1"
    `);
  });
});

When to use:

  • Testing critical user journeys
  • Visual regression testing
  • UAT automation
  • Cross-browser testing

Running Tests

# Unit tests
cd backend && uv run pytest tests/unit/ -v

# Integration tests
cd backend && uv run pytest tests/integration/ -v

# E2E tests with agent-browser
agent-browser test tests/e2e/

# All tests via validation skill
/validation:validate

# Regression testing
/regression-test

Best Practices

1. Always Prime Before Planning

Don't:

You: "Implement user authentication"

Do:

You: /core_piv_loop:prime
Agent: Context loaded ✓
You: /core_piv_loop:plan-feature "Implement user authentication"

Why: Agent needs context to create accurate plans.


2. Review Plans Before Execution

Don't:

Agent: Plan created. Executing now...

Do:

Agent: Plan created: .claude/plans/auth.md
       Please review and approve.
You: [Reviews plan]
You: Looks good, proceed
You: /core_piv_loop:execute

Why: Plans might make assumptions that need correction.


3. Validate After Every Feature

Don't:

Agent: Implementation complete!
You: /commit

Do:

Agent: Implementation complete!
You: /validation:validate
Agent: All tests passed ✓
You: /commit

Why: Catch issues early before they compound.


4. Use Reference Docs Consistently

Don't:

# Inconsistent patterns across codebase
# Some endpoints use Depends(get_db), others use SessionLocal()

Do:

# Document in .claude/reference/backend/fastapi-best-practices.md
## Database Sessions

Always use dependency injection:
```python
@router.get("/")
def endpoint(db: Session = Depends(get_db)):
    ...
```

Why: Agents will follow documented patterns consistently.


5. Leverage agent-browser for UI Testing

Don't:

# Manual testing after every UI change
# OR complex Selenium scripts

Do:

# Natural language test scenarios
/ui-test

# Or specific tests
agent-browser test "
  Test login with invalid credentials shows error message
"

Why: Faster, more maintainable, easier to write.


6. Use RCA Before Fixing Bugs

Don't:

You: "Fix the bug in issue #42"
Agent: [Randomly modifies code hoping to fix it]

Do:

You: /gitlab_bug_fix:rca 42
Agent: RCA document created ✓
You: [Reviews root cause]
You: /gitlab_bug_fix:implement-fix 42

Why: Systematic analysis prevents band-aid fixes.


7. Commit Atomically and Often

Don't:

[Implements 5 features]
You: /commit
Agent: "feat: add multiple features"

Do:

[Implements feature 1]
You: /commit
Agent: "feat(auth): add login endpoint"

[Implements feature 2]
You: /commit
Agent: "feat(auth): add logout endpoint"

Why: Easier to review, revert, and understand history.
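
The commit subjects above follow the Conventional Commits shape (`type(scope): summary`). A rough checker for that format — the type list here is a common subset, not the full specification, and atomicity is of course a separate judgment:

```python
import re

# Rough Conventional Commits subject-line pattern: type, optional (scope),
# a colon and space, then a non-empty summary.
PATTERN = re.compile(r"^(feat|fix|docs|refactor|test|chore)(\([a-z0-9-]+\))?: .+")

assert PATTERN.match("feat(auth): add login endpoint")
assert PATTERN.match("fix: handle empty password")
assert not PATTERN.match("add multiple features")  # missing type prefix
```

A pattern like this can back a commit-msg hook or CI check so that `/commit` output stays machine-parseable for changelogs.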


8. Use Appropriate Agent Thoroughness

Don't:

"Explore the entire codebase to find where we use red color"
→ Spawns very thorough Explore agent (slow)

Do:

"Use quick Explore agent to find CSS files with red color"
→ Spawns quick Explore agent (fast)

Why: Balance speed and thoroughness based on need.


Example Project

Project: Template Project

A demonstration of the AI Development Framework in action.

Tech Stack (configurable):

  • Backend: Python 3.11, FastAPI, SQLAlchemy, PostgreSQL
  • Frontend:
    • Web: React 18, Vite, TanStack Query, Tailwind CSS
    • Mobile/Cross-platform: Flutter 3.x, Dart, Riverpod
  • Testing: pytest, vitest/flutter_test, agent-browser

Quick Start

# 1. Clone
git clone https://github.com/coleam00/template-project
cd template-project

# 2. Install agent-browser
npm install -g agent-browser

# 3. Prime Claude
/core_piv_loop:prime

# 4. Start servers
/init-project

# 5. Run validation
/validation:validate

# 6. Run UI tests
/ui-test

Project Structure

template-project/
├── CLAUDE.md                               # 📘 Complete framework guide (start here!)
├── README.md                               # Project documentation
├── .claude/
│   ├── config/                             # Configuration
│   │   └── settings.json                   # Claude settings
│   ├── docs/                               # Core documentation
│   │   ├── PRD.md                          # Product requirements
│   │   ├── CLAUDE_BEST_PRACTICES_2026.md   # Current best practices (Jan 2026)
│   │   ├── CLAUDE_COMMANDS.md              # Complete skills reference & guide
│   │   └── ARCHITECTURE.md                 # System architecture (optional)
│   ├── reference/                          # Best practices (organized by domain)
│   │   ├── backend/                        # Backend patterns
│   │   │   └── postgres-best-practices.md
│   │   ├── frontend/                       # Frontend patterns
│   │   │   ├── react-best-practices.md
│   │   │   ├── flutter-best-practices.md
│   │   │   └── vuejs-best-practices.md
│   │   ├── devops/                         # DevOps patterns
│   │   │   └── deployment-best-practices.md
│   │   ├── gitlab/                         # GitLab best practices
│   │   │   └── gitlab-best-practices.md
│   │   └── testing/                        # Testing patterns
│   │       └── testing-and-logging.md
│   ├── skills/                             # Command definitions (numbered by priority)
│   │   ├── 01-piv-loop/                    # Core workflow (highest priority)
│   │   │   ├── prime.md
│   │   │   ├── plan-feature.md
│   │   │   └── execute.md
│   │   ├── 02-validation/                  # Quality checks
│   │   │   ├── validate.md
│   │   │   ├── code-review.md
│   │   │   ├── code-review-fix.md
│   │   │   ├── execution-report.md
│   │   │   └── system-review.md
│   │   ├── 03-bug-fixing/                  # Problem solving
│   │   │   ├── rca.md
│   │   │   └── implement-fix.md
│   │   ├── 04-testing/                     # Testing skills (agent-browser)
│   │   │   ├── ui-test.md
│   │   │   ├── uat-test.md
│   │   │   └── regression-test.md
│   │   ├── 05-utilities/                   # Helper commands
│   │   │   ├── commit.md
│   │   │   ├── init-project.md
│   │   │   ├── create-prd.md
│   │   │   ├── analyze-brownfield.md
│   │   │   └── ui-ux-review.md
│   │   └── 06-devsecops/                   # Security scanning
│   │       ├── sast.md
│   │       ├── dependency-scan.md
│   │       ├── secrets-scan.md
│   │       ├── container-scan.md
│   │       └── compliance-check.md
│   ├── plans/                              # Implementation plans
│   │   ├── active/                         # Currently being worked on
│   │   ├── completed/                      # Finished implementations
│   │   └── templates/                      # Plan templates
│   ├── rca/                                # Root cause analysis
│   │   ├── active/                         # Open issues
│   │   └── resolved/                       # Fixed issues
│   ├── reports/                            # Generated reports
│   │   ├── validation/                     # Validation reports
│   │   ├── code-review/                    # Code review reports
│   │   └── execution/                      # Execution reports
│   └── scratch/                            # Temporary notes and working files
├── backend/
│   ├── app/
│   │   ├── main.py                         # FastAPI entry
│   │   ├── database.py                     # PostgreSQL pool
│   │   ├── models.py                       # SQLAlchemy models
│   │   ├── schemas.py                      # Pydantic schemas
│   │   └── routers/                        # API endpoints
│   └── tests/                              # pytest tests
├── frontend/
│   ├── src/
│   │   ├── features/                       # Feature modules
│   │   ├── components/                     # React components
│   │   └── lib/                            # Utilities
│   └── tests/                              # vitest tests
├── tests/
│   └── e2e/                                # agent-browser tests
└── README.md                               # This file

Development Workflow Example

Scenario: Add email notifications for user milestones

# 1. Prime
/core_piv_loop:prime
# Agent: Context loaded ✓

# 2. Plan
/core_piv_loop:plan-feature "Add email notifications when user reaches milestones"
# Agent: Plan created → .claude/plans/email-notifications.md

# 3. Review Plan
# [Review the generated plan]

# 4. Execute
/core_piv_loop:execute
# Agent: Step 1/6: Create email service... ✓
#        Step 2/6: Add SMTP configuration... ✓
#        ...
#        All steps complete ✓

# 5. Validate
/validation:validate
# Agent: ✓ Linting passed
#        ✓ Tests: 58/58 passed
#        ✓ Build successful

# 6. UI Test
/ui-test
# Agent: ✓ Email notification shows in UI
#        ✓ Settings page allows email toggle

# 7. Commit
/commit
# Agent: feat(notifications): add email alerts for 7-day streaks

Available Skills

# Core PIV Loop
/core_piv_loop:prime              # Load project context
/core_piv_loop:plan-feature       # Plan implementation
/core_piv_loop:execute            # Execute plan

# Validation
/validation:validate              # Full validation suite
/validation:code-review           # Technical code review
/validation:code-review-fix       # Fix review issues
/validation:execution-report      # Generate execution report
/validation:system-review         # Review system architecture

# Bug Fixing
/gitlab_bug_fix:rca              # Root cause analysis
/gitlab_bug_fix:implement-fix    # Implement fix

# Testing
/ui-test                         # UI functionality tests
/uat-test                        # User acceptance tests
/regression-test                 # Full regression suite

# DevSecOps
/devsecops:sast                  # Static application security testing
/devsecops:dependency-scan       # Vulnerable dependency detection
/devsecops:secrets-scan          # Hardcoded secrets detection
/devsecops:container-scan        # Container image vulnerabilities
/devsecops:compliance-check      # Standards compliance (OWASP, CIS)

# Utilities
/commit                          # Create git commit
/init-project                    # Setup and start servers
/create-prd                      # Generate PRD (greenfield)
/brownfield:analyze              # Analyze existing codebase (brownfield)
/ui-ux-review                    # Review UI/UX design and accessibility

Summary

The AI Development Framework provides:

  1. Structured workflow via the PIV loop (Prime → Implement → Validate)
  2. 23 specialized skills across 6 categories:
    • Core PIV Loop (3 skills)
    • Validation (5 skills)
    • Bug Fixing (2 skills)
    • Testing (3 skills)
    • DevSecOps (5 skills)
    • Utilities (5 skills)
  3. Brownfield & Greenfield support:
    • Analyze existing codebases and auto-generate documentation
    • Create new projects from scratch with PRD generation
  4. Comprehensive DevSecOps integration:
    • SAST (Static Application Security Testing)
    • Dependency vulnerability scanning
    • Secrets detection in code and git history
    • Container image security scanning
    • Compliance checking (OWASP, CIS, PCI-DSS, GDPR)
  5. OpenShift deployment with production-grade configurations
  6. UI/UX review with accessibility validation (WCAG 2.1 AA)
  7. GitLab integration via MCP for issues, MRs, and CI/CD
  8. agent-browser for AI-driven UI testing without Selenium code
  9. Reference documentation organized by domain (backend, frontend, devops, gitlab, testing)
  10. 604 lines of configuration for complete framework customization
  11. Specialized agents for different task types (General-Purpose, Plan, Explore, Bash)
  12. Comprehensive validation at every step with automated reporting

Key Principles:

  • Plan before implementing - Avoid coding in the dark
  • Validate continuously - Catch issues early
  • Document patterns - Teach the AI your conventions
  • Test thoroughly - Unit → Integration → E2E
  • Commit atomically - Clean, reviewable history

Getting Help:

  • 📘 Primary Guide: START HERE - CLAUDE.md - Complete guide to using Claude Code with this framework (architecture, workflows, all 23 skills, best practices, troubleshooting)
  • 📅 Best Practices 2026: .claude/docs/CLAUDE_BEST_PRACTICES_2026.md - Current best practices, what's new, modern workflows, security-first approach (Updated: Jan 29, 2026)
  • 📖 Commands Reference: .claude/docs/CLAUDE_COMMANDS.md - Complete guide to all 23 skills, workflows, and troubleshooting
  • Reference Documentation: Read best practices in .claude/reference/ (organized by domain: backend/, frontend/, devops/, gitlab/, testing/, security/)
  • 🔒 CyberArk IAM Guide: .claude/reference/security/cyberark-iam-guide.md - Givaudan-specific guide for secrets management, credential vault access, and secure application development
  • Example Plans: Review completed plans in .claude/plans/completed/
  • RCA Documents: Check root cause analyses in .claude/rca/
  • Reports: View validation and security reports in .claude/reports/
  • Configuration: Review comprehensive settings in .claude/config/settings.json (604 lines)
  • Skills Documentation: Check .claude/skills/ for individual skill definitions (23 skills across 6 categories)
  • Brownfield Projects: Use /brownfield:analyze to generate documentation from existing code
  • Security: Run DevSecOps scans before deployment (SAST, dependency, secrets, container, compliance)
  • UI/UX: Use /ui-ux-review to validate design and accessibility
  • Quick Commands:
    • /validation:validate - When in doubt about code quality
    • /core_piv_loop:prime - To refresh context
    • /devsecops:sast - To check for security vulnerabilities
    • /ui-test - To validate UI functionality


Built with ❤️ using the AI Development Framework
