A comprehensive guide to AI-driven software development using Claude Code, agents, skills, and the PIV (Prime, Implement, Validate) methodology.
This framework demonstrates how to leverage AI agents for planning, implementing, testing, and validating software projects—transforming how you build applications from requirements to deployment.
📘 CLAUDE.md - Complete Framework Guide (Start Here!)
- Overview
- Quick Start
- Framework Hierarchy
- Framework Components
- The PIV Loop
- Getting Started
- Project Workflow
- Skills (Commands)
- DevSecOps
- Agents & Subagents
- Configuration Settings
- Reference Documentation
- MCP Servers
- Testing Strategy
- Best Practices
- Example Project
This AI Development Framework enables you to:
- Plan comprehensively with AI-assisted feature design and architecture
- Implement systematically using step-by-step execution plans
- Validate rigorously with automated testing, linting, and code review
- Secure proactively with integrated DevSecOps scanning and compliance checks
- Test thoroughly with AI-driven UI/UX testing using agent-browser
- Iterate intelligently with root cause analysis and systematic fixes
- Document automatically with brownfield codebase analysis and PRD generation
Human + AI Collaboration: You provide direction and review; AI handles implementation details, testing, validation, and security. The result is faster development with higher quality and built-in security.
- ✅ 23 Skills across 6 categories (PIV Loop, Validation, Bug Fixing, Testing, DevSecOps, Utilities)
- ✅ Brownfield Support - Reverse-engineer documentation from existing codebases
- ✅ DevSecOps Suite - SAST, dependency scanning, secrets detection, container scanning, compliance
- ✅ OpenShift Ready - Production deployment with comprehensive configs
- ✅ GitLab Integration - MCP server for issues, MRs, CI/CD
- ✅ UI/UX Review - Accessibility and design pattern validation
- ✅ Agent-Browser - AI-driven UI testing (no Selenium/Playwright code)
- ✅ Comprehensive Settings - 604 lines of configuration for complete customization
📘 Primary Guide: CLAUDE.md
Complete guide to using Claude Code with this framework:
- Framework overview & architecture
- Quick start guide (5 minutes)
- All 23 skills documented
- 5 complete workflows
- Technology stack reference
- Best practices & performance metrics
- Troubleshooting & tips
📅 Latest: Claude Code Best Practices (January 2026)
🆕 .claude/docs/CLAUDE_BEST_PRACTICES_2026.md
Current best practices as of January 29, 2026:
- What's new in 2026 (Sonnet 4.5, Opus 4.5, brownfield analysis)
- Modern workflows (85-92% faster development)
- Security-first approach (95% of vulnerabilities caught pre-commit)
- Model selection guide (40-60% cost savings)
- Migration guide from 2024-2025 practices
📖 Complete Commands Reference
.claude/docs/CLAUDE_COMMANDS.md
This document includes:
- All 23 framework skills with examples
- Common workflows (new feature, bug fix, deployment)
- Tips & tricks for efficient development
- Troubleshooting guide
- MCP server setup
- Quick reference card
# 🧠 Load project context
/core_piv_loop:prime
# 📋 Plan a new feature
/core_piv_loop:plan-feature "Add email notifications"
# ⚙️ Execute the plan
/core_piv_loop:execute
# ✅ Validate code
/validation:validate
# 🔒 Security scan
/devsecops:sast
# 📦 Commit changes
/commit
# 🏗️ Analyze existing codebase (brownfield)
/brownfield:analyze

See CLAUDE_COMMANDS.md for detailed documentation of all skills.
PIV stands for Prime → Implement → Validate - the core methodology for systematic software development with AI assistance.
┌───────────────────────────────────────────────┐
│                                               │
│  ┌────────┐    ┌───────────┐    ┌──────────┐  │
│  │ PRIME  │ -> │ IMPLEMENT │ -> │ VALIDATE │  │
│  └────────┘    └───────────┘    └──────────┘  │
│      ^                               |        │
│      └───────────────────────────────┘        │
│               (Loop/Iterate)                  │
└───────────────────────────────────────────────┘
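The loop above can be sketched as control flow. This is only an illustration of the iterate-until-valid idea; in practice each phase is a skill invocation, not a function call, and the validator below is hypothetical:

```python
# Minimal sketch of one PIV pass (illustrative only; the real phases are the
# /prime, /plan-feature, /execute, and /validate skills, not function calls).
def piv_iteration(task: str, validate) -> list[str]:
    """One Prime -> Implement -> Validate pass, iterating until checks pass."""
    events = [f"prime: loaded context for {task!r}"]   # PRIME
    change = f"implemented {task}"                     # IMPLEMENT
    events.append(change)
    attempts = 0
    while not validate(change) and attempts < 3:       # VALIDATE, then iterate
        attempts += 1
        change = f"fixed {task} (attempt {attempts})"
        events.append(change)
    return events

# Hypothetical validator: passes once a fix has been applied.
log = piv_iteration("email notifications", validate=lambda c: c.startswith("fixed"))
print(log)
```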
Load context and understand what you're building
- Read project structure and conventions
- Load reference documentation
- Understand tech stack and patterns
- Identify critical files and dependencies
Command: /core_piv_loop:prime
What Prime Does:
- Detects project state (new, existing, or primed)
- Creates documentation (PRD, CLAUDE.md if missing)
- Analyzes structure using git and file system commands
- Reads core files:
- CLAUDE.md (project instructions)
- .claude/docs/PRD.md (requirements)
- README.md (overview)
- Reference docs in .claude/reference/ (backend/, frontend/, devops/, gitlab/, testing/)
- Identifies key files (entry points, configs, models)
- Checks current state (git status, recent commits, existing plans)
- Recommends next steps
Tools Used (No separate agents spawned):
- Bash: Git commands, file listing, directory tree
- Read: Documentation and source files
- Glob: Pattern-based file discovery
- Executed directly by Claude (not via subagents)
Example:
/core_piv_loop:prime
# Agent: Analyzing project structure...
# ✓ Found CLAUDE.md and PRD.md
# ✓ Read 5 reference documents
# ✓ Analyzed 42 source files
# ✓ Reviewed recent commits
# Context loaded ✓
# Ready to work on: Template Project
#
# Next steps:
# 1. Continue implementing auth feature from .claude/plans/active/auth.md
# 2. Or run /plan-feature for new feature

Execute plans systematically, step by step
Has two sub-phases:
- Plan: Design the implementation approach (before coding)
- Execute: Implement the plan step-by-step
Commands:
/core_piv_loop:plan-feature "Feature description"
/core_piv_loop:execute
2a. Plan Sub-Phase
What Plan Does:
- Understands the feature (problem, value, complexity)
- Gathers codebase intelligence:
- Analyzes project structure and patterns
- Searches for similar implementations
- Identifies dependencies and integration points
- Reviews testing patterns
- Researches externally (documentation, best practices)
- Designs implementation (step-by-step plan)
- Creates validation strategy (tests, checks)
- Writes plan document to .claude/plans/
- Requests your approval before execution
Agents Used:
- Plan Agent (main): Read-only analysis, no code changes
- Explore Agent (spawned): Fast codebase pattern searches
- General-Purpose Agent (spawned): External research, documentation
- Can spawn multiple agents in parallel for comprehensive analysis
2b. Execute Sub-Phase
What Execute Does:
- Reads approved plan from .claude/plans/
- Implements step-by-step:
- Creates/modifies files per plan
- Follows documented patterns
- Adds tests as specified
- Validates each step before proceeding
- Handles errors and rollbacks if needed
- Creates git commits for logical chunks
Agents Used:
- General-Purpose Agent (main): Full toolkit access (Read, Write, Edit)
- Bash Agent (spawned): Running tests, git operations
- Executes sequentially, validating each step
Example:
# Plan first (spawns multiple agents for analysis)
/core_piv_loop:plan-feature "Add user authentication"
# Agent: Spawning Explore agent for pattern analysis...
# Spawning research agent for JWT best practices...
# Analyzing 23 relevant files...
# Plan created → .claude/plans/auth.md
#
# Please review and approve before execution.
# Review plan, then execute
/core_piv_loop:execute
# Agent: Reading plan: auth.md
# Step 1/8: Create User model... ✓
# Step 2/8: Implement JWT utilities... ✓
# Running tests after step 2... ✓

Ensure code quality, correctness, and no regressions
- Linting and formatting
- Type checking
- Unit tests
- Integration tests
- UI tests (with agent-browser)
- Build verification
Command: /validation:validate
What Validate Does:
- Linting: Runs ruff, eslint, prettier
- Type checking: mypy (Python), tsc (TypeScript)
- Unit tests: pytest, vitest with coverage reports
- Integration tests: API endpoints, database operations
- UI tests: agent-browser scenarios (if configured)
- Build: Frontend and backend compilation
- Generates report: Summary of all checks
Agents Used:
- Bash Agent (main): Executes test commands, linters, builds
- General-Purpose Agent: Analyzes results, generates report
- Can run multiple test suites in parallel for speed
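The parallel execution of test suites can be sketched with standard-library tools. This illustrates the pattern, not the framework's internals; the suite commands below are stand-ins for real pytest, vitest, and ruff invocations:

```python
# Sketch of the Validate phase's parallel suite execution: each suite is an
# independent shell command, so suites can run concurrently and fail
# independently. (Illustrative; the commands are placeholders.)
import subprocess
import sys
from concurrent.futures import ThreadPoolExecutor

def run_suite(name: str, cmd: list[str]) -> tuple[str, bool, str]:
    """Run one suite; return its name, pass/fail, and combined output."""
    result = subprocess.run(cmd, capture_output=True, text=True)
    return name, result.returncode == 0, result.stdout + result.stderr

# Placeholder suites; a real setup would invoke pytest, vitest, ruff, etc.
suites = {
    "unit": [sys.executable, "-c", "print('unit ok')"],
    "integration": [sys.executable, "-c", "print('integration ok')"],
    "lint": [sys.executable, "-c", "print('lint ok')"],
}

with ThreadPoolExecutor() as pool:
    results = list(pool.map(lambda kv: run_suite(*kv), suites.items()))

for name, passed, _ in results:
    print(("✓" if passed else "✗"), name)
```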
Related Validation Skills:
- /validation:code-review - AI code review (uses General-Purpose Agent)
- /validation:code-review-fix - Auto-fix issues (uses General-Purpose Agent + Edit tool)
- /ui-test - UI testing only (uses Bash Agent for agent-browser)
- /uat-test - User acceptance tests (uses Bash Agent for agent-browser)
- /regression-test - Full regression suite (spawns multiple Bash Agents in parallel)
Example:
/validation:validate
# Agent: Running validation suite...
# [Bash] Running ruff linter... ✓
# [Bash] Running pytest (parallel)... ✓
# [Bash] Running vitest (parallel)... ✓
# [Bash] Building frontend... ✓
#
# ✓ Linting passed (0 errors)
# ✓ Type checking passed
# ✓ Unit tests: 52/52 passed (95% coverage)
# ✓ Integration tests: 15/15 passed
# ✓ Build successful
#
# All validation checks passed! ✓

Traditional Approach ❌:
Code → Test → Debug → More Code → Test → Debug → ...
(Chaotic, reactive, error-prone)
PIV Approach ✅:
Prime → Plan → Execute → Validate → (Iterate)
(Systematic, proactive, quality-focused)
| Benefit | How PIV Helps |
|---|---|
| Fewer Bugs | Validation catches issues early |
| Better Design | Planning phase prevents rushed implementations |
| Faster Development | AI handles implementation details while you guide |
| Higher Quality | Systematic validation ensures standards |
| Easier Debugging | Clear context from Prime phase |
| Better Documentation | Plans serve as implementation docs |
# PRIME: Load project context
/core_piv_loop:prime
# IMPLEMENT: Plan feature
/core_piv_loop:plan-feature "Add email notifications"
# Review plan → Approve
# IMPLEMENT: Execute
/core_piv_loop:execute
# VALIDATE: Run full validation
/validation:validate
# VALIDATE: UI testing
/ui-test
# If issues found:
/validation:code-review-fix
# Commit when all pass
/commit

PIV is a loop, not a linear process:
- Prime → Understand current state
- Implement → Make changes
- Validate → Verify changes
- Loop back to Prime → Update context for next change
Each iteration builds on the previous, maintaining context and quality throughout development.
Here's a quick reference showing which agents are used in each phase:
| Phase | Primary Agent | Spawned Agents | Tools Used |
|---|---|---|---|
| Prime | Claude (direct) | None | Bash (git, ls, tree), Read (docs, code), Glob (file patterns) |
| Plan | Plan Agent | Explore Agent, General-Purpose Agent (parallel execution) | Read (codebase analysis), Grep (pattern search), WebFetch (research) |
| Execute | General-Purpose Agent | Bash Agent | Read, Write, Edit, Bash (tests, git) |
| Validate | Bash Agent | Multiple Bash Agents (parallel testing) | Bash (test runners), Read (reports) |
Key Points:
- Prime runs directly without spawning subagents (fastest)
- Plan spawns multiple agents in parallel for comprehensive analysis
- Execute runs sequentially with validation at each step
- Validate can run test suites in parallel for speed
Agent Capabilities Summary:
| Agent Type | Can Read | Can Write | Can Execute | Best For |
|---|---|---|---|---|
| General-Purpose | ✅ | ✅ | ✅ | Implementation, complex tasks |
| Plan | ✅ | ❌ | ❌ | Read-only analysis, planning |
| Explore | ✅ | ❌ | ❌ | Fast codebase searches |
| Bash | Limited | ❌ | ✅ | Commands, tests, git operations |
This section provides a complete hierarchy of processes, commands, skills, references, agents, and when to use each component.
┌─────────────────────────────────────────────────────────┐
│ AI DEVELOPMENT FRAMEWORK │
│ │
│ Start Here → What do I need to do? │
└─────────────────────────────────────────────────────────┘
|
┌─────────────────┼─────────────────┐
| | |
┌───▼───┐ ┌───▼─────┐ ┌───▼──────┐
│ PRIME │ │IMPLEMENT│ │ VALIDATE │
└───┬───┘ └───┬─────┘ └───┬──────┘
| | |
└─────────────────┴─────────────────┘
PIV LOOP
📋 AI DEVELOPMENT FRAMEWORK
│
├── 🔄 PROCESSES (PIV Loop)
│ │
│ ├── 1️⃣ PRIME
│ │ ├── Purpose: Load context & understand project
│ │ ├── Command: /core_piv_loop:prime
│ │ ├── Agents: Claude (direct), No subagents
│ │ ├── Tools: Bash, Read, Glob
│ │ ├── References Read:
│ │ │ ├── CLAUDE.md (project instructions)
│ │ │ ├── .claude/docs/PRD.md (requirements)
│ │ │ ├── README.md (overview)
│ │ │ └── .claude/reference/* (all best practices)
│ │ └── When to Use:
│ │ ├── Starting work on project
│ │ ├── After being away from project
│ │ ├── After context window cleared
│ │ └── Before planning new features
│ │
│ ├── 2️⃣ IMPLEMENT
│ │ │
│ │ ├── 2a. PLAN Phase
│ │ │ ├── Purpose: Design implementation approach
│ │ │ ├── Command: /core_piv_loop:plan-feature "description"
│ │ │ ├── Agents:
│ │ │ │ ├── Plan Agent (primary, read-only)
│ │ │ │ ├── Explore Agent (spawned, parallel)
│ │ │ │ └── General-Purpose Agent (spawned, parallel)
│ │ │ ├── Tools: Read, Grep, Glob, WebFetch
│ │ │ ├── References Read:
│ │ │ │ ├── CLAUDE.md (conventions)
│ │ │ │ ├── Similar code in codebase
│ │ │ │ ├── .claude/reference/* (tech-specific patterns)
│ │ │ │ └── External docs (via WebFetch)
│ │ │ ├── Output: .claude/plans/feature-name.md
│ │ │ └── When to Use:
│ │ │ ├── Before any non-trivial feature
│ │ │ ├── When multiple approaches exist
│ │ │ ├── When architecture decisions needed
│ │ │ └── After Prime, before Execute
│ │ │
│ │ └── 2b. EXECUTE Phase
│ │ ├── Purpose: Implement approved plan
│ │ ├── Command: /core_piv_loop:execute
│ │ ├── Agents:
│ │ │ ├── General-Purpose Agent (primary)
│ │ │ └── Bash Agent (spawned for tests/git)
│ │ ├── Tools: Read, Write, Edit, Bash
│ │ ├── References Read:
│ │ │ ├── .claude/plans/feature-name.md (the plan)
│ │ │ └── .claude/reference/* (as needed)
│ │ └── When to Use:
│ │ ├── After plan approval
│ │ └── Ready to write code
│ │
│ └── 3️⃣ VALIDATE
│ ├── Purpose: Ensure quality & no regressions
│ ├── Commands:
│ │ ├── /validation:validate (full suite)
│ │ ├── /validation:code-review (AI review)
│ │ ├── /validation:code-review-fix (auto-fix)
│ │ ├── /ui-test (UI testing)
│ │ ├── /uat-test (user acceptance)
│ │ └── /regression-test (full regression)
│ ├── Agents:
│ │ ├── Bash Agent (primary, for tests)
│ │ ├── Multiple Bash Agents (parallel test suites)
│ │ └── General-Purpose (for code review)
│ ├── Tools: Bash (test runners), Read (reports)
│ ├── References Used:
│ │ ├── Test results
│ │ ├── Linter configs
│ │ └── Coverage reports
│ └── When to Use:
│ ├── After implementing features
│ ├── Before creating PRs
│ ├── After fixing bugs
│ └── Before deployment
│
├── 🎯 SKILLS / COMMANDS
│ │
│ ├── Core PIV Loop
│ │ ├── /core_piv_loop:prime
│ │ ├── /core_piv_loop:plan-feature
│ │ └── /core_piv_loop:execute
│ │
│ ├── Validation
│ │ ├── /validation:validate
│ │ ├── /validation:code-review
│ │ └── /validation:code-review-fix
│ │
│ ├── Testing
│ │ ├── /ui-test (agent-browser UI tests)
│ │ ├── /uat-test (user acceptance tests)
│ │ └── /regression-test (full suite)
│ │
│ ├── Bug Fixing
│ │ ├── /gitlab_bug_fix:rca (root cause analysis)
│ │ └── /gitlab_bug_fix:implement-fix
│ │
│ └── Utilities
│ ├── /commit (atomic git commits)
│ ├── /init-project (setup & start)
│ └── /create-prd (generate PRD)
│
├── 🤖 AGENTS
│ │
│ ├── General-Purpose Agent
│ │ ├── Capabilities: Read ✅, Write ✅, Execute ✅
│ │ ├── Used In: Execute phase, code review fixes
│ │ ├── When to Use: Implementation, complex multi-step tasks
│ │ └── Can Spawn: Any other agent
│ │
│ ├── Plan Agent
│ │ ├── Capabilities: Read ✅, Write ❌, Execute ❌
│ │ ├── Used In: Plan phase
│ │ ├── When to Use: Read-only analysis, architecture design
│ │ └── Can Spawn: Explore, General-Purpose (for research)
│ │
│ ├── Explore Agent
│ │ ├── Capabilities: Read ✅ (fast), Write ❌, Execute ❌
│ │ ├── Used In: Plan phase (pattern searching)
│ │ ├── When to Use: Fast codebase searches, pattern finding
│ │ ├── Thoroughness Levels:
│ │ │ ├── quick (basic search)
│ │ │ ├── medium (moderate exploration)
│ │ │ └── very thorough (comprehensive)
│ │ └── Can Spawn: None
│ │
│ └── Bash Agent
│ ├── Capabilities: Read (limited), Write ❌, Execute ✅
│ ├── Used In: Validate phase, Execute phase (for tests/git)
│ ├── When to Use: Running commands, tests, git operations
│ └── Can Spawn: None (pure command execution)
│
├── 📚 REFERENCES
│ │
│ ├── Project Documentation
│ │ ├── CLAUDE.md
│ │ │ ├── Purpose: AI instructions, conventions, commands
│ │ │ ├── Read By: Prime, Plan agents
│ │ │ └── When Created: During prime if missing
│ │ │
│ │ ├── .claude/docs/PRD.md
│ │ │ ├── Purpose: Product requirements, features, acceptance criteria
│ │ │ ├── Read By: Prime, Plan agents
│ │ │ └── When Created: /create-prd or during prime
│ │ │
│ │ └── README.md
│ │ ├── Purpose: Project overview, setup instructions
│ │ └── Read By: Prime agent
│ │
│ ├── Best Practices (.claude/reference/)
│ │ ├── reference/backend/fastapi-best-practices.md
│ │ │ ├── Purpose: API patterns, routing, schemas
│ │ │ ├── Read By: Plan, Execute agents
│ │ │ └── When to Use: Building FastAPI endpoints
│ │ │
│ │ ├── postgres-best-practices.md
│ │ │ ├── Purpose: Database setup, pooling, queries
│ │ │ ├── Read By: Plan, Execute agents
│ │ │ └── When to Use: Database operations
│ │ │
│ │ ├── react-frontend-best-practices.md
│ │ │ ├── Purpose: Components, hooks, state management
│ │ │ ├── Read By: Plan, Execute agents
│ │ │ └── When to Use: Building React web components
│ │ │
│ │ ├── flutter-best-practices.md
│ │ │ ├── Purpose: Widgets, Riverpod state, Clean Architecture
│ │ │ ├── Read By: Plan, Execute agents
│ │ │ └── When to Use: Building Flutter mobile/cross-platform apps
│ │ │
│ │ ├── testing-and-logging.md
│ │ │ ├── Purpose: Test patterns, structlog setup
│ │ │ ├── Read By: Plan, Execute agents
│ │ │ └── When to Use: Writing tests, logging
│ │ │
│ │ └── deployment-best-practices.md
│ │ ├── Purpose: Docker, CI/CD, production setup
│ │ ├── Read By: Plan agent
│ │ └── When to Use: Deployment tasks
│ │
│ └── Implementation Plans (.claude/plans/)
│ ├── Purpose: Step-by-step implementation guides
│ ├── Created By: Plan agent
│ ├── Read By: Execute agent
│ └── Format: Markdown with phases, tasks, validation
│
└── 🔧 MCP SERVERS (External Capabilities)
│
├── Playwright MCP
│ ├── Purpose: Browser automation, E2E testing
│ ├── Used By: Validate phase (/ui-test, /uat-test)
│ └── When to Use: Visual testing, user flows
│
├── GitLab MCP
│ ├── Purpose: Issue tracking, MR management
│ ├── Used By: Bug fixing skills
│ └── When to Use: /gitlab_bug_fix:rca, issue management
│
├── PostgreSQL MCP
│ ├── Purpose: Database queries, schema inspection
│ ├── Used By: Plan, Execute agents
│ └── When to Use: Database analysis, migrations
│
└── agent-browser
├── Purpose: AI-driven UI testing
├── Used By: /ui-test, /uat-test, /regression-test
└── When to Use: Natural language test scenarios
┌─────────────────────────────────────────┐
│ What do I need to do? │
└───────────────┬─────────────────────────┘
|
┌───────────┼───────────┬─────────────┬───────────┐
| | | | |
v v v v v
┌────────┐ ┌─────────┐ ┌────────┐ ┌─────────┐ ┌─────────┐
│Starting│ │Implement│ │ Fix a  │ │  Test   │ │ Commit  │
│Project │ │Feature  │ │  Bug   │ │ Changes │ │ Changes │
└───┬────┘ └────┬────┘ └───┬────┘ └────┬────┘ └────┬────┘
| | | | |
v v v v v
/prime /plan- /gitlab_ /validation /commit
feature bug_fix:rca :validate
| |
v v
/execute /gitlab_bug_
fix:implement
|
v
/validation
:validate
- ✅ Starting work on the project
- ✅ Returning after being away
- ✅ Context window has been cleared
- ✅ Before planning any feature
- ✅ After major git pull/merge
- ✅ Before implementing non-trivial features
- ✅ Multiple implementation approaches exist
- ✅ Architectural decisions are needed
- ✅ You need a roadmap before coding
- ❌ NOT for simple bug fixes (use RCA instead)
- ✅ After reviewing and approving plan
- ✅ Ready to write code
- ✅ Plan document exists in .claude/plans/
- ❌ NOT before creating a plan
- ✅ After implementing features
- ✅ Before creating pull/merge requests
- ✅ After major refactoring
- ✅ Before deployment
- ✅ When in doubt about code quality
- ✅ After implementing complex logic
- ✅ Before committing important changes
- ✅ Want AI second opinion on code
- ✅ Checking for security issues
- ✅ After UI changes
- ✅ Before releases
- ✅ After major refactoring
- ✅ Weekly/nightly in CI/CD
- ✅ You have a bug to investigate
- ✅ Before implementing the fix
- ✅ Root cause is unclear
- ❌ NOT for simple typo fixes
- ✅ After reviewing RCA document
- ✅ Root cause is understood
- ✅ Ready to implement the fix
- ✅ Code is in committable state
- ✅ After validation passes
- ✅ Logical chunk of work is complete
- ✅ Want AI to generate commit message
Parallel Execution (faster):
Plan Phase:
├── Explore Agent ─────┐
├── General Agent ─────┤ → Run simultaneously
└── Pattern Search ────┘
Validate Phase:
├── Unit Tests ────────┐
├── Integration Tests ─┤ → Run simultaneously
└── Linting ───────────┘
Sequential Execution (ordered):
Execute Phase:
Step 1 → Validate → Step 2 → Validate → Step 3
(Can't proceed to step 2 until step 1 validates)
Prime Phase:
Detect → Read Docs → Analyze → Report
(Each phase depends on previous)
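The sequential pattern can be sketched as a step-then-validate loop. The steps below are toy lambdas standing in for real file edits and test runs:

```python
# Sketch of the Execute phase's sequential step-then-validate loop
# (illustrative; real steps are file edits and test commands, not lambdas).
def execute_plan(steps):
    """Run each step in order; stop the moment a step fails validation."""
    completed = []
    for i, (name, action, validate) in enumerate(steps, start=1):
        action()                     # Step i: make the change
        if not validate():           # Validate before proceeding to step i+1
            raise RuntimeError(f"Step {i} ({name}) failed validation")
        completed.append(name)
    return completed

state = {}
steps = [
    ("create model", lambda: state.update(model=True), lambda: state.get("model")),
    ("add endpoint", lambda: state.update(endpoint=True), lambda: state.get("endpoint")),
]
print(execute_plan(steps))
```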
┌──────────────────────────────────────┐
│ Claude (You interact here) │
└────────────────┬─────────────────────┘
|
┌────────────┼────────────┬─────────────┐
v v v v
┌────────┐ ┌────────┐ ┌──────────┐ ┌──────┐
│ Prime │ │ Plan │ │ Execute │ │Valid-│
│(Direct)│ │ Agent │ │ Agent │ │ ate │
└────────┘ └───┬────┘ └────┬─────┘ └──┬───┘
| | |
┌───────┴────┐ v v
v v ┌──────┐ ┌────────┐
┌────────┐ ┌────────┐│ Bash │ │Multiple│
│Explore │ │General ││Agent │ │ Bash │
│ Agent │ │ Agent │└──────┘ │ Agents │
└────────┘ └────────┘ └────────┘
Key Rules:
- Prime = Direct execution (no spawning)
- Plan = Can spawn Explore + General (parallel)
- Execute = Spawns Bash for tests/git (sequential)
- Validate = Spawns multiple Bash (parallel tests)
Specialized AI assistants with specific capabilities:
| Agent Type | Purpose | When to Use |
|---|---|---|
| General-Purpose | Multi-step tasks, complex queries | Open-ended exploration, research |
| Plan | Architecture design, implementation planning | Before writing code for new features |
| Explore | Fast codebase exploration | Finding patterns, understanding structure |
| Bash | Command execution specialist | Git operations, running scripts |
Example:
# The AI will automatically spawn agents when needed
"Find all API endpoints in the codebase and explain their purpose"
# → Spawns Explore agent for fast codebase search

Pre-configured workflows you can invoke with /command:
| Category | Skills | Purpose |
|---|---|---|
| Planning | /core_piv_loop:prime, /core_piv_loop:plan-feature, /core_piv_loop:execute | Context loading, feature planning, execution |
| Validation | /validation:validate, /validation:code-review, /validation:code-review-fix | Testing, linting, code quality |
| Bug Fixing | /gitlab_bug_fix:rca, /gitlab_bug_fix:implement-fix | Root cause analysis, systematic fixes |
| Utilities | /commit, /init-project, /create-prd | Git operations, setup, documentation |
| Testing | /ui-test, /uat-test, /regression-test | UI, UAT, regression testing with agent-browser |
Structured knowledge stored in .claude/reference/:
.claude/reference/
├── backend/
│ └── fastapi-best-practices.md # API patterns, routing, schemas
├── postgres-best-practices.md # Database setup, pooling, queries
├── frontend/
│ ├── react-frontend-best-practices.md # React: Components, hooks, state
│ └── flutter-best-practices.md # Flutter: Widgets, Riverpod, Clean Arch
├── testing-and-logging.md # Test patterns, structlog
└── deployment-best-practices.md # OpenShift, production, CI/CD
Purpose: Agents read these to understand your project's conventions and patterns before implementing.
Configure Claude Code behavior in .claude/settings.json:
{
"agents": {
"explore": { "max_turns": 20 },
"plan": { "model": "opus" }
},
"skills": {
"validation:validate": { "auto_fix": true }
},
"hooks": {
"pre_commit": "npm run lint && npm run test"
}
}

External tools that extend Claude's capabilities:
| MCP Server | Purpose | Installation |
|---|---|---|
| Playwright | Browser automation, E2E testing | claude mcp add playwright npx @playwright/mcp@latest |
| GitLab | Issue tracking, MR management | claude mcp add gitlab npx @modelcontextprotocol/server-gitlab |
| PostgreSQL | Database queries, schema inspection | claude mcp add postgres npx @modelcontextprotocol/server-postgres |
| Filesystem | Advanced file operations | Built-in |
Automated UI testing tool for UAT, regression, and visual testing:
npm install -g agent-browser

# Define test scenarios in natural language
agent-browser test "
1. Navigate to localhost:5173
2. Click 'Add Item' button
3. Fill in 'Task Name' as item name
4. Click 'Save'
5. Verify item appears in list
"

The framework includes agent-browser skills for automated testing:
- /ui-test: Run UI functionality tests
- /uat-test: User acceptance testing scenarios
- /regression-test: Full regression suite across features
Goal: Load context and understand what you're building.
When to use:
- Starting a new feature
- After being away from the project
- When context window is cleared
Command: /core_piv_loop:prime
What happens:
- Reads project structure and conventions
- Loads relevant reference documentation
- Understands tech stack and patterns
- Identifies critical files and dependencies
Output: Agent confirms understanding of project context.
Goal: Create a comprehensive implementation plan before writing code.
When to use:
- Before implementing any non-trivial feature
- When multiple approaches are possible
- When architectural decisions are needed
Command: /core_piv_loop:plan-feature "Add user authentication"
What happens:
- Analyzes requirements and existing codebase
- Identifies affected files and dependencies
- Researches best practices from reference docs
- Creates step-by-step implementation plan
- Requests your approval before proceeding
Output: Detailed plan in .claude/plans/feature-name.md
Example Plan Structure:
# Feature: User Authentication
## Overview
Add JWT-based authentication with login/logout endpoints.
## Critical Files
- backend/app/auth.py (new)
- backend/app/models.py (modify)
- frontend/src/features/auth/ (new)
## Implementation Steps
### Step 1: Create User model
- Add User table with SQLAlchemy
- Include password hashing with bcrypt
...
### Step 2: Implement JWT utilities
- Create token generation/validation
- Add middleware for protected routes
...

Goal: Execute the plan systematically, step by step.
When to use:
- After plan approval
- When you're ready to write code
Command: /core_piv_loop:execute
What happens:
- Reads the approved plan
- Implements each step in order
- Validates each step before moving to the next
- Handles errors and rollbacks if needed
- Creates atomic git commits per logical chunk
Best Practices:
- Review changes as they happen
- Ask questions if implementation deviates from plan
- Test incrementally rather than at the end
Goal: Ensure code quality, correctness, and no regressions.
When to use:
- After implementing a feature
- Before creating a PR
- After fixing bugs
Command: /validation:validate
What happens:
- Linting: Runs code formatters and linters
- Type Checking: Verifies type correctness
- Unit Tests: Runs test suite with coverage
- Integration Tests: Tests API endpoints
- UI Tests: Runs agent-browser automation
- Build: Compiles frontend and backend
- Report: Generates validation summary
Example Output:
✓ Linting passed (0 errors)
✓ Type checking passed
✓ Unit tests: 45/45 passed (100% coverage)
✓ Integration tests: 12/12 passed
✓ UI tests: 8/8 scenarios passed
✓ Frontend build successful
✓ Backend build successful
All validation checks passed! ✓
# Required
- Python 3.11+
- Node.js 18+
- uv package manager: curl -LsSf https://astral.sh/uv/install.sh | sh
- Claude Code CLI: npm install -g @anthropics/claude-code
# Optional but recommended
- Docker Desktop
- PostgreSQL 16+
- agent-browser: npm install -g agent-browser

# 1. Install Claude Code
npm install -g @anthropics/claude-code
# 2. Clone your project
git clone <your-repo>
cd <your-project>
# 3. Initialize with Claude
claude init
# 4. Install agent-browser for UI testing
npm install -g agent-browser
# 5. Add MCP servers
claude mcp add playwright npx @playwright/mcp@latest
claude mcp add gitlab npx @modelcontextprotocol/server-gitlab
# 6. Prime the agent with your project
/core_piv_loop:prime

# 1. Create project structure
mkdir my-project && cd my-project
claude init
# 2. Create PRD (Product Requirements Document)
/create-prd
# Then describe your project in conversation
# 3. Initialize project structure
/init-project
# 4. Create reference documentation
mkdir -p .claude/reference
# Add your tech stack best practices docs

# 1. Clone existing project
git clone <existing-repo>
cd <existing-project>
# 2. Copy .claude/ template
cp -r /path/to/template/.claude .
# 3. Analyze existing codebase and generate documentation
/brownfield:analyze
# This will generate:
# - .claude/docs/PRD.md (what the app does)
# - .claude/docs/CLAUDE.md (AI instructions)
# - .claude/docs/ARCHITECTURE.md (system design)
# - .claude/docs/API.md (endpoints)
# - .claude/docs/FEATURES.md (feature list)
# 4. Review and refine generated documentation
# AI provides ~90-95% accuracy, verify and add business context
# 5. Prime Claude with the analyzed project
/core_piv_loop:prime
# Claude now fully understands your existing codebase
# 6. Continue with normal workflow
/core_piv_loop:plan-feature "Add new feature"

What Gets Analyzed:
- ✅ All source code (backend, frontend, infrastructure)
- ✅ API endpoints and routes
- ✅ Database models and relationships
- ✅ Component hierarchy
- ✅ Configuration files
- ✅ CI/CD pipelines
- ✅ Dependencies and tech stack
- ✅ Security implementations
- ✅ Git history (recent commits)
Generated Documentation Includes:
- Complete feature inventory
- API specification
- Database schema
- Architecture diagrams (textual)
- Coding conventions detected
- Development setup instructions
- Testing strategy
- Deployment configuration
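To make the analysis concrete, here is a toy version of one brownfield pass: scanning source text for FastAPI-style route decorators to build an endpoint inventory. The real /brownfield:analyze skill does far more; this only shows the idea:

```python
# Toy brownfield-analysis pass: find FastAPI-style route decorators in source
# text and list the endpoints they declare. (Illustration only.)
import re

source = '''
@app.get("/items")
def list_items(): ...

@app.post("/items")
def create_item(): ...
'''

ROUTE = re.compile(r'@app\.(get|post|put|delete)\("([^"]+)"\)')
endpoints = [(m.group(1).upper(), m.group(2)) for m in ROUTE.finditer(source)]
print(endpoints)  # → [('GET', '/items'), ('POST', '/items')]
```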
# 1. Prime (if needed)
/core_piv_loop:prime
# 2. Plan the feature
/core_piv_loop:plan-feature "Feature description"
# Review the plan in .claude/plans/
# 3. Approve and execute
/core_piv_loop:execute
# 4. Validate
/validation:validate
# 5. UI Testing
/ui-test
# 6. Commit
/commit

# 1. Create GitLab issue (if not exists)
# 2. Run root cause analysis
/gitlab_bug_fix:rca <issue-number>
# 3. Review RCA document
# 4. Implement fix
/gitlab_bug_fix:implement-fix <issue-number>
# 5. Validate
/validation:validate
# 6. Regression test
/regression-test
# 7. Commit and close issue
/commit

# 1. Prime with current context
/core_piv_loop:prime
# 2. Plan refactoring
/core_piv_loop:plan-feature "Refactor auth module to use dependency injection"
# 3. Execute with caution
/core_piv_loop:execute
# 4. Comprehensive validation
/validation:validate
/regression-test
# 5. Code review
/validation:code-review
# 6. Fix any issues
/validation:code-review-fix
# 7. Commit
/commit

Purpose: Load project context into agent's working memory.
When to use:
- Starting work on the project
- After context reset
- Before planning a feature
What it does:
- Reads .claude/docs/PRD.md
- Loads reference documentation
- Analyzes project structure
- Understands tech stack
Example:
You: /core_piv_loop:prime
Agent: Loading project context...
✓ Read PRD (598 lines)
✓ Loaded 5 reference documents
✓ Analyzed project structure
Ready to work on: Template Project
Purpose: Create comprehensive implementation plan.
When to use:
- Before any non-trivial implementation
- When multiple approaches exist
- When you need architectural guidance
Parameters: Feature description (string)
What it does:
- Analyzes requirements
- Explores codebase for relevant patterns
- Identifies affected files
- Creates step-by-step plan
- Requests approval
Example:
You: /core_piv_loop:plan-feature "Add email notifications for user milestones"
Agent: Planning feature...
- Analyzed 15 related files
- Reviewed email best practices
- Created 8-step implementation plan
Plan saved: .claude/plans/email-notifications.md
Ready for your review and approval.
Purpose: Execute approved implementation plan.
When to use:
- After approving a plan
- Ready to implement
What it does:
- Reads plan from .claude/plans/
- Implements step by step
- Validates each step
- Creates git commits
- Handles errors gracefully
Example:
You: /core_piv_loop:execute
Agent: Executing plan: email-notifications.md
Step 1/8: Create EmailService class... ✓
Step 2/8: Add notification preferences to User model... ✓
Step 3/8: Implement streak check logic... ✓
...
All steps completed successfully!
Purpose: Run full validation suite.
When to use:
- After implementing features
- Before creating PRs
- Before deployment
What it does:
1. Linting (ruff, eslint)
2. Type checking (mypy, TypeScript)
3. Unit tests (pytest, vitest)
4. Integration tests
5. Coverage report
6. Frontend build
7. Backend build
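Conceptually, the suite is just "every check must pass"; a minimal sketch of how such a suite can be aggregated (the check names and callables are illustrative placeholders, not the framework's real commands):

```python
def run_suite(checks):
    """Run (name, callable) pairs; a check passes when its callable returns True."""
    results = {name: bool(check()) for name, check in checks}
    passed = all(results.values())
    return passed, results

# Hypothetical checks standing in for linting, type checking, and tests.
ok, report = run_suite([
    ("linting", lambda: True),
    ("type checking", lambda: True),
    ("unit tests", lambda: True),
])
```

A single failing check fails the whole suite, which is what makes it a useful pre-commit and pre-deploy gate.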
Example:
You: /validation:validate
Agent: Running comprehensive validation...
✓ Linting: 0 errors
✓ Type checking: passed
✓ Unit tests: 52/52 passed (95% coverage)
✓ Integration tests: 15/15 passed
✓ Frontend build: 2.3s
✓ Backend build: 1.1s
All checks passed! ✓
Purpose: Technical code review on changed files.
When to use:
- Before committing
- After implementing complex logic
- When you want a second opinion
What it does:
- Analyzes changed files
- Checks for bugs, security issues
- Reviews code style and patterns
- Suggests improvements
Example:
You: /validation:code-review
Agent: Reviewing 3 changed files...
app/routers/items.py:
✓ No issues found
app/services/email.py:
⚠ Warning: Missing error handling for SMTP failures (line 45)
⚠ Warning: Hardcoded email template (line 78)
tests/test_email.py:
✓ No issues found
2 warnings found.
Purpose: Fix issues found in code review.
When to use:
- After /validation:code-review finds issues
- When you want automated fixes
What it does:
- Reads code review output
- Implements fixes for each issue
- Re-runs validation
- Confirms fixes resolved issues
Purpose: Create root cause analysis document.
Parameters: GitLab issue number
When to use:
- When you have a bug to fix
- Before implementing the fix
What it does:
- Reads GitLab issue
- Explores codebase to find root cause
- Analyzes related code
- Creates RCA document with:
- Problem description
- Root cause analysis
- Proposed solution
- Testing strategy
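The resulting document typically follows a fixed template; an illustrative skeleton (headings mirror the four sections above):

```markdown
# RCA: Issue #<number> — <issue title>

## Problem Description
What the user observed, with reproduction steps.

## Root Cause Analysis
The defective code path, with file:line references.

## Proposed Solution
The change that addresses the root cause (not just the symptom).

## Testing Strategy
Unit/integration tests that prove the fix and guard against regression.
```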
Example:
You: /gitlab_bug_fix:rca 42
Agent: Analyzing issue #42: "Calculation error in date range processing"
Exploring codebase...
Found calculation logic in: app/routers/items.py:850
Analyzing logic...
Root cause: calculate_range() doesn't account for excluded dates
in the date range between entries.
RCA document created: .claude/rca/issue-42.md
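The fix implied by this root cause is to skip excluded dates when counting the range; a sketch (the function name and signature are illustrative, not the project's actual code):

```python
from datetime import date, timedelta

def calculate_range(start, end, excluded=frozenset()):
    """Count days in [start, end], skipping any dates in `excluded`."""
    days = (end - start).days + 1
    return sum(1 for i in range(days)
               if start + timedelta(days=i) not in excluded)
```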
Purpose: Implement fix based on RCA document.
Parameters: GitLab issue number
When to use:
- After reviewing RCA document
- Ready to fix the bug
What it does:
- Reads RCA document
- Implements proposed solution
- Adds/updates tests
- Validates fix
- Updates issue with fix details
Purpose: Run UI functionality tests with agent-browser.
When to use:
- After UI changes
- Before releasing features
- For regression testing
What it does:
- Starts local server
- Runs agent-browser test scenarios
- Captures screenshots
- Reports results
Example:
You: /ui-test
Agent: Running UI tests with agent-browser...
✓ Login flow (2.3s)
✓ Create item (1.8s)
✓ Update item (1.2s)
✓ View dashboard (2.1s)
✗ Delete item (failed: button not found)
4/5 tests passed
Screenshots saved: tests/screenshots/
Purpose: User acceptance testing scenarios.
When to use:
- Before major releases
- After implementing user-facing features
- For stakeholder demos
What it does:
- Reads UAT scenarios from tests/uat/scenarios.yml
- Executes with agent-browser
- Generates report with screenshots
- Records video walkthroughs
Example scenarios file:
# tests/uat/scenarios.yml
scenarios:
- name: "New user onboarding"
steps:
- "Navigate to localhost:5173"
- "Verify welcome message displays"
- "Click 'Get Started'"
- "Create first item 'Sample Task'"
- "Verify item appears in dashboard"
- name: "Complete workflow over 7 days"
steps:
- "Login as test user"
- "For each day 1-7: mark 'Sample Task' complete"
- "Verify counter shows '7 days'"
- "Verify completion rate is 100%"

Purpose: Full regression suite across all features.
When to use:
- Before releases
- After major refactoring
- Weekly/nightly runs
What it does:
- Runs all unit tests
- Runs all integration tests
- Runs all UI tests with agent-browser
- Compares results with baseline
- Highlights any regressions
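The baseline comparison boils down to "which tests passed before but fail now"; a minimal sketch (test names are examples):

```python
def find_regressions(baseline, current):
    """Return names of tests that passed in the baseline run but fail now."""
    return sorted(name for name, passed in current.items()
                  if not passed and baseline.get(name, False))
```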
Example:
You: /regression-test
Agent: Running full regression suite...
Unit Tests:
✓ 52/52 passed
Integration Tests:
✓ 15/15 passed
UI Tests (agent-browser):
✓ 18/20 passed
✗ Calendar view: Date picker regression
✗ Export feature: CSV download failed
Regression detected: 2 previously passing tests now fail
Details: tests/regression-report.md
Purpose: Create atomic git commit with appropriate tag.
When to use:
- After implementing a feature
- After fixing a bug
- When code is in a committable state
What it does:
- Runs git status and git diff
- Analyzes changes
- Generates commit message with conventional commit format
- Adds co-author attribution
- Creates commit
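The conventional commit format the skill targets can be validated mechanically; an illustrative checker (the accepted types are the common convention, not an exhaustive list):

```python
import re

# Conventional-commit header: type(scope)?: description
HEADER = re.compile(r"^(feat|fix|docs|style|refactor|perf|test|chore)(\([\w-]+\))?(!)?: .+")

def is_conventional(message):
    """Check only the first line (the header) of a commit message."""
    return bool(HEADER.match(message.splitlines()[0]))
```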
Example:
You: /commit
Agent: Analyzing changes...
Files changed: 3 (app/routers/items.py, tests/test_items.py, README.md)
Proposed commit:
────────────────
feat(items): add archive functionality
- Add PATCH /items/{id}/archive endpoint
- Add archived_at timestamp field
- Filter archived items by default in list endpoint
- Add tests for archive functionality
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
────────────────
Proceed with commit? [Y/n]
Purpose: Install dependencies and start servers.
When to use:
- First time setup
- After pulling changes
- After adding dependencies
What it does:
# Backend
cd backend && uv sync
# Frontend
cd frontend && npm install
# Start servers (in separate processes)
uvicorn app.main:app --reload --port 8000 &
npm run dev &

Purpose: Generate Product Requirements Document from conversation.
When to use:
- Starting a new project
- Planning major features
- Documenting requirements
What it does:
- Analyzes conversation for requirements
- Extracts features, user stories, acceptance criteria
- Creates structured PRD in .claude/docs/PRD.md
- Includes API specs, database schema, tech stack
The framework includes comprehensive security scanning and compliance checking integrated into your development workflow.
Purpose: Static Application Security Testing - scan source code for vulnerabilities.
What it scans:
- SQL Injection vulnerabilities
- XSS (Cross-Site Scripting)
- Command Injection
- Hardcoded secrets (complementary to secrets-scan)
- Insecure cryptography
- Authentication/authorization flaws
Tools used:
- Semgrep (primary) - Fast, accurate, language-agnostic
- Bandit (Python-specific)
- ESLint with security plugins (JavaScript/TypeScript)
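These scanners can also be extended with project-specific rules; an illustrative custom Semgrep rule (the file name and pattern are examples, not part of the framework):

```yaml
# .semgrep/no-string-formatted-sql.yml
rules:
  - id: no-string-formatted-sql
    languages: [python]
    severity: ERROR
    message: Use parameterized queries instead of string formatting
    pattern: cursor.execute("..." % ...)
```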
Example:
/devsecops:sast
# Output:
✓ Scanned 342 files
⚠️ Found 3 issues:
- HIGH: SQL injection risk in backend/app/queries.py:42
- MEDIUM: Missing input validation in frontend/src/api/items.ts:18
- LOW: Weak hash algorithm in backend/app/crypto.py:15
Report: .claude/reports/security/sast-report.json

Purpose: Scan dependencies for known vulnerabilities (CVEs).
What it scans:
- Python packages (requirements.txt, pyproject.toml)
- Node.js packages (package.json, package-lock.json)
- Known CVEs in dependencies
- Outdated packages with security fixes
Tools used:
- Safety (Python)
- pip-audit (Python)
- npm audit (Node.js)
- Snyk (optional, comprehensive)
Example:
/devsecops:dependency-scan
# Output:
✓ Scanned 156 packages
⚠️ Found 4 vulnerable dependencies:
- CRITICAL: requests@2.28.0 → CVE-2023-32681 (upgrade to 2.31.0)
- HIGH: axios@1.4.0 → CVE-2023-45857 (upgrade to 1.6.0)
- MEDIUM: pillow@9.5.0 → CVE-2023-50447 (upgrade to 10.2.0)
Auto-fix available: npm audit fix
Report: .claude/reports/security/dependency-scan.json

Purpose: Detect hardcoded secrets, API keys, passwords, and tokens.
What it detects:
- AWS access keys
- Database passwords
- API keys (Stripe, GitHub, GitLab, etc.)
- Private SSH keys
- JWT tokens
- OAuth tokens
- Encryption keys
Tools used:
- Gitleaks (primary, recommended)
- TruffleHog (deep git history scanning)
- GitGuardian (commercial option)
Example:
/devsecops:secrets-scan
# Output:
✓ Scanned 342 files + git history (1,523 commits)
❌ Found 2 secrets:
- CRITICAL: AWS Access Key in backend/config.py:12
- HIGH: Database password in .env.example:8 (should use placeholder)
⚠️ Action Required:
1. Rotate compromised secrets immediately
2. Remove from git history (use BFG Repo-Cleaner)
3. Add to .gitignore
Report: .claude/reports/security/secrets-scan.json

Purpose: Scan container images for vulnerabilities and misconfigurations.
What it scans:
- OS package vulnerabilities
- Application dependencies in images
- Dockerfile misconfigurations
- CIS Docker Benchmark compliance
- Image layers and size optimization
Tools used:
- Trivy (recommended, comprehensive)
- Grype (alternative)
- Docker Bench (CIS benchmarks)
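Findings such as "running as root" or "no health check" are remediated in the Dockerfile itself; an illustrative hardened fragment (the base image and health endpoint are examples):

```dockerfile
FROM python:3.11-slim

# Run as an unprivileged user instead of root
RUN useradd --create-home appuser
USER appuser

# Give the orchestrator a liveness signal
HEALTHCHECK --interval=30s --timeout=3s \
  CMD python -c "import urllib.request; urllib.request.urlopen('http://localhost:8000/health')" || exit 1
```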
Example:
/devsecops:container-scan
# Output:
✓ Scanned image: myapp-backend:latest
⚠️ Found 8 vulnerabilities:
- CRITICAL: openssl@3.0.10 → CVE-2023-12345 (upgrade to 3.0.11)
- HIGH: Running as root user (use non-root USER directive)
- MEDIUM: No health check defined
✓ Scanned image: myapp-frontend:latest
✓ No vulnerabilities found
Report: .claude/reports/security/container-scan.json

Purpose: Validate against security compliance standards.
Standards supported:
- OWASP Top 10 - Web application security
- CIS Docker Benchmark - Container security
- CIS Kubernetes Benchmark - OpenShift/K8s security
- PCI-DSS - Payment card industry (if applicable)
- GDPR - Data privacy (checklist)
- SOC 2 - Trust service criteria
Tools used:
- OWASP ZAP (web application scanning)
- Checkov (infrastructure as code)
- InSpec (custom compliance policies)
- kube-bench (Kubernetes CIS)
Example:
/devsecops:compliance-check
# Output:
✓ OWASP Top 10: 10/10 passed ✓
✓ CIS Docker: 11/12 passed (92%) ⚠️
- WARNING: Content trust not enabled
✓ CIS Kubernetes: 8/8 passed ✓
✓ PCI-DSS: 12/12 passed ✓
✓ GDPR: 8/8 passed ✓
Overall Compliance Score: 95%
Report: .claude/reports/compliance/compliance-report.md

All DevSecOps scans integrate with GitLab CI:
# .gitlab-ci.yml
security:
stage: security
script:
- /devsecops:sast
- /devsecops:dependency-scan
- /devsecops:secrets-scan
- /devsecops:container-scan
- /devsecops:compliance-check
artifacts:
reports:
sast: gl-sast-report.sarif
dependency_scanning: gl-dependency-scan.json
secret_detection: gl-secrets-report.sarif
container_scanning: gl-container-scan.sarif
allow_failure: false # Block merge if security issues found

# Automatically runs:
/devsecops:secrets-scan # Prevent committing secrets

# Manual or CI-triggered:
/devsecops:sast
/devsecops:dependency-scan
/devsecops:secrets-scan

# Full security validation:
/devsecops:container-scan # Scan images
/devsecops:compliance-check # Validate standards

# Comprehensive review:
/devsecops:sast
/devsecops:dependency-scan
/devsecops:secrets-scan
/devsecops:container-scan
/devsecops:compliance-check
# Generate executive summary

The AI automatically spawns agents when appropriate, but you can manually request them:
"Use the Explore agent to find all database queries in the codebase"
"Spawn a Plan agent to design the authentication system"
"Run a Bash agent to set up git hooks"
Capabilities: Full toolkit access, multi-step reasoning, complex tasks
Use for:
- Open-ended questions
- Multi-file changes
- Research and exploration
- Complex refactoring
Example:
"Refactor the authentication module to use dependency injection"
→ Spawns general-purpose agent with full access to Read, Edit, Write, Bash
Capabilities: All tools except Edit/Write (read-only), deep analysis
Use for:
- Feature planning
- Architecture decisions
- Research before implementation
Example:
/core_piv_loop:plan-feature "Add WebSocket support for real-time updates"
→ Spawns Plan agent to explore codebase and create implementation plan
Capabilities: Fast codebase exploration (Glob, Grep, Read)
Use for:
- Finding patterns
- Understanding code structure
- Quick searches
Thoroughness levels:
- quick: Basic search
- medium: Moderate exploration
- very thorough: Comprehensive analysis
Example:
"Use Explore agent to find all API endpoints and their authentication requirements"
→ Spawns Explore agent for fast pattern matching
Capabilities: Command execution specialist
Use for:
- Git operations
- Running scripts
- Package management
- Build processes
Example:
"Use Bash agent to create a pre-commit hook that runs linting"
→ Spawns Bash agent for git hook setup
Sequential Agents (one after another):
1. Explore agent finds relevant files
2. Plan agent designs solution
3. General agent implements
4. Bash agent runs tests
Parallel Agents (multiple at once):
"Run tests on backend AND frontend in parallel"
→ Spawns 2 Bash agents concurrently
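Conceptually this is ordinary concurrent execution; a minimal Python sketch of the pattern (the commands are placeholders, not the agents' real invocations):

```python
import subprocess
from concurrent.futures import ThreadPoolExecutor

def run(cmd):
    """Run a shell command and report whether it succeeded."""
    return subprocess.run(cmd, shell=True, capture_output=True).returncode == 0

# Two independent "test suites" executed concurrently.
with ThreadPoolExecutor(max_workers=2) as pool:
    backend_ok, frontend_ok = pool.map(run, ["echo backend tests", "echo frontend tests"])
```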
The framework includes comprehensive configuration in .claude/config/settings.json (604 lines) covering all aspects of development, security, and deployment.
{
"permissions": { /* Tool access control */ },
"hooks": { /* Automation triggers */ },
"mcpServers": { /* GitLab, Playwright, PostgreSQL */ },
"env": { /* Environment variables */ },
"piv": { /* PIV workflow settings */ },
"validation": { /* Linting, testing, build */ },
"code_review": { /* Quality gates */ },
"gitlab": { /* MR/issue templates, CI/CD */ },
"openshift": { /* Cluster configs, resources */ },
"testing_framework": { /* agent-browser, thresholds */ },
"devsecops": { /* Security scanning */ },
"brownfield_analysis": { /* Codebase analysis */ },
"reporting": { /* Report generation */ },
"notifications": { /* Slack, email alerts */ },
"custom_commands": { /* Shortcuts */ },
"advanced": { /* Performance, caching */ }
}

Control which tools and commands Claude can use:
{
"permissions": {
"allow": [
"WebSearch",
"Read", "Write", "Edit", "Glob", "Grep",
"Bash(git *)", "Bash(npm *)", "Bash(oc *)"
],
"deny": [
"Bash(rm -rf /)",
"Bash(git push --force*)"
]
}
}

Automate actions before/after tool usage:
{
"hooks": {
"post_write": "echo 'File modified: {file_path}'",
"post_commit": "echo 'Committed: {commit_message}'"
}
}

Customize the Prime → Implement → Validate workflow:
{
"piv": {
"prime": {
"auto_read_docs": true,
"docs_to_read": [".claude/docs/PRD.md", "README.md"]
},
"plan": {
"output_dir": ".claude/plans/active",
"require_user_approval": true
},
"execute": {
"auto_validate_after": true,
"move_to_completed": true
},
"validate": {
"run_linting": true,
"run_tests": true,
"generate_report": true
}
}
}

Configure linting, testing, and build validation:
{
"validation": {
"linting": {
"backend": {
"command": "cd backend && uv run ruff check ."
},
"frontend": {
"command": "cd frontend && npm run lint"
}
},
"testing": {
"backend": {
"unit": {
"command": "cd backend && uv run pytest tests/unit -v",
"coverage_threshold": 80
}
}
}
}
}

Multi-environment deployment configuration:
{
"openshift": {
"clusters": {
"dev": {
"url": "https://api.dev-cluster.openshift.com:6443",
"project": "myapp-dev"
},
"prod": {
"url": "https://api.prod-cluster.openshift.com:6443",
"project": "myapp-prod",
"require_approval": true
}
},
"resources": {
"backend": {
"requests": { "memory": "256Mi", "cpu": "100m" },
"limits": { "memory": "512Mi", "cpu": "500m" }
}
}
}
}

Security scanning configuration:
{
"devsecops": {
"scanning": {
"sast": {
"enabled": true,
"tool": "semgrep",
"fail_on": ["high", "critical"]
},
"dependency_scan": {
"enabled": true,
"fail_on": ["high", "critical"]
},
"secrets_scan": {
"enabled": true,
"fail_on_detection": true
}
}
}
}

Configure codebase analysis for existing projects:
{
"brownfield_analysis": {
"enabled": true,
"scan_paths": ["backend/", "frontend/", "openshift/"],
"exclude_paths": ["node_modules/", "__pycache__/"],
"generate_docs": ["PRD", "CLAUDE", "ARCHITECTURE", "API"],
"use_ai_inference": true,
"analyze_git_history": true
}
}

Define shortcuts for common operations:
{
"custom_commands": {
"start_backend": "cd backend && uv run uvicorn app.main:app --reload",
"start_frontend": "cd frontend && npm run dev",
"deploy_dev": "oc apply -k openshift/overlays/dev"
}
}

- Copy template settings:
cp .claude/config/settings.json .claude/config/settings.local.json

- Update environment variables:
{
"env": {
"PROJECT_ROOT": ".",
"GITLAB_TOKEN": "${GITLAB_PERSONAL_ACCESS_TOKEN}",
"DATABASE_URL": "postgresql://localhost:5432/myapp"
}
}

- Configure your tech stack:
{
"validation": {
"linting": {
"backend": {
"command": "cd backend && poetry run pylint src/"
}
}
}
}

- Set security thresholds:
{
"devsecops": {
"scanning": {
"dependency_scan": {
"fail_on": ["critical"] // Only block on critical
}
}
}
}

- Use environment variables for secrets:
{
"env": {
"GITLAB_TOKEN": "${GITLAB_PERSONAL_ACCESS_TOKEN}"
}
}-
Configure per environment:
settings.json- Template/defaultssettings.local.json- Local overrides (gitignored)settings.prod.json- Production config
- Set appropriate thresholds:
- Coverage: 80% for unit, 70% for integration
- Security: Fail on critical/high only
- Performance: Adjust timeouts for CI/local
- Enable only what you need:
{
"devsecops": {
"enabled": true
},
"notifications": {
"enabled": false // Disable if not using
}
}

.claude/
├── PRD.md # Product requirements
├── settings.json # Claude configuration
└── reference/ # Best practices docs
├── backend/
│ └── fastapi-best-practices.md # Backend patterns
├── postgres-best-practices.md # Database
├── frontend/
│ ├── react-frontend-best-practices.md # React + TypeScript
│ └── flutter-best-practices.md # Flutter + Dart
├── security/
│ └── cyberark-iam-guide.md # CyberArk IAM for Givaudan
├── testing-and-logging.md # Testing
└── deployment-best-practices.md # Deployment
Purpose: Teach the AI your project's patterns and conventions.
When to create:
- Starting a new project
- Onboarding to an existing codebase
- After establishing patterns
What to include:
- Technology Overview

  # FastAPI Best Practices

  ## When to Use
  - High-performance async APIs
  - Type-safe Python
  - Auto-generated OpenAPI docs
- Code Patterns

  ## Router Pattern
  ```python
  from fastapi import APIRouter, Depends

  router = APIRouter(prefix="/items", tags=["items"])

  @router.get("/", response_model=ItemListResponse)
  def list_items(db: Session = Depends(get_db)):
      ...
  ```
- Common Gotchas

  ## Gotchas
  - Always use `Depends(get_db)` for database sessions
  - Enable foreign keys in SQLite: `PRAGMA foreign_keys=ON`
  - Use `Mapped[type]` for SQLAlchemy 2.0 models
- Examples

  ## Examples
  ### Creating an endpoint
  ...
  ### Adding validation
  ...
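One gotcha above — SQLite leaves foreign-key enforcement off by default — can be demonstrated with nothing but the standard library:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Off by default: the pragma must be enabled per connection.
assert conn.execute("PRAGMA foreign_keys").fetchone()[0] == 0
conn.execute("PRAGMA foreign_keys=ON")
```

Documenting this in a reference doc keeps the AI from silently creating connections without enforcement.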
Model Context Protocol servers extend Claude's capabilities by providing access to external tools and data sources.
Purpose: Browser automation and E2E testing
Installation:
claude mcp add playwright npx @playwright/mcp@latest

Usage:
"Use Playwright to test the login flow"
→ AI uses Playwright MCP to control browser
Capabilities:
- Navigate pages
- Click elements
- Fill forms
- Take screenshots
- Run assertions
Purpose: Issue tracking, MR (Merge Request) management, repository operations
Installation:
claude mcp add gitlab npx @modelcontextprotocol/server-gitlab

Setup:
export GITLAB_TOKEN=glpat-your_token_here
export GITLAB_URL=https://gitlab.com # or your self-hosted GitLab instance

Usage:
"Create a GitLab issue for the authentication bug"
"List all open merge requests"
"Close issue #42 with a comment"
"Assign MR !15 to @username"
Purpose: Database queries, schema inspection, migrations
Installation:
claude mcp add postgres npx @modelcontextprotocol/server-postgres

Setup:
export DATABASE_URL=postgresql://user:password@localhost:5432/mydb

Usage:
"Show me the schema for the items table"
"Run a query to find items with counts > 30"
"Generate a migration to add email column to users"
You can create custom MCP servers for your specific needs:
// custom-mcp-server.ts
import { MCPServer } from '@modelcontextprotocol/sdk';
const server = new MCPServer({
name: 'custom-tools',
version: '1.0.0',
tools: [
{
name: 'deploy_to_production',
description: 'Deploy application to production',
inputSchema: { /* ... */ },
handler: async (params) => {
// Deployment logic
}
}
]
});

        /\
       /  \        ← 10% E2E Tests (agent-browser)
      /────\
     /      \      ← 20% Integration Tests
    /────────\
   /          \    ← 70% Unit Tests
  /────────────\
Purpose: Test individual functions and components
Tools: pytest (Python), vitest (JavaScript)
Example:
# tests/unit/test_streak.py
def test_calculate_streak_consecutive_days():
completions = [
Completion(completed_date="2025-01-15", status="completed"),
Completion(completed_date="2025-01-14", status="completed"),
Completion(completed_date="2025-01-13", status="completed"),
]
assert calculate_streak(completions, date(2025, 1, 15)) == 3

When to use:
- Testing business logic
- Testing utilities and helpers
- TDD (test-driven development)
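The unit test above presumes a `calculate_streak` helper; one plausible implementation (a sketch under assumed types — the project's actual code may differ):

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class Completion:
    completed_date: str  # ISO date string, e.g. "2025-01-15"
    status: str

def calculate_streak(completions, today):
    """Count consecutive completed days ending at `today`."""
    done = {date.fromisoformat(c.completed_date)
            for c in completions if c.status == "completed"}
    streak, day = 0, today
    while day in done:
        streak += 1
        day -= timedelta(days=1)
    return streak
```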
Purpose: Test API endpoints with real database
Tools: pytest + TestClient, vitest + fetch
Example:
# tests/integration/test_api_items.py
def test_create_item_returns_201(client):
response = client.post("/api/items", json={"name": "Sample Task"})
assert response.status_code == 201
assert response.json()["name"] == "Sample Task"

When to use:
- Testing API contracts
- Testing database operations
- Testing error handling
Purpose: Test full user journeys
Tools: agent-browser
Installation:
npm install -g agent-browser

Example:
// tests/e2e/item-flow.test.js
describe('Item Creation Flow', () => {
test('User can create and complete an item', async () => {
await agentBrowser.test(`
1. Navigate to localhost:5173
2. Click "Add Item" button
3. Fill in "Sample Task" as item name
4. Fill in "Complete sample workflow" as description
5. Select green color
6. Click "Save"
7. Verify "Sample Task" appears in item list
8. Click "Complete" button next to "Sample Task"
9. Verify item shows "Completed today"
10. Verify counter shows "1"
`);
});
});

When to use:
- Testing critical user journeys
- Visual regression testing
- UAT automation
- Cross-browser testing
# Unit tests
cd backend && uv run pytest tests/unit/ -v
# Integration tests
cd backend && uv run pytest tests/integration/ -v
# E2E tests with agent-browser
agent-browser test tests/e2e/
# All tests via validation skill
/validation:validate
# Regression testing
/regression-test

❌ Don't:
You: "Implement user authentication"
✅ Do:
You: /core_piv_loop:prime
Agent: Context loaded ✓
You: /core_piv_loop:plan-feature "Implement user authentication"
Why: Agent needs context to create accurate plans.
❌ Don't:
Agent: Plan created. Executing now...
✅ Do:
Agent: Plan created: .claude/plans/auth.md
Please review and approve.
You: [Reviews plan]
You: Looks good, proceed
You: /core_piv_loop:execute
Why: Plans might make assumptions that need correction.
❌ Don't:
Agent: Implementation complete!
You: /commit
✅ Do:
Agent: Implementation complete!
You: /validation:validate
Agent: All tests passed ✓
You: /commit
Why: Catch issues early before they compound.
❌ Don't:
# Inconsistent patterns across codebase
# Some endpoints use Depends(get_db), others use SessionLocal()

✅ Do:
# Document in .claude/reference/backend/fastapi-best-practices.md
## Database Sessions
Always use dependency injection:
@router.get("/")
def endpoint(db: Session = Depends(get_db)):
    ...

Why: Agents will follow documented patterns consistently.

5. Leverage agent-browser for UI Testing

❌ Don't:
# Manual testing after every UI change
# OR complex Selenium scripts
✅ Do:
# Natural language test scenarios
/ui-test
# Or specific tests
agent-browser test "
Test login with invalid credentials shows error message
"

Why: Faster, more maintainable, easier to write.
❌ Don't:
You: "Fix the bug in issue #42"
Agent: [Randomly modifies code hoping to fix it]
✅ Do:
You: /gitlab_bug_fix:rca 42
Agent: RCA document created ✓
You: [Reviews root cause]
You: /gitlab_bug_fix:implement-fix 42
Why: Systematic analysis prevents band-aid fixes.
❌ Don't:
[Implements 5 features]
You: /commit
Agent: "feat: add multiple features"
✅ Do:
[Implements feature 1]
You: /commit
Agent: "feat(auth): add login endpoint"
[Implements feature 2]
You: /commit
Agent: "feat(auth): add logout endpoint"
Why: Easier to review, revert, and understand history.
❌ Don't:
"Explore the entire codebase to find where we use red color"
→ Spawns very thorough Explore agent (slow)
✅ Do:
"Use quick Explore agent to find CSS files with red color"
→ Spawns quick Explore agent (fast)
Why: Balance speed and thoroughness based on need.
A demonstration of the AI Development Framework in action.
Tech Stack (configurable):
- Backend: Python 3.11, FastAPI, SQLAlchemy, PostgreSQL
- Frontend:
- Web: React 18, Vite, TanStack Query, Tailwind CSS
- Mobile/Cross-platform: Flutter 3.x, Dart, Riverpod
- Testing: pytest, vitest/flutter_test, agent-browser
# 1. Clone
git clone https://github.com/coleam00/template-project
cd template-project
# 2. Install agent-browser
npm install -g agent-browser
# 3. Prime Claude
/core_piv_loop:prime
# 4. Start servers
/init-project
# 5. Run validation
/validation:validate
# 6. Run UI tests
/ui-test

template-project/
├── CLAUDE.md # 📘 Complete framework guide (start here!)
├── README.md # Project documentation
├── .claude/
│ ├── config/ # Configuration
│ │ └── settings.json # Claude settings
│ ├── docs/ # Core documentation
│ │ ├── PRD.md # Product requirements
│ │ ├── CLAUDE_BEST_PRACTICES_2026.md # Current best practices (Jan 2026)
│ │ ├── CLAUDE_COMMANDS.md # Complete skills reference & guide
│ │ └── ARCHITECTURE.md # System architecture (optional)
│ ├── reference/ # Best practices (organized by domain)
│ │ ├── backend/ # Backend patterns
│ │ │ └── postgres-best-practices.md
│ │ ├── frontend/ # Frontend patterns
│ │ │ ├── react-best-practices.md
│ │ │ ├── flutter-best-practices.md
│ │ │ └── vuejs-best-practices.md
│ │ ├── devops/ # DevOps patterns
│ │ │ └── deployment-best-practices.md
│ │ ├── gitlab/ # GitLab best practices
│ │ │ └── gitlab-best-practices.md
│ │ └── testing/ # Testing patterns
│ │ └── testing-and-logging.md
│ ├── skills/ # Command definitions (numbered by priority)
│ │ ├── 01-piv-loop/ # Core workflow (highest priority)
│ │ │ ├── prime.md
│ │ │ ├── plan-feature.md
│ │ │ └── execute.md
│ │ ├── 02-validation/ # Quality checks
│ │ │ ├── validate.md
│ │ │ ├── code-review.md
│ │ │ ├── code-review-fix.md
│ │ │ ├── execution-report.md
│ │ │ └── system-review.md
│ │ ├── 03-bug-fixing/ # Problem solving
│ │ │ ├── rca.md
│ │ │ └── implement-fix.md
│ │ ├── 04-testing/ # Testing skills (agent-browser)
│ │ │ ├── ui-test.md
│ │ │ ├── uat-test.md
│ │ │ └── regression-test.md
│ │ ├── 05-utilities/ # Helper commands
│ │ │ ├── commit.md
│ │ │ ├── init-project.md
│ │ │ ├── create-prd.md
│ │ │ ├── analyze-brownfield.md
│ │ │ └── ui-ux-review.md
│ │ └── 06-devsecops/ # Security scanning
│ │ ├── sast.md
│ │ ├── dependency-scan.md
│ │ ├── secrets-scan.md
│ │ ├── container-scan.md
│ │ └── compliance-check.md
│ ├── plans/ # Implementation plans
│ │ ├── active/ # Currently being worked on
│ │ ├── completed/ # Finished implementations
│ │ └── templates/ # Plan templates
│ ├── rca/ # Root cause analysis
│ │ ├── active/ # Open issues
│ │ └── resolved/ # Fixed issues
│ ├── reports/ # Generated reports
│ │ ├── validation/ # Validation reports
│ │ ├── code-review/ # Code review reports
│ │ └── execution/ # Execution reports
│ └── scratch/ # Temporary notes and working files
├── backend/
│ ├── app/
│ │ ├── main.py # FastAPI entry
│ │ ├── database.py # PostgreSQL pool
│ │ ├── models.py # SQLAlchemy models
│ │ ├── schemas.py # Pydantic schemas
│ │ └── routers/ # API endpoints
│ └── tests/ # pytest tests
├── frontend/
│ ├── src/
│ │ ├── features/ # Feature modules
│ │ ├── components/ # React components
│ │ └── lib/ # Utilities
│ └── tests/ # vitest tests
├── tests/
│ └── e2e/ # agent-browser tests
└── README.md # This file
Scenario: Add email notifications for user milestones
# 1. Prime
/core_piv_loop:prime
# Agent: Context loaded ✓
# 2. Plan
/core_piv_loop:plan-feature "Add email notifications when user reaches milestones"
# Agent: Plan created → .claude/plans/email-notifications.md
# 3. Review Plan
# [Review the generated plan]
# 4. Execute
/core_piv_loop:execute
# Agent: Step 1/6: Create email service... ✓
# Step 2/6: Add SMTP configuration... ✓
# ...
# All steps complete ✓
# 5. Validate
/validation:validate
# Agent: ✓ Linting passed
# ✓ Tests: 58/58 passed
# ✓ Build successful
# 6. UI Test
/ui-test
# Agent: ✓ Email notification shows in UI
# ✓ Settings page allows email toggle
# 7. Commit
/commit
# Agent: feat(notifications): add email alerts for 7-day streaks

# Core PIV Loop
/core_piv_loop:prime # Load project context
/core_piv_loop:plan-feature # Plan implementation
/core_piv_loop:execute # Execute plan
# Validation
/validation:validate # Full validation suite
/validation:code-review # Technical code review
/validation:code-review-fix # Fix review issues
/validation:execution-report # Generate execution report
/validation:system-review # Review system architecture
# Bug Fixing
/gitlab_bug_fix:rca # Root cause analysis
/gitlab_bug_fix:implement-fix # Implement fix
# Testing
/ui-test # UI functionality tests
/uat-test # User acceptance tests
/regression-test # Full regression suite
# DevSecOps
/devsecops:sast # Static application security testing
/devsecops:dependency-scan # Vulnerable dependency detection
/devsecops:secrets-scan # Hardcoded secrets detection
/devsecops:container-scan # Container image vulnerabilities
/devsecops:compliance-check # Standards compliance (OWASP, CIS)
# Utilities
/commit # Create git commit
/init-project # Setup and start servers
/create-prd # Generate PRD (greenfield)
/brownfield:analyze # Analyze existing codebase (brownfield)
/ui-ux-review # Review UI/UX design and accessibility

- Structured workflow via the PIV loop (Prime → Implement → Validate)
- 23 specialized skills across 6 categories:
- Core PIV Loop (3 skills)
- Validation (5 skills)
- Bug Fixing (2 skills)
- Testing (3 skills)
- DevSecOps (5 skills)
- Utilities (5 skills)
- Brownfield & Greenfield support:
- Analyze existing codebases and auto-generate documentation
- Create new projects from scratch with PRD generation
- Comprehensive DevSecOps integration:
- SAST (Static Application Security Testing)
- Dependency vulnerability scanning
- Secrets detection in code and git history
- Container image security scanning
- Compliance checking (OWASP, CIS, PCI-DSS, GDPR)
- OpenShift deployment with production-grade configurations
- UI/UX review with accessibility validation (WCAG 2.1 AA)
- GitLab integration via MCP for issues, MRs, and CI/CD
- agent-browser for AI-driven UI testing without Selenium code
- Reference documentation organized by domain (backend, frontend, devops, gitlab, testing)
- 604 lines of configuration for complete framework customization
- Specialized agents for different task types (General-Purpose, Plan, Explore, Bash)
- Comprehensive validation at every step with automated reporting
- Plan before implementing - Avoid coding in the dark
- Validate continuously - Catch issues early
- Document patterns - Teach the AI your conventions
- Test thoroughly - Unit → Integration → E2E
- Commit atomically - Clean, reviewable history
- 📘 Primary Guide: START HERE - CLAUDE.md - Complete guide to using Claude Code with this framework (architecture, workflows, all 23 skills, best practices, troubleshooting)
- 📅 Best Practices 2026: .claude/docs/CLAUDE_BEST_PRACTICES_2026.md - Current best practices, what's new, modern workflows, security-first approach (Updated: Jan 29, 2026)
- 📖 Commands Reference: .claude/docs/CLAUDE_COMMANDS.md - Complete guide to all 23 skills, workflows, and troubleshooting
- Reference Documentation: Read best practices in .claude/reference/ (organized by domain: backend/, frontend/, devops/, gitlab/, testing/, security/)
- 🔒 CyberArk IAM Guide: .claude/reference/security/cyberark-iam-guide.md - Givaudan-specific guide for secrets management, credential vault access, and secure application development
- Example Plans: Review completed plans in .claude/plans/completed/
- RCA Documents: Check root cause analyses in .claude/rca/
- Reports: View validation and security reports in .claude/reports/
- Configuration: Review comprehensive settings in .claude/config/settings.json (604 lines)
- Skills Documentation: Check .claude/skills/ for individual skill definitions (23 skills across 6 categories)
- Brownfield Projects: Use /brownfield:analyze to generate documentation from existing code
- Security: Run DevSecOps scans before deployment (SAST, dependency, secrets, container, compliance)
- UI/UX: Use /ui-ux-review to validate design and accessibility
- Quick Commands:
  - /validation:validate - When in doubt about code quality
  - /core_piv_loop:prime - To refresh context
  - /devsecops:sast - To check for security vulnerabilities
  - /ui-test - To validate UI functionality
- Claude Code Documentation: https://docs.anthropic.com/claude-code
- MCP Protocol: https://modelcontextprotocol.io
- agent-browser: https://github.com/agent-tools/agent-browser
- Example Project: https://github.com/coleam00/template-project
Built with ❤️ using the AI Development Framework
