A streamlined collection of 20 production-ready slash commands for ML, deep learning, bioinformatics, and scientific computing workflows.
This customized command set is optimized for:
- Machine Learning & Deep Learning pipelines
- Bioinformatics data analysis and research
- Scientific Computing with test-driven development
- Research Projects with structured feature specification and planning
- Data Science workflows with ETL and literature research
Note: These commands appear as /tdd-cycle, /specify-feature, etc. The workflows/ subdirectory is organizational only.
| Command | Purpose | Use Case |
|---|---|---|
| `/tdd-cycle` | Test-driven development orchestration | Implement algorithms with test coverage |
| `/data-driven-feature` | ML-powered functionality development | Feature engineering, model deployment |
| `/ml-pipeline` | ML pipeline orchestration | End-to-end model training, validation, optimization |
| `/smart-fix` | Intelligent debugging | Investigate model bugs, data issues, research problems |
| `/full-review` | Multi-perspective code analysis | Review model code, architecture, research implementations |
| `/performance-optimization` | System-wide optimization | Optimize model inference, query performance, data loading |
| `/workflow-automate` | CI/CD pipeline automation | Automate experiments, training runs, batch processing |
| `/deep-web-research` | Comprehensive literature research | Find papers, methodologies, benchmark results, state of the art |
| `/specify-feature` | Structured feature specification | Create detailed requirements for ML/research features |
| `/feature-plan` | Implementation planning | Generate design, contracts, and task strategy from specifications |
Note: These commands appear as /tdd-red, /tdd-green, etc. The tools/ subdirectory is organizational only.
| Command | Purpose | Use Case |
|---|---|---|
| `/tdd-red` | Failing test creation | Create edge-case tests for algorithms |
| `/tdd-green` | Minimal implementation | Implement code to pass tests |
| `/tdd-refactor` | Code optimization | Refactor while maintaining test integrity |
| `/data-pipeline` | ETL/ELT architecture | Build data ingestion, transformation, loading pipelines |
| `/code-explain` | Code documentation | Document complex algorithms, ML models, research code |
| `/think` | Structured reasoning framework | Multi-angle problem analysis for design decisions |
| `/newskill` | Create custom skills | Build domain-specific Claude skills on the fly |
| `/context-save` | State persistence | Save architecture decisions, experimental configurations |
| `/context-restore` | State recovery | Restore previous decisions, configurations, context |
| `/open-research` | Access research reports | Retrieve saved literature reviews and research findings |
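As a concrete illustration of the red → green → refactor loop that `/tdd-red`, `/tdd-green`, and `/tdd-refactor` drive, here is a minimal pytest-style sketch. The `gc_content` function and its edge cases are hypothetical examples, not output of any command:

```python
# Red: write failing tests first, including edge cases (/tdd-red).
# Green: implement just enough to pass (/tdd-green).
# Refactor: clean up while the tests stay green (/tdd-refactor).

def gc_content(seq: str) -> float:
    """Fraction of G/C bases in a DNA sequence (case-insensitive)."""
    if not seq:
        return 0.0  # edge case surfaced by the "red" phase
    seq = seq.upper()
    return (seq.count("G") + seq.count("C")) / len(seq)

def test_gc_content_basic():
    assert gc_content("GCGC") == 1.0
    assert gc_content("ATAT") == 0.0

def test_gc_content_edge_cases():
    assert gc_content("") == 0.0      # empty input
    assert gc_content("gcat") == 0.5  # lowercase input

if __name__ == "__main__":
    test_gc_content_basic()
    test_gc_content_edge_cases()
    print("all tests passed")
```

Running the tests before `gc_content` exists (or before the empty-input guard) is the "red" step; the implementation above is the "green" step.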
# Navigate to Claude configuration directory
cd ~/.claude
# Clone the commands repository into the correct directory
git clone https://github.com/flight505/ClaudeCommands.git commands
# The commands will now be available in ~/.claude/commands/
# Commands appear as: /tdd-cycle, /tdd-red, /specify-feature, etc.
# Subdirectories (workflows/, tools/) are organizational and show in descriptions

Important: Claude Code uses subdirectories for organization only. Commands are invoked directly without namespace prefixes.
Actual command syntax:
- Commands in `workflows/` → `/tdd-cycle`, `/specify-feature`, `/ml-pipeline`, etc.
- Commands in `tools/` → `/tdd-red`, `/tdd-green`, `/think`, etc.
- Subdirectories show in descriptions as "(project:workflows)" or "(project:tools)" for organization
# Workflow commands (from workflows/ subdirectory)
/tdd-cycle implement batch normalization algorithm
/specify-feature create neural network for genomic classification
/feature-plan specs/001-genome-classifier/spec.md
/deep-web-research latest advances in transformer models for protein structure prediction
/data-driven-feature extract features from genomic sequences
# Tool commands (from tools/ subdirectory)
/tdd-red create edge case tests for sequence alignment
/tdd-green implement algorithm to pass tests
/think comparing approach A vs approach B
/code-explain document architecture and components

This section provides a clear path from initial idea to working implementation. Choose the path that best fits your needs.
Best for: ML/bioinformatics projects, structured development, team collaboration, template-driven workflows
# Step 1: Research (if exploring new technologies)
/deep-web-research [technology or methodology]
# Step 2: Think through design decisions
/think comparing [approach A] vs [approach B] for [problem]
# Step 3: Create structured specification
/specify-feature [detailed feature description]
# Creates: specs/[###-feature-name]/spec.md
# Step 4: Generate implementation plan
/feature-plan specs/001-feature-name/spec.md
# Creates: plan.md, research.md, data-model.md, contracts/, quickstart.md
# Step 5: Review generated documents
# - Check research.md for technical decisions
# - Verify data-model.md for data structures
# - Review contracts/ for interface definitions
# Step 6: Implement with TDD
/tdd-red create failing tests based on contracts and requirements
/tdd-green implement minimal solution to pass tests
/tdd-refactor improve code quality while keeping tests green
# OR use complete TDD workflow for larger features
/tdd-cycle implement feature based on plan.md
# Step 7: Integration & optimization (as needed)
/ml-pipeline [if ML feature]
/performance-optimization [if needed]
# Step 8: Documentation & context
/code-explain document architecture and components
/context-save save progress and configuration

Output Structure:
specs/[###-feature-name]/
├── spec.md # Feature specification
├── plan.md # Implementation plan
├── research.md # Technical research
├── data-model.md # Data structures
├── contracts/ # API contracts
└── quickstart.md # Quick start guide
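Before moving on to TDD (Step 6), the generated layout can be sanity-checked mechanically. The helper below is not part of the framework, just a sketch assuming the directory structure shown above:

```python
from pathlib import Path

# Artifacts /feature-plan is expected to generate (per the tree above).
EXPECTED = ["spec.md", "plan.md", "research.md", "data-model.md", "quickstart.md"]

def missing_artifacts(feature_dir: str) -> list[str]:
    """Return expected planning artifacts absent from a specs/ feature directory."""
    root = Path(feature_dir)
    missing = [name for name in EXPECTED if not (root / name).is_file()]
    if not (root / "contracts").is_dir():
        missing.append("contracts/")
    return missing

# Example: report gaps before starting TDD
# print(missing_artifacts("specs/001-feature-name"))
```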
Best for: Exploratory work, interactive guidance, solo development, conversational workflows
# Step 1: Use kiro-spec-creator agent interactively
# Creates: .kiro/specs/{feature}/requirements.md
# Guides through: Requirements → Design → Tasks
# Step 2: Implement tasks one at a time
# Use kiro-task-executor agent for focused implementation
# Step 3: Use TDD tools for implementation
/tdd-red create failing tests
/tdd-green implement minimal solution
/tdd-refactor improve code quality

Output Structure:
.kiro/specs/{feature}/
├── requirements.md # Requirements and user stories
├── design.md # Architecture and design
└── tasks.md # Implementation tasks
| Aspect | Specify Framework | Kiro Method |
|---|---|---|
| Best For | ML/bioinformatics, structured projects, teams | Exploratory work, interactive guidance |
| Approach | Template-driven, automated | Conversation-based, approval-driven |
| Output Location | `specs/[###-feature-name]/` | `.kiro/specs/{feature}/` |
| Templates | Uses `.specify/templates/` | Custom structure |
| Constitution | Enforces `.specify/memory/constitution.md` | Flexible principles |
| Research | Integrated in planning phase | Manual or embedded |
| Speed | Faster (automated) | Slower (interactive) |
| Consistency | High (template-driven) | Medium (conversation-based) |
| Convergence | Both paths use the same TDD tools for implementation | Same TDD tools |
Choose Specify Framework if:
- ✅ Building ML/bioinformatics projects
- ✅ Working with a team (consistent structure)
- ✅ Want template-driven output
- ✅ Need constitution compliance
- ✅ Prefer automation over interaction
Choose Kiro Method if:
- ✅ Exploring new ideas interactively
- ✅ Want step-by-step guidance
- ✅ Prefer conversational workflow
- ✅ Working solo on exploratory features
- ✅ Need flexibility in structure
Both paths converge at implementation:
- Both use the same TDD tools (`/tdd-red`, `/tdd-green`, `/tdd-refactor`)
- Both can use `/ml-pipeline` for ML features
- Both can use optimization and debugging workflows
This is the most powerful workflow - from discovery to implementation:
# Step 1: Discover and research
/deep-web-research latest approaches for [your technology/domain]
# Step 2: Think through design tradeoffs
/think comparing [approach A] vs [approach B] for [specific problem]
# Step 3: Create structured specification
/specify-feature [detailed feature description with requirements]
# Step 4: Plan implementation with architecture and contracts
/feature-plan specs/001-feature-name/spec.md
# Step 5: Review the generated plan
# - Check generated research.md for tech decisions
# - Verify data-model.md for entity/data structures
# - Review contracts/ for interface definitions
# Step 6: Start TDD development
/tdd-red create failing tests based on contracts and requirements
# Step 7: Implement
/tdd-green implement minimal solution to pass tests
# Step 8: Optimize
/tdd-refactor improve code quality while keeping tests green
# Step 9: Build complete system
/ml-pipeline or custom workflow for integration
# Step 10: Performance tune
/performance-optimization optimize critical paths and bottlenecks
# Step 11: Document
/code-explain document architecture and key components
# Step 12: Save project state
/context-save save progress and configuration for later sessions

# Create failing test with edge cases
/tdd-red create comprehensive tests with edge cases for [your algorithm]
# Implement minimal algorithm
/tdd-green implement algorithm to pass all tests
# Optimize and refactor
/tdd-refactor optimize performance while keeping tests green
# Or orchestrate complete TDD cycle
/tdd-cycle [feature description] with comprehensive test coverage

# Complete feature with data/model orchestration
/data-driven-feature [feature description with data/model components]
# Analyze and optimize performance
/performance-optimization optimize [critical component] performance
# Debug behavior
/smart-fix investigate [specific issue or unexpected behavior]

# Build data pipeline
/data-pipeline build ETL pipeline for [data source] with [transformations]
# Create training or processing workflow
/ml-pipeline [workflow description] with validation
# Automate experiments or jobs
/workflow-automate [experiment/task] with [parameters]

# Conduct comprehensive research
/deep-web-research [topic, technology, or methodology]
# Think through design decisions
/think analyzing [approach A] vs [approach B] tradeoffs
# Save findings for later
/context-save save research notes and references
# Retrieve previous research
/open-research [topic or keyword]
# Code review and documentation
/full-review [code/architecture for review]
/code-explain document [complex component or algorithm]

# Create a reusable skill for your domain
/newskill create a skill for [domain-specific task] with [specific requirements]
# Create a skill for specialized workflows
/newskill create a skill for [specialized workflow] with [key features]

Your project includes two complementary approaches for structured development:
The Specify framework provides templates and project principles for consistent feature development. This is the recommended path when you want structured, template-driven delivery.
.specify/
├── memory/
│ └── constitution.md # Project principles and engineering guidelines
├── templates/
│ ├── spec-template.md # Feature specification template
│ ├── plan-template.md # Implementation plan template
│ ├── tasks-template.md # Task generation template
│ └── agent-file-template.md # AI agent instruction template
└── scripts/bash/
├── setup-plan.sh # Initialize new features
├── create-new-feature.sh # Create feature directories
└── update-agent-context.sh # Update AI agent context
Constitution Principles:
- Test-Driven Development (mandatory)
- Reproducibility First
- Data Quality & Validation
- Performance & Scalability
- Documentation & Clarity
- Simplicity & YAGNI
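The "Reproducibility First" principle is the kind of thing that can be enforced directly in code. Here is a minimal stdlib-only sketch of deterministic seeding (real projects would also seed NumPy and framework RNGs; the function name is illustrative):

```python
import random

def seeded_shuffle(items, seed: int = 42):
    """Shuffle a copy of items with a fixed seed so runs are reproducible."""
    rng = random.Random(seed)  # local RNG: no global state is mutated
    out = list(items)
    rng.shuffle(out)
    return out

a = seeded_shuffle(range(5))
b = seeded_shuffle(range(5))
assert a == b  # identical across runs: reproducible
```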
Command Sequence:
1. `/constitution` – review or update `.specify/memory/constitution.md` so project principles are current.
2. `/specify-feature` – generate or refresh `specs/[###]/spec.md` using `spec-template.md`.
3. `/feature-plan` – produce `plan.md`, `research.md`, `data-model.md`, `contracts/`, and `quickstart.md` from the approved spec.
4. `/tasks` – convert the plan outputs into actionable steps with `tasks-template.md` (ready for `/implement` or manual execution).
Workflow:
- Specification (`/specify-feature`) → Creates `spec.md`
- Planning (`/feature-plan`) → Creates `plan.md`, `research.md`, `data-model.md`, `contracts/`
- Implementation → Uses TDD tools or workflows
Five specialized agent personas for interactive, conversational development. Use this for exploratory work or when you want step-by-step guidance.
| Agent | Role | Use For |
|---|---|---|
| kiro-assistant | General support | Quick help, code review, troubleshooting |
| kiro-spec-creator | Spec workflow | Feature requirements, design, tasks |
| kiro-feature-designer | Architecture | System design, components, data models |
| kiro-task-planner | Task generation | Creating actionable implementation tasks |
| kiro-task-executor | Implementation | Executing tasks one at a time |
Workflow:
- Requirements (kiro-spec-creator) → Creates `requirements.md`
- Design (kiro-feature-designer) → Creates `design.md`
- Tasks (kiro-task-planner) → Creates `tasks.md`
- Implementation (kiro-task-executor) → Executes tasks one by one
See agents/README.md for detailed information.
- Both paths converge at the implementation phase using the same TDD tools
- Specify Framework is recommended for ML/bioinformatics projects
- Kiro Method is good for exploratory or interactive development
- You can mix approaches: use Specify for structure, Kiro agents for implementation guidance
- Technology Stack: Name the frameworks (PyTorch, TensorFlow, scikit-learn) and data formats (FASTA, HDF5, Parquet)
- Data Constraints: Include dataset size, memory requirements, compute resources
- Integration Requirements: Specify databases, external tools, APIs
- Output Preferences: Indicate testing framework (pytest), documentation format
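To make the data-handling guidance above concrete, here is a minimal stdlib-only sketch of the extract-transform-load pattern that `/data-pipeline` orchestrates. Table and column names are hypothetical, and a real pipeline would target formats like HDF5 or Parquet rather than SQLite:

```python
import csv
import io
import sqlite3

def load_samples(csv_text: str, conn: sqlite3.Connection) -> int:
    """Extract rows from CSV text, transform types, load into SQLite."""
    conn.execute("CREATE TABLE IF NOT EXISTS samples (sample_id TEXT, gc REAL)")
    rows = [
        (row["sample_id"], float(row["gc"]))  # transform: cast gc to float
        for row in csv.DictReader(io.StringIO(csv_text))
    ]
    conn.executemany("INSERT INTO samples VALUES (?, ?)", rows)
    conn.commit()
    return len(rows)

conn = sqlite3.connect(":memory:")
n = load_samples("sample_id,gc\ns1,0.41\ns2,0.55\n", conn)
print(n)  # 2
```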
# 1. Discover literature
/deep-web-research machine learning approaches to protein function prediction
# 2. Think through methodology
/think comparing supervised vs unsupervised learning for protein classification
# 3. Save research context
/context-save research papers and recommended methodologies
# 4. Later: retrieve research
/open-research protein classification

# 1. Create specification with requirements
/specify-feature <your feature description>
# Creates: specs/001-feature-name/spec.md
# 2. Plan implementation from spec
/feature-plan specs/001-feature-name/spec.md
# Creates: plan.md, research.md, data-model.md, contracts/, quickstart.md
# 3. Review generated documents
# - research.md: technical decisions
# - data-model.md: data structures
# - contracts/: API contracts
# - quickstart.md: quick start guide

# 1. Use kiro-spec-creator agent interactively
# Creates: .kiro/specs/{feature}/requirements.md
# Guides through: Requirements → Design → Tasks
# 2. Use kiro-task-executor for implementation
# Executes tasks one at a time from tasks.md

# Complete research pipeline
/specify-feature extract features from bioinformatics dataset
/feature-plan specs/001-feature/spec.md
/ml-pipeline train and validate predictive model
/performance-optimization optimize model inference speed
/code-explain document model architecture and methods
/context-save save model configuration and results

# Save project state
/context-save deep learning model checkpoints and experiment parameters
# Later, restore context
/context-restore continue model development with saved parameters

- Workflows typically require 30-90 seconds for complete orchestration
- Tools execute in 5-30 seconds for focused operations
- Research workflows take 3-5 minutes for comprehensive literature review
- Feature planning typically takes 2-3 minutes for design generation
- Provide detailed requirements upfront to minimize iteration cycles
- Use `context-save`/`context-restore` for multi-session projects
- Use `deep-web-research` + `open-research` for building research libraries
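State of the kind `context-save`/`context-restore` persist can also be mirrored as plain JSON for scripts that run outside Claude. The file name and keys below are hypothetical:

```python
import json
from pathlib import Path

def save_context(path: str, **state) -> None:
    """Persist experiment state (seeds, hyperparameters, paths) as JSON."""
    Path(path).write_text(json.dumps(state, indent=2, sort_keys=True))

def restore_context(path: str) -> dict:
    """Reload previously saved experiment state."""
    return json.loads(Path(path).read_text())

# Hypothetical usage in a multi-session project:
# save_context("experiment.json", seed=42, lr=3e-4, checkpoint="ckpt/best.pt")
# state = restore_context("experiment.json")
```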
Important: Commands must be in ~/.claude/commands/ (not ~/.claude/ClaudeCommands/). Subdirectories (workflows/, tools/) are organizational only and don't create namespace prefixes.
~/.claude/commands/ # ← Must be named "commands" (lowercase)
├── workflows/ # Organizational subdirectory
│ ├── tdd-cycle.md # Creates command: /tdd-cycle (shows as "project:workflows")
│ ├── data-driven-feature.md
│ ├── ml-pipeline.md
│ ├── deep-web-research.md
│ ├── specify-feature.md
│ ├── feature-plan.md
│ └── ...
├── tools/ # Organizational subdirectory
│ ├── tdd-red.md # Creates command: /tdd-red (shows as "project:tools")
│ ├── tdd-green.md
│ ├── tdd-refactor.md
│ ├── think.md
│ ├── newskill.md
│ ├── open-research.md
│ └── ...
└── README.md
.specify/ # Specify framework (integrated)
├── memory/
│ └── constitution.md # Project principles and engineering guidelines
├── templates/ # Specification and planning templates
│ ├── spec-template.md
│ ├── plan-template.md
│ ├── tasks-template.md
│ └── agent-file-template.md
└── scripts/bash/ # Helper scripts (optional utilities)
docs/ # Generated by workflows
├── research/ # Research reports from deep-web-research
│ ├── index.jsonl # Index of research reports
│ └── YYYY-MM-DD_HH-mm-topic-slug.md
specs/ # Feature specifications and plans
├── 001-feature-name/
│ ├── spec.md # Specification
│ ├── plan.md # Implementation plan
│ ├── research.md # Technical research
│ ├── data-model.md # Data structures
│ ├── contracts/ # API contracts
│ ├── quickstart.md # Quick start guide
│ └── tasks.md # Implementation tasks
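Saved research can also be located outside Claude by scanning `docs/research/index.jsonl`. The sketch below assumes each line of the index is a JSON object with at least a `topic` field, which is an assumption about the format rather than a documented schema:

```python
import json

def find_reports(index_jsonl: str, keyword: str) -> list[dict]:
    """Return index entries whose topic mentions keyword (case-insensitive)."""
    hits = []
    for line in index_jsonl.splitlines():
        if not line.strip():
            continue  # skip blank lines in the index
        entry = json.loads(line)
        if keyword.lower() in entry.get("topic", "").lower():
            hits.append(entry)
    return hits

# Hypothetical index line, matching the naming scheme shown above:
sample = '{"topic": "protein classification", "path": "docs/research/2024-01-01_x.md"}\n'
print(len(find_reports(sample, "protein")))  # 1
```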
# 1. Research technologies and approaches
/deep-web-research [technology/approach for your feature]
# 2. Think through design decisions
/think comparing [option A] vs [option B] for [specific concern]
# 3. Create detailed feature specification
/specify-feature [complete feature description]
# 4. Generate implementation plan
/feature-plan specs/001-feature-name/spec.md
# 5. Review specifications
# - Verify research.md explains architectural decisions
# - Check data-model.md for data structures
# - Ensure contracts/ define clear boundaries
# 6. Create failing tests
/tdd-red create comprehensive failing tests
# 7. Implement minimal solution
/tdd-green implement to pass tests
# 8. Optimize and refactor
/tdd-refactor improve code quality
# 9. Build complete system
/data-driven-feature [full feature integration]
# 10. Validate and optimize
/performance-optimization optimize critical components
# 11. Document code
/code-explain document architecture and components
# 12. Save project state
/context-save save progress for future sessions

# Session 1: Research a topic
/deep-web-research [your research topic]
/open-research # Open the saved research
# Session 2: Research another topic
/deep-web-research [another research topic]
/think [analyze findings and tradeoffs]
# Later: Retrieve previous research
/open-research [keyword or partial topic name]
# Build knowledge base
/context-save save important research and references

When extending `.claude`, such as when developing new skills, tools, or workflows, refer to the following sites for inspiration, ideas, or ready-made plugins. Claude Code can use these resources to identify commands, agents, or workflows relevant to your needs.
- Claude Code Commands — Explore available slash commands for Claude Code
- [Claude Code Commands Search](https://slashcommands.cc/?search= ...) — Search for specific Claude Code commands
- Claude Code Agents — Discover agent templates and automations
- Claude Code Plugins Marketplace — Browse and recommend plugins for extended functionality
These resources are especially helpful for evolving .claude configuration or when Claude Code is tasked with suggesting new capabilities.
- Claude Code Official Documentation
- Slash Commands Reference
- GitHub Spec Kit - Source of the `.specify` framework
MIT License - See LICENSE file for complete terms.