Version: 2.0
Status: Production Ready
Last Updated: November 15, 2025
A comprehensive framework for structured, LLM-assisted software development with real-time progress tracking, parallel execution support, and production-ready libraries.
The LLM Methodology is a battle-tested framework for managing complex software development with AI assistance. It provides:
- 5-Tier Planning Hierarchy: From strategic vision (NORTH_STAR) to executable tasks (EXECUTION_TASK)
- Real-Time Dashboard: Rich terminal UI for tracking progress across multiple plans
- Production Libraries: Logging, metrics, error handling, caching, and more
- Parallel Execution: Intelligent dependency analysis and batch execution
- Automated Workflows: Scripts for generation, validation, and archiving
Perfect for teams building complex systems with LLM assistance, or solo developers managing ambitious projects.
```bash
python LLM/main.py
```

Rich terminal dashboard showing:
- Active Plans: Visual progress tracking (11/18 achievements, 61%)
- Parallel Opportunities: Automatic detection with time savings (67% faster)
- Real-Time Updates: Filesystem monitoring with auto-refresh
- Quick Actions: One-key shortcuts for common operations
- Health Scores: Plan health metrics (0-100) with component breakdown
```
NORTH_STAR      (800-2,000 lines)   Strategic vision, principles
      ↓
GRAMMAPLAN      (600-1,500 lines)   Coordinates 4-6 PLANs
      ↓
PLAN            (300-900 lines)     Feature/project scope
      ↓
SUBPLAN         (200-600 lines)     Single achievement design
      ↓
EXECUTION_TASK  (<200 lines)        Executable work unit
```
Each tier has:
- Size Limits: Enforced via validation scripts
- Templates: Copy-paste ready structures
- Protocols: Step-by-step workflows
- Guides: Comprehensive how-tos
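To make the size limits concrete, here is a minimal standalone sketch of a tier-aware size check. It is illustrative only; the real enforcement lives in `LLM/scripts/validation/check_plan_size.py`, whose interface may differ:

```python
from pathlib import Path

# Upper bounds taken from the tier diagram above
TIER_LIMITS = {
    "NORTH_STAR": 2000,
    "GRAMMAPLAN": 1500,
    "PLAN": 900,
    "SUBPLAN": 600,
    "EXECUTION_TASK": 200,
}

def check_size(doc_path: str, tier: str) -> bool:
    """Return True if the document fits its tier's line budget."""
    line_count = len(Path(doc_path).read_text(encoding="utf-8").splitlines())
    limit = TIER_LIMITS[tier]
    if line_count > limit:
        print(f"{doc_path}: {line_count} lines exceeds the {tier} limit of {limit}")
        return False
    return True
```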
Automatically detect and execute independent achievements in parallel:
```
# Dashboard shows parallel opportunities
Level 0 Group (3 achievements):
├── 3.1: Performance Optimization
├── 3.2: Error Handling Enhancement
└── 3.3: Documentation Update

Time: 3.5h parallel vs 10.5h sequential
Savings: 7.0h (67%)
```

Batch creation with dependency tracking:

```bash
python LLM/scripts/generation/batch_subplan.py --plan-path work-space/plans/MY-PLAN --level 0
python LLM/scripts/generation/batch_execution.py --plan-path work-space/plans/MY-PLAN --level 0
```

Core Libraries (`LLM/core/libraries/`):
- Logging: Structured JSON logging with context, Loki integration
- Metrics: Prometheus-compatible counters, histograms, gauges
- Error Handling: Structured exceptions with context and suggestions
- Retry: Exponential backoff with configurable policies
- Caching: LRU cache with TTL and mtime-based invalidation
- Validation: Rule-based validation with clear error messages
- Serialization: Pydantic-based serialization with custom encoders
- Rate Limiting: Token bucket rate limiter
- Concurrency: Async execution with TPM tracking
- Database: MongoDB operations with error handling
All libraries are:
- ✅ Fully tested (>90% coverage)
- ✅ Type-hinted
- ✅ Documented with examples
- ✅ Production-proven
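As a flavor of what these libraries provide, here is a standalone sketch of the exponential-backoff pattern the retry library implements; it is illustrative only, not the actual `LLM.core.libraries.retry` API:

```python
import random
import time

def retry_with_backoff(func, max_attempts=5, base_delay=0.5, max_delay=30.0):
    """Call func, retrying failures with exponential backoff plus jitter."""
    for attempt in range(1, max_attempts + 1):
        try:
            return func()
        except Exception:
            if attempt == max_attempts:
                raise  # out of attempts; surface the last error
            # Delay doubles each attempt, capped, jittered to spread load
            delay = min(base_delay * 2 ** (attempt - 1), max_delay)
            time.sleep(delay * random.uniform(0.5, 1.5))
```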
```bash
# Clone repository
git clone <repository-url>
cd context_manager

# Install dependencies
pip install -r requirements.txt
```

Dependencies:
- Python 3.8+
- pydantic>=2.7.0 (data validation)
- pymongo>=4.7.0 (MongoDB)
- openai>=1.42.0 (LLM integration)
- pyyaml>=6.0 (configuration)
- rich>=13.0.0 (terminal UI)
```bash
# Main dashboard (all plans)
python LLM/main.py

# Specific plan
python LLM/main.py --plan 1
```

```bash
# 1. Read the methodology
cat LLM/LLM-METHODOLOGY.md

# 2. Use the template
cp LLM/templates/PLAN-TEMPLATE.md work-space/plans/PLAN_MY-FEATURE.md

# 3. Follow the start protocol
cat LLM/protocols/IMPLEMENTATION_START_POINT.md
```

```bash
# Interactive mode (auto-detects workflow state)
python LLM/scripts/generation/generate_prompt.py @MY-PLAN

# Create SUBPLAN for achievement 1.2
python LLM/scripts/generation/generate_prompt.py @MY-PLAN --achievement 1.2 --interactive

# Continue execution
python LLM/scripts/generation/generate_prompt.py continue @EXECUTION_TASK_MY-PLAN_12_01.md
```
```
context_manager/
├── README.md                  # This file
├── requirements.txt           # Python dependencies
├── LLM-METHODOLOGY.md         # Complete methodology reference
│
├── LLM/                       # Core framework
│   ├── main.py                # Dashboard entry point
│   ├── README.md              # LLM folder documentation
│   │
│   ├── core/                  # Production libraries
│   │   └── libraries/         # Reusable libraries
│   │       ├── logging/       # Structured logging
│   │       ├── metrics/       # Prometheus metrics
│   │       ├── error_handling/ # Exception framework
│   │       ├── retry/         # Retry policies
│   │       ├── caching/       # LRU cache
│   │       ├── validation/    # Rule-based validation
│   │       ├── serialization/ # JSON serialization
│   │       ├── rate_limiting/ # Rate limiter
│   │       ├── concurrency/   # Async execution
│   │       └── database/      # MongoDB operations
│   │
│   ├── dashboard/             # Interactive dashboard
│   │   ├── main_dashboard.py  # Main dashboard UI
│   │   ├── plan_dashboard.py  # Plan-specific UI
│   │   ├── state_detector.py  # State analysis
│   │   ├── parallel_detector.py # Parallel detection
│   │   ├── workflow_executor.py # Action execution
│   │   └── metrics.py         # Dashboard metrics
│   │
│   ├── scripts/               # Automation tools
│   │   ├── generation/        # Prompt generation
│   │   │   ├── generate_prompt.py # Main orchestrator
│   │   │   ├── batch_subplan.py   # Batch SUBPLAN creation
│   │   │   └── batch_execution.py # Batch EXECUTION creation
│   │   ├── validation/        # Size & structure validation
│   │   │   ├── check_plan_size.py
│   │   │   ├── validate_achievement_completion.py
│   │   │   └── validate_subplan_executions.py
│   │   └── archiving/         # Archive completed work
│   │       └── manual_archive.py
│   │
│   ├── templates/             # Copy-paste templates
│   │   ├── NORTH_STAR-TEMPLATE.md
│   │   ├── GRAMMAPLAN-TEMPLATE.md
│   │   ├── PLAN-TEMPLATE.md
│   │   ├── SUBPLAN-TEMPLATE.md
│   │   ├── EXECUTION_TASK-TEMPLATE.md
│   │   └── PROMPTS.md         # Ready-to-use prompts
│   │
│   ├── guides/                # How-to guides
│   │   ├── NORTH-STAR-GUIDE.md
│   │   ├── GRAMMAPLAN-GUIDE.md
│   │   ├── SUBPLAN-WORKFLOW-GUIDE.md
│   │   └── FOCUS-RULES.md
│   │
│   ├── protocols/             # Workflow protocols
│   │   ├── IMPLEMENTATION_START_POINT.md
│   │   ├── CREATE_SUBPLAN.md
│   │   ├── CREATE_EXECUTION.md
│   │   ├── IMPLEMENTATION_RESUME.md
│   │   └── IMPLEMENTATION_END_POINT.md
│   │
│   ├── tests/                 # Comprehensive tests
│   │   ├── dashboard/         # Dashboard tests (232+ tests)
│   │   └── scripts/           # Script tests
│   │
│   └── docs/                  # Additional documentation
│       ├── ERROR_HANDLING_PATTERNS.md
│       ├── PERFORMANCE_OPTIMIZATION_GUIDE.md
│       └── FEEDBACK_SYSTEM_GUIDE.md
│
└── work-space/                # Active work directory
    ├── north-stars/           # Strategic vision documents
    ├── grammaplans/           # Multi-plan coordination
    ├── plans/                 # Active plans (17+)
    │   └── PLAN_NAME/
    │       ├── PLAN_NAME.md
    │       ├── subplans/      # Achievement designs
    │       ├── execution/     # Execution tasks
    │       │   └── feedbacks/ # APPROVED/FIX files
    │       └── parallel.json  # Parallel execution config
    ├── analyses/              # Strategic analyses (125+)
    ├── knowledge/             # Learnings & patterns
    └── archive/               # Completed work
```
Start with the core concepts:
```bash
# Read the methodology overview
cat LLM-METHODOLOGY.md

# Understand the 4-phase workflow
cat LLM/guides/SUBPLAN-WORKFLOW-GUIDE.md
```

Key Concepts:
- 5-tier hierarchy (NORTH_STAR → EXECUTION_TASK)
- 4-phase workflow (Design → Plan → Execute → Synthesize)
- Context budgets (what each agent reads)
- Filesystem-first tracking (no manual updates)
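Filesystem-first tracking means status is derived from which files exist, never from a manually updated field. A minimal sketch of the idea, assuming the `APPROVED_`/`FIX_` feedback naming shown later in this README; the real logic lives in `LLM/dashboard/state_detector.py` and may differ:

```python
from pathlib import Path

def achievement_status(plan_dir: Path, achievement_id: str) -> str:
    """Infer an achievement's status purely from files on disk."""
    flat = achievement_id.replace(".", "")  # e.g. "2.1" -> "21"
    feedbacks = plan_dir / "execution" / "feedbacks"
    if (feedbacks / f"APPROVED_{flat}.md").exists():
        return "complete"
    if (feedbacks / f"FIX_{flat}.md").exists():
        return "needs fixes"
    # Glob patterns below are assumed from the naming used in this README
    if list((plan_dir / "execution").glob(f"EXECUTION_TASK_*_{flat}_*.md")):
        return "in progress"
    if list((plan_dir / "subplans").glob(f"*_{flat}*.md")):
        return "designed"
    return "not started"
```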
```bash
# Launch and explore
python LLM/main.py

# Try different views
# - Main dashboard (all plans)
# - Plan dashboard (detailed view)
# - Actions menu (execute, create, review)
```

Dashboard Features:
- Press `1-6` to select actions
- Press `r` to refresh state
- Press `b` to go back
- Press `s` for settings (themes, auto-copy)
Follow the complete workflow:
```bash
# 1. Copy template
cp LLM/templates/PLAN-TEMPLATE.md work-space/plans/PLAN_SAMPLE-FEATURE.md

# 2. Edit plan (define 3-5 achievements)
#    Add Achievement Index with clear goals

# 3. Create SUBPLAN for first achievement
python LLM/scripts/generation/generate_prompt.py @SAMPLE-FEATURE --achievement 1.1 --interactive

# 4. Review SUBPLAN in work-space/plans/SAMPLE-FEATURE/subplans/

# 5. Create EXECUTION_TASK
python LLM/scripts/generation/generate_prompt.py @SAMPLE-FEATURE --achievement 1.1 --continue

# 6. Execute work following EXECUTION_TASK instructions

# 7. Request review (create APPROVED or FIX feedback file)
```
```bash
# 1. Create parallel.json for your plan
cat work-space/plans/PARALLEL-EXECUTION-AUTOMATION/parallel.json
```

2. Define achievements with dependencies:

```json
{
  "plan": "SAMPLE-FEATURE",
  "achievements": [
    {"id": "2.1", "dependencies": []},
    {"id": "2.2", "dependencies": []},
    {"id": "2.3", "dependencies": ["2.1", "2.2"]}
  ]
}
```
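For intuition, the level groups the dashboard reports fall out of a simple topological pass over this dependency list. A standalone sketch (the framework's own detection lives in `LLM/dashboard/parallel_detector.py` and may differ):

```python
def parallel_levels(achievements):
    """Group achievements into levels whose members can run concurrently."""
    remaining = {a["id"]: set(a["dependencies"]) for a in achievements}
    done, levels = set(), []
    while remaining:
        # Ready = every dependency completed in an earlier level
        ready = sorted(aid for aid, deps in remaining.items() if deps <= done)
        if not ready:
            raise ValueError("dependency cycle detected")
        levels.append(ready)
        done.update(ready)
        for aid in ready:
            del remaining[aid]
    return levels

achievements = [
    {"id": "2.1", "dependencies": []},
    {"id": "2.2", "dependencies": []},
    {"id": "2.3", "dependencies": ["2.1", "2.2"]},
]
print(parallel_levels(achievements))  # [['2.1', '2.2'], ['2.3']]
```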
```bash
# 3. Dashboard shows parallel opportunities automatically
python LLM/main.py --plan SAMPLE-FEATURE

# 4. Execute parallel group
# Select action 2 (Execute Parallel Group)
```

- LLM-METHODOLOGY.md - Start here (complete reference)
- LLM/README.md - LLM folder structure and contents
- LLM/guides/SUBPLAN-WORKFLOW-GUIDE.md - Core workflow (4 phases)
- LLM/QUICK-START.md - Fast-track introduction
- Core Libraries: See `LLM/core/libraries/*/README.md` in each module
- Dashboard: See `LLM/dashboard/README.md` (architecture, components)
- Scripts: See `LLM/scripts/README.md` (automation tools)
- Tests: See `LLM/tests/` (232+ tests with examples)
- LLM/guides/GRAMMAPLAN-GUIDE.md - Multi-plan coordination
- LLM/guides/NORTH-STAR-GUIDE.md - Strategic vision documents
- LLM/guides/EXECUTION-ANALYSIS-GUIDE.md - Strategic analysis
- work-space/analyses/ - 125+ real-world analyses
The project includes comprehensive test coverage:
```bash
# Run all tests
python -m pytest LLM/tests/ -v

# Run dashboard tests (232+ tests)
python -m pytest LLM/tests/dashboard/ -v

# Run script tests
python -m pytest LLM/tests/scripts/ -v

# Run specific test file
python -m pytest LLM/tests/dashboard/test_plan_dashboard.py -v

# Check coverage
python -m pytest LLM/tests/ --cov=LLM --cov-report=html
```

Test Statistics:
- Total Tests: 280+ tests
- Dashboard Tests: 232+ tests (100% pass rate)
- Script Tests: 48+ tests
- Coverage: >90% for core libraries
```bash
# 1. Check active work
cat work-space/plans/*/PLAN_*.md

# 2. Create new PLAN
cp LLM/templates/PLAN-TEMPLATE.md work-space/plans/PLAN_MY-FEATURE.md

# 3. Follow start protocol
cat LLM/protocols/IMPLEMENTATION_START_POINT.md
```

```bash
# 1. Launch dashboard to see status
python LLM/main.py

# 2. Navigate to plan, see next achievements

# 3. Generate prompts for next achievement
python LLM/scripts/generation/generate_prompt.py @MY-PLAN --achievement 2.1 --interactive
```

```bash
# 1. Follow review instructions in EXECUTION_TASK

# 2. Create feedback file
#    If approved: work-space/plans/MY-PLAN/execution/feedbacks/APPROVED_21.md
#    If fixes needed: work-space/plans/MY-PLAN/execution/feedbacks/FIX_21.md

# 3. Dashboard automatically updates status
```

```bash
# Manual archive (on-demand)
python LLM/scripts/archiving/manual_archive.py --plan MY-PLAN

# Or follow end protocol
cat LLM/protocols/IMPLEMENTATION_END_POINT.md
```

- North Stars: 4 strategic visions
- GrammaPlans: 6 coordination plans
- Active Plans: 17+ feature/project plans
- Active SUBPLANs: 30+ achievement designs
- Active EXECUTION_TASKs: 31+ work units
- Archived Work: 100+ completed documents
- Core Libraries: 13 production-ready modules
- Dashboard Components: 17 UI/logic modules
- Scripts: 50+ automation tools
- Tests: 280+ comprehensive tests
- Documentation: 100+ guides, templates, protocols
- Real-Time Updates: Auto-refresh after actions
- Parallel Detection: Automatic dependency analysis
- Health Scores: 5 component metrics (0-100)
- Multi-Instance Detection: Safe concurrent access
- Theme Support: 3 color schemes (default, dark, light)
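How the five health components combine is easiest to see with a small example. The sketch below is a hedged illustration: the component names and weights are assumptions, not the dashboard's actual formula.

```python
def health_score(components):
    """Blend five 0-100 component metrics into one 0-100 score.

    Component names and weights are illustrative assumptions only.
    """
    weights = {
        "progress": 0.30,         # achievements completed vs. planned
        "freshness": 0.20,        # how recently files were updated
        "size_compliance": 0.20,  # documents within tier size limits
        "feedback": 0.15,         # APPROVED vs. FIX ratio
        "structure": 0.15,        # required files and folders present
    }
    return round(sum(w * components.get(name, 0.0)
                     for name, w in weights.items()), 1)

print(health_score({"progress": 61, "freshness": 90, "size_compliance": 100,
                    "feedback": 80, "structure": 100}))  # 83.3
```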
Edit `LLM/dashboard/config.yaml`:

```yaml
theme: default             # default, dark, light
refresh_interval: 1        # seconds (1-60)
show_stats: true           # show quick stats section
show_parallel: true        # show parallel opportunities
auto_copy_commands: false  # auto-copy commands to clipboard
```

Or use the interactive settings menu:

```bash
python LLM/main.py
# Press 's' for settings
```

Configure in your scripts:
```python
from LLM.core.libraries.logging import setup_logging, get_logger

# Setup logging
setup_logging(
    log_level="INFO",
    log_file="my_app.log",
    format="json"  # or "colored", "compact"
)

# Get logger
logger = get_logger(__name__)
logger.info("Application started", extra={"version": "1.0"})
```

```python
from LLM.core.libraries.metrics import Counter, Histogram, MetricRegistry

# Define metrics
requests_total = Counter(
    "requests_total",
    "Total requests",
    labels=["method", "status"]
)

# Register
registry = MetricRegistry.get_instance()
registry.register(requests_total)

# Use
requests_total.inc(labels={"method": "GET", "status": "200"})

# Export (Prometheus format)
from LLM.core.libraries.metrics import export_prometheus_text
print(export_prometheus_text())
```
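The snippet above imports `Histogram` without using it; continuing the same example, latency tracking might look like this, assuming an `observe` method and label handling analogous to the Counter. Check the metrics library's README for the real interface.

```python
# Continues the metrics example above; the observe() signature is assumed
request_latency = Histogram(
    "request_latency_seconds",
    "Request latency in seconds",
    labels=["endpoint"],
)
registry.register(request_latency)
request_latency.observe(0.042, labels={"endpoint": "/plans"})
```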
```bash
# Install development dependencies
pip install -r requirements.txt
pip install pytest pytest-cov

# Run tests
python -m pytest LLM/tests/ -v

# Check linting
# (project uses consistent style, no explicit linter config)
```
- Create module in `LLM/core/libraries/YOUR_LIBRARY/`
- Add `__init__.py` with public API
- Write comprehensive tests (>90% coverage)
- Document with examples
- Update `LLM/core/libraries/README.md`
- Review `LLM/dashboard/README.md` for architecture
- Add feature to appropriate module
- Write tests in `LLM/tests/dashboard/`
- Update `LLM/dashboard/metrics.py` for tracking
- Document in `LLM/docs/`
- Methodology Questions: See LLM-METHODOLOGY.md
- Dashboard Help: Press `h` in the dashboard or see LLM/dashboard/README.md
- Script Usage: See LLM/scripts/README.md
- Templates: See LLM/templates/PROMPTS.md
Dashboard doesn't start:
```bash
# Check dependencies
pip install -r requirements.txt

# Check Python version
python --version  # Should be 3.8+

# Check error logs
python LLM/main.py 2>&1 | tee debug.log
```

Tests failing:

```bash
# Run specific test for details
python -m pytest LLM/tests/path/to/test.py -v -s

# Check imports
python -c "from LLM.dashboard import plan_dashboard"
```

State not updating:

```bash
# Manual refresh in dashboard
# Press 'r' to refresh state

# Or clear lock file
rm LLM/dashboard/.dashboard.lock
```

[Specify your license here]
Built with:
- Rich - Beautiful terminal UI
- Pydantic - Data validation
- PyMongo - MongoDB driver
- OpenAI Python - LLM integration
- ✅ 5-tier hierarchy (added NORTH_STAR, GRAMMAPLAN)
- ✅ Interactive dashboard with real-time updates
- ✅ Parallel execution support
- ✅ Production-ready core libraries
- ✅ Comprehensive testing (280+ tests)
- 4-tier hierarchy (PLAN → SUBPLAN → EXECUTION)
- Command-line tools
- Basic libraries
Last Updated: November 15, 2025
Version: 2.0
Status: Production Ready
For detailed version changes, see LLM/METHODOLOGY-EVOLUTION-v2.0.md