
LLM Methodology: Structured Development Framework

Version: 2.0
Status: πŸš€ Production Ready
Last Updated: November 15, 2025

A comprehensive framework for structured, LLM-assisted software development with real-time progress tracking, parallel execution support, and production-ready libraries.


🎯 Overview

The LLM Methodology is a battle-tested framework for managing complex software development with AI assistance. It provides:

  • 5-Tier Planning Hierarchy: From strategic vision (NORTH_STAR) to executable tasks (EXECUTION_TASK)
  • Real-Time Dashboard: Rich terminal UI for tracking progress across multiple plans
  • Production Libraries: Logging, metrics, error handling, caching, and more
  • Parallel Execution: Intelligent dependency analysis and batch execution
  • Automated Workflows: Scripts for generation, validation, and archiving

Perfect for teams building complex systems with LLM assistance, or solo developers managing ambitious projects.


✨ Key Features

πŸ“Š Interactive Dashboard

python LLM/main.py

Rich terminal dashboard showing:

  • Active Plans: Visual progress tracking (11/18 achievements, 61%)
  • Parallel Opportunities: Automatic detection with time savings (67% faster)
  • Real-Time Updates: Filesystem monitoring with auto-refresh
  • Quick Actions: One-key shortcuts for common operations
  • Health Scores: Plan health metrics (0-100) with component breakdown

πŸ“ 5-Tier Planning Hierarchy

NORTH_STAR (800-2,000 lines)    Strategic vision, principles
    ↓
GRAMMAPLAN (600-1,500 lines)    Coordinates 4-6 PLANs
    ↓
PLAN (300-900 lines)            Feature/project scope
    ↓
SUBPLAN (200-600 lines)         Single achievement design
    ↓
EXECUTION_TASK (<200 lines)     Executable work unit

Each tier has:

  • Size Limits: Enforced via validation scripts (sketched below)
  • Templates: Copy-paste ready structures
  • Protocols: Step-by-step workflows
  • Guides: Comprehensive how-tos
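These size limits are mechanically checkable; the repository ships its own checker (see check_plan_size.py under Project Structure below). As a conceptual sketch only, using the limits listed above and not that script's actual code:

# Conceptual sketch of tier size validation -- illustration only,
# not the repository's check_plan_size.py.
from pathlib import Path

TIER_LIMITS = {  # (min_lines, max_lines) per tier
    "NORTH_STAR": (800, 2000),
    "GRAMMAPLAN": (600, 1500),
    "PLAN": (300, 900),
    "SUBPLAN": (200, 600),
    "EXECUTION_TASK": (0, 199),  # "<200 lines"
}

def check_size(path: str, tier: str) -> bool:
    lo, hi = TIER_LIMITS[tier]
    lines = len(Path(path).read_text().splitlines())
    ok = lo <= lines <= hi
    status = "OK" if ok else "OUT OF RANGE"
    print(f"{path}: {lines} lines ({tier} allows {lo}-{hi}) -> {status}")
    return ok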

πŸ”„ Parallel Execution

Automatically detect and execute independent achievements in parallel:

# Dashboard shows parallel opportunities
Level 0 Group (3 achievements):
  β”œβ”€ 3.1: Performance Optimization
  β”œβ”€ 3.2: Error Handling Enhancement
  └─ 3.3: Documentation Update
  
  Time: 3.5h parallel vs 10.5h sequential
  Savings: 7.0h (67%)

Batch creation with dependency tracking:

python LLM/scripts/generation/batch_subplan.py --plan-path work-space/plans/MY-PLAN --level 0
python LLM/scripts/generation/batch_execution.py --plan-path work-space/plans/MY-PLAN --level 0

πŸ—οΈ Production-Ready Libraries

Core Libraries (LLM/core/libraries/):

  • Logging: Structured JSON logging with context, Loki integration
  • Metrics: Prometheus-compatible counters, histograms, gauges
  • Error Handling: Structured exceptions with context and suggestions
  • Retry: Exponential backoff with configurable policies (sketched below)
  • Caching: LRU cache with TTL and mtime-based invalidation
  • Validation: Rule-based validation with clear error messages
  • Serialization: Pydantic-based serialization with custom encoders
  • Rate Limiting: Token bucket rate limiter
  • Concurrency: Async execution with TPM tracking
  • Database: MongoDB operations with error handling

All libraries are:

  • βœ… Fully tested (>90% coverage)
  • βœ… Type-hinted
  • βœ… Documented with examples
  • βœ… Production-proven
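For example, the retry library implements exponential backoff. A generic sketch of that pattern follows; this is an illustration of the technique, not the library's actual API (see LLM/core/libraries/retry/ for that):

# Generic exponential-backoff retry decorator -- illustration only.
import random
import time
from functools import wraps

def retry(max_attempts=3, base_delay=1.0, backoff=2.0):
    """Retry a function, doubling the delay after each failure."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            delay = base_delay
            for attempt in range(1, max_attempts + 1):
                try:
                    return fn(*args, **kwargs)
                except Exception:
                    if attempt == max_attempts:
                        raise  # exhausted: surface the last error
                    time.sleep(delay + random.uniform(0, delay * 0.1))  # jitter
                    delay *= backoff
        return wrapper
    return decorator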

πŸš€ Quick Start

Installation

# Clone repository
git clone <repository-url>
cd context_manager

# Install dependencies
pip install -r requirements.txt

Dependencies:

  • Python 3.8+
  • pydantic>=2.7.0 (data validation)
  • pymongo>=4.7.0 (MongoDB)
  • openai>=1.42.0 (LLM integration)
  • pyyaml>=6.0 (configuration)
  • rich>=13.0.0 (terminal UI)

Launch Dashboard

# Main dashboard (all plans)
python LLM/main.py

# Specific plan
python LLM/main.py --plan 1

Create Your First Plan

# 1. Read the methodology
cat LLM-METHODOLOGY.md

# 2. Use the template
cp LLM/templates/PLAN-TEMPLATE.md work-space/plans/PLAN_MY-FEATURE.md

# 3. Follow the start protocol
cat LLM/protocols/IMPLEMENTATION_START_POINT.md

Generate Prompts

# Interactive mode (auto-detects workflow state)
python LLM/scripts/generation/generate_prompt.py @MY-PLAN

# Create SUBPLAN for achievement 1.2
python LLM/scripts/generation/generate_prompt.py @MY-PLAN --achievement 1.2 --interactive

# Continue execution
python LLM/scripts/generation/generate_prompt.py continue @EXECUTION_TASK_MY-PLAN_12_01.md

πŸ“ Project Structure

context_manager/
β”œβ”€β”€ README.md                        # This file
β”œβ”€β”€ requirements.txt                 # Python dependencies
β”œβ”€β”€ LLM-METHODOLOGY.md              # Complete methodology reference
β”‚
β”œβ”€β”€ LLM/                            # Core framework
β”‚   β”œβ”€β”€ main.py                     # Dashboard entry point
β”‚   β”œβ”€β”€ README.md                   # LLM folder documentation
β”‚   β”‚
β”‚   β”œβ”€β”€ core/                       # Production libraries
β”‚   β”‚   └── libraries/              # Reusable libraries
β”‚   β”‚       β”œβ”€β”€ logging/            # Structured logging
β”‚   β”‚       β”œβ”€β”€ metrics/            # Prometheus metrics
β”‚   β”‚       β”œβ”€β”€ error_handling/     # Exception framework
β”‚   β”‚       β”œβ”€β”€ retry/              # Retry policies
β”‚   β”‚       β”œβ”€β”€ caching/            # LRU cache
β”‚   β”‚       β”œβ”€β”€ validation/         # Rule-based validation
β”‚   β”‚       β”œβ”€β”€ serialization/      # JSON serialization
β”‚   β”‚       β”œβ”€β”€ rate_limiting/      # Rate limiter
β”‚   β”‚       β”œβ”€β”€ concurrency/        # Async execution
β”‚   β”‚       └── database/           # MongoDB operations
β”‚   β”‚
β”‚   β”œβ”€β”€ dashboard/                  # Interactive dashboard
β”‚   β”‚   β”œβ”€β”€ main_dashboard.py      # Main dashboard UI
β”‚   β”‚   β”œβ”€β”€ plan_dashboard.py      # Plan-specific UI
β”‚   β”‚   β”œβ”€β”€ state_detector.py      # State analysis
β”‚   β”‚   β”œβ”€β”€ parallel_detector.py   # Parallel detection
β”‚   β”‚   β”œβ”€β”€ workflow_executor.py   # Action execution
β”‚   β”‚   └── metrics.py             # Dashboard metrics
β”‚   β”‚
β”‚   β”œβ”€β”€ scripts/                    # Automation tools
β”‚   β”‚   β”œβ”€β”€ generation/            # Prompt generation
β”‚   β”‚   β”‚   β”œβ”€β”€ generate_prompt.py         # Main orchestrator
β”‚   β”‚   β”‚   β”œβ”€β”€ batch_subplan.py          # Batch SUBPLAN creation
β”‚   β”‚   β”‚   └── batch_execution.py        # Batch EXECUTION creation
β”‚   β”‚   β”œβ”€β”€ validation/            # Size & structure validation
β”‚   β”‚   β”‚   β”œβ”€β”€ check_plan_size.py
β”‚   β”‚   β”‚   β”œβ”€β”€ validate_achievement_completion.py
β”‚   β”‚   β”‚   └── validate_subplan_executions.py
β”‚   β”‚   └── archiving/             # Archive completed work
β”‚   β”‚       └── manual_archive.py
β”‚   β”‚
β”‚   β”œβ”€β”€ templates/                  # Copy-paste templates
β”‚   β”‚   β”œβ”€β”€ NORTH_STAR-TEMPLATE.md
β”‚   β”‚   β”œβ”€β”€ GRAMMAPLAN-TEMPLATE.md
β”‚   β”‚   β”œβ”€β”€ PLAN-TEMPLATE.md
β”‚   β”‚   β”œβ”€β”€ SUBPLAN-TEMPLATE.md
β”‚   β”‚   β”œβ”€β”€ EXECUTION_TASK-TEMPLATE.md
β”‚   β”‚   └── PROMPTS.md             # Ready-to-use prompts
β”‚   β”‚
β”‚   β”œβ”€β”€ guides/                     # How-to guides
β”‚   β”‚   β”œβ”€β”€ NORTH-STAR-GUIDE.md
β”‚   β”‚   β”œβ”€β”€ GRAMMAPLAN-GUIDE.md
β”‚   β”‚   β”œβ”€β”€ SUBPLAN-WORKFLOW-GUIDE.md
β”‚   β”‚   └── FOCUS-RULES.md
β”‚   β”‚
β”‚   β”œβ”€β”€ protocols/                  # Workflow protocols
β”‚   β”‚   β”œβ”€β”€ IMPLEMENTATION_START_POINT.md
β”‚   β”‚   β”œβ”€β”€ CREATE_SUBPLAN.md
β”‚   β”‚   β”œβ”€β”€ CREATE_EXECUTION.md
β”‚   β”‚   β”œβ”€β”€ IMPLEMENTATION_RESUME.md
β”‚   β”‚   └── IMPLEMENTATION_END_POINT.md
β”‚   β”‚
β”‚   β”œβ”€β”€ tests/                      # Comprehensive tests
β”‚   β”‚   β”œβ”€β”€ dashboard/             # Dashboard tests (232+ tests)
β”‚   β”‚   └── scripts/               # Script tests
β”‚   β”‚
β”‚   └── docs/                       # Additional documentation
β”‚       β”œβ”€β”€ ERROR_HANDLING_PATTERNS.md
β”‚       β”œβ”€β”€ PERFORMANCE_OPTIMIZATION_GUIDE.md
β”‚       └── FEEDBACK_SYSTEM_GUIDE.md
β”‚
└── work-space/                     # Active work directory
    β”œβ”€β”€ north-stars/               # Strategic vision documents
    β”œβ”€β”€ grammaplans/               # Multi-plan coordination
    β”œβ”€β”€ plans/                     # Active plans (17+)
    β”‚   └── PLAN_NAME/
    β”‚       β”œβ”€β”€ PLAN_NAME.md
    β”‚       β”œβ”€β”€ subplans/         # Achievement designs
    β”‚       β”œβ”€β”€ execution/        # Execution tasks
    β”‚       β”‚   └── feedbacks/    # APPROVED/FIX files
    β”‚       └── parallel.json     # Parallel execution config
    β”œβ”€β”€ analyses/                  # Strategic analyses (125+)
    β”œβ”€β”€ knowledge/                 # Learnings & patterns
    └── archive/                   # Completed work

πŸŽ“ Learning Path

1. Understand the Methodology (30 min)

Start with the core concepts:

# Read the methodology overview
cat LLM-METHODOLOGY.md

# Understand the 4-phase workflow
cat LLM/guides/SUBPLAN-WORKFLOW-GUIDE.md

Key Concepts:

  • 5-tier hierarchy (NORTH_STAR β†’ EXECUTION_TASK)
  • 4-phase workflow (Design β†’ Plan β†’ Execute β†’ Synthesize)
  • Context budgets (what each agent reads)
  • Filesystem-first tracking (no manual updates)

2. Explore the Dashboard (15 min)

# Launch and explore
python LLM/main.py

# Try different views
# - Main dashboard (all plans)
# - Plan dashboard (detailed view)
# - Actions menu (execute, create, review)

Dashboard Features:

  • Press 1-6 to select actions
  • Press r to refresh state
  • Press b to go back
  • Press s for settings (themes, auto-copy)

3. Create a Sample Plan (1 hour)

Follow the complete workflow:

# 1. Copy template
cp LLM/templates/PLAN-TEMPLATE.md work-space/plans/PLAN_SAMPLE-FEATURE.md

# 2. Edit plan (define 3-5 achievements)
# Add Achievement Index with clear goals

# 3. Create SUBPLAN for first achievement
python LLM/scripts/generation/generate_prompt.py @SAMPLE-FEATURE --achievement 1.1 --interactive

# 4. Review SUBPLAN in work-space/plans/SAMPLE-FEATURE/subplans/

# 5. Create EXECUTION_TASK
python LLM/scripts/generation/generate_prompt.py @SAMPLE-FEATURE --achievement 1.1 --continue

# 6. Execute work following EXECUTION_TASK instructions

# 7. Request review (create APPROVED or FIX feedback file)

4. Try Parallel Execution (30 min)

# 1. Review an existing parallel.json as a reference, then create one for your plan
cat work-space/plans/PARALLEL-EXECUTION-AUTOMATION/parallel.json

# 2. Define achievements with dependencies
{
  "plan": "SAMPLE-FEATURE",
  "achievements": [
    {"id": "2.1", "dependencies": []},
    {"id": "2.2", "dependencies": []},
    {"id": "2.3", "dependencies": ["2.1", "2.2"]}
  ]
}

# 3. Dashboard shows parallel opportunities automatically
python LLM/main.py --plan SAMPLE-FEATURE

# 4. Execute parallel group
# Select action 2 (Execute Parallel Group)
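The "Level N" grouping is a topological layering of the dependency graph: an achievement's level is one more than the highest level among its dependencies, and achievements on the same level can run in parallel. A minimal sketch of that computation over a parallel.json like the one above (not the dashboard's parallel_detector.py):

# Derive parallel levels from a parallel.json -- conceptual sketch only.
import json

def compute_levels(config: dict) -> dict:
    """Level = 0 with no dependencies, else 1 + max level of dependencies."""
    deps = {a["id"]: a["dependencies"] for a in config["achievements"]}
    levels: dict = {}
    def level(aid: str) -> int:
        if aid not in levels:  # assumes an acyclic dependency graph
            levels[aid] = 0 if not deps[aid] else 1 + max(level(d) for d in deps[aid])
        return levels[aid]
    for aid in deps:
        level(aid)
    return levels

with open("work-space/plans/SAMPLE-FEATURE/parallel.json") as f:
    print(compute_levels(json.load(f)))
# For the config above: {'2.1': 0, '2.2': 0, '2.3': 1} --
# 2.1 and 2.2 form a Level 0 group that runs in parallel before 2.3.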

πŸ“š Documentation

For New Users

  1. LLM-METHODOLOGY.md - Start here (complete reference)
  2. LLM/README.md - LLM folder structure and contents
  3. LLM/guides/SUBPLAN-WORKFLOW-GUIDE.md - Core workflow (4 phases)
  4. LLM/QUICK-START.md - Fast-track introduction

For Developers

  • Core Libraries: See LLM/core/libraries/*/README.md in each module
  • Dashboard: See LLM/dashboard/README.md (architecture, components)
  • Scripts: See LLM/scripts/README.md (automation tools)
  • Tests: See LLM/tests/ (232+ tests with examples)

For Advanced Users

  • LLM/docs/ERROR_HANDLING_PATTERNS.md - Exception and error-handling patterns
  • LLM/docs/PERFORMANCE_OPTIMIZATION_GUIDE.md - Performance tuning
  • LLM/docs/FEEDBACK_SYSTEM_GUIDE.md - The review/feedback system

πŸ§ͺ Testing

The project includes comprehensive test coverage:

# Run all tests
python -m pytest LLM/tests/ -v

# Run dashboard tests (232+ tests)
python -m pytest LLM/tests/dashboard/ -v

# Run script tests
python -m pytest LLM/tests/scripts/ -v

# Run specific test file
python -m pytest LLM/tests/dashboard/test_plan_dashboard.py -v

# Check coverage
python -m pytest LLM/tests/ --cov=LLM --cov-report=html

Test Statistics:

  • Total Tests: 280+ tests
  • Dashboard Tests: 232+ tests (100% pass rate)
  • Script Tests: 48+ tests
  • Coverage: >90% for core libraries

πŸ› οΈ Common Workflows

Starting New Work

# 1. Check active work
cat work-space/plans/*/PLAN_*.md

# 2. Create new PLAN
cp LLM/templates/PLAN-TEMPLATE.md work-space/plans/PLAN_MY-FEATURE.md

# 3. Follow start protocol
cat LLM/protocols/IMPLEMENTATION_START_POINT.md

Continuing Existing Work

# 1. Launch dashboard to see status
python LLM/main.py

# 2. Navigate to plan, see next achievements

# 3. Generate prompts for next achievement
python LLM/scripts/generation/generate_prompt.py @MY-PLAN --achievement 2.1 --interactive

Reviewing Achievement

# 1. Follow review instructions in EXECUTION_TASK

# 2. Create feedback file
# If approved: work-space/plans/MY-PLAN/execution/feedbacks/APPROVED_21.md
# If fixes needed: work-space/plans/MY-PLAN/execution/feedbacks/FIX_21.md

# 3. Dashboard automatically updates status
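This works because tracking is filesystem-first: review status is inferred from which feedback files exist, with no manual bookkeeping. A minimal illustration of the idea (a hypothetical helper, not the dashboard's state_detector.py):

# Infer review status from feedback files -- hypothetical helper.
from pathlib import Path

def review_status(plan_dir: str, achievement: str) -> str:
    """Achievement '2.1' maps to feedback files APPROVED_21.md / FIX_21.md."""
    fid = achievement.replace(".", "")
    feedbacks = Path(plan_dir) / "execution" / "feedbacks"
    if (feedbacks / f"APPROVED_{fid}.md").exists():
        return "approved"
    if (feedbacks / f"FIX_{fid}.md").exists():
        return "needs fixes"
    return "awaiting review"

print(review_status("work-space/plans/MY-PLAN", "2.1"))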

Archiving Completed Work

# Manual archive (on-demand)
python LLM/scripts/archiving/manual_archive.py --plan MY-PLAN

# Or follow end protocol
cat LLM/protocols/IMPLEMENTATION_END_POINT.md

πŸ“Š Statistics & Metrics

Active Work

  • North Stars: 4 strategic visions
  • GrammaPlans: 6 coordination plans
  • Active Plans: 17+ feature/project plans
  • Active SUBPLANs: 30+ achievement designs
  • Active EXECUTION_TASKs: 31+ work units
  • Archived Work: 100+ completed documents

Code Metrics

  • Core Libraries: 13 production-ready modules
  • Dashboard Components: 17 UI/logic modules
  • Scripts: 50+ automation tools
  • Tests: 280+ comprehensive tests
  • Documentation: 100+ guides, templates, protocols

Dashboard Features

  • Real-Time Updates: Auto-refresh after actions
  • Parallel Detection: Automatic dependency analysis
  • Health Scores: 5 component metrics (0-100)
  • Multi-Instance Detection: Safe concurrent access
  • Theme Support: 3 color schemes (default, dark, light)

πŸ”§ Configuration

Dashboard Settings

Edit LLM/dashboard/config.yaml:

theme: default              # default, dark, light
refresh_interval: 1         # seconds (1-60)
show_stats: true           # show quick stats section
show_parallel: true        # show parallel opportunities
auto_copy_commands: false  # auto-copy commands to clipboard

Or use interactive settings menu:

python LLM/main.py
# Press 's' for settings
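Because the settings file is plain YAML, your own scripts can read it with PyYAML (already a dependency). A sketch using the defaults shown above; the dashboard's real loader may differ:

# Read dashboard settings with PyYAML -- sketch, not the actual loader.
import yaml

DEFAULTS = {
    "theme": "default",
    "refresh_interval": 1,
    "show_stats": True,
    "show_parallel": True,
    "auto_copy_commands": False,
}

def load_config(path: str = "LLM/dashboard/config.yaml") -> dict:
    with open(path) as f:
        user = yaml.safe_load(f) or {}
    return {**DEFAULTS, **user}  # user settings override defaults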

Logging

Configure in your scripts:

from LLM.core.libraries.logging import setup_logging, get_logger

# Setup logging
setup_logging(
    log_level="INFO",
    log_file="my_app.log",
    format="json"  # or "colored", "compact"
)

# Get logger
logger = get_logger(__name__)
logger.info("Application started", extra={"version": "1.0"})

Metrics

from LLM.core.libraries.metrics import Counter, Histogram, MetricRegistry

# Define metrics
requests_total = Counter(
    "requests_total",
    "Total requests",
    labels=["method", "status"]
)

# Register
registry = MetricRegistry.get_instance()
registry.register(requests_total)

# Use
requests_total.inc(labels={"method": "GET", "status": "200"})

# Export (Prometheus format)
from LLM.core.libraries.metrics import export_prometheus_text
print(export_prometheus_text())

🀝 Contributing

Development Setup

# Install development dependencies
pip install -r requirements.txt
pip install pytest pytest-cov

# Run tests
python -m pytest LLM/tests/ -v

# Check linting
# (project uses consistent style, no explicit linter config)

Adding New Libraries

  1. Create module in LLM/core/libraries/YOUR_LIBRARY/
  2. Add __init__.py with public API
  3. Write comprehensive tests (>90% coverage)
  4. Document with examples
  5. Update LLM/core/libraries/README.md

Adding Dashboard Features

  1. Review LLM/dashboard/README.md for architecture
  2. Add feature to appropriate module
  3. Write tests in LLM/tests/dashboard/
  4. Update LLM/dashboard/metrics.py for tracking
  5. Document in LLM/docs/

πŸ“ž Support & Resources

Need Help?

Start with the πŸ“š Documentation section above; for runtime issues, work through the troubleshooting steps below.
Troubleshooting

Dashboard doesn't start:

# Check dependencies
pip install -r requirements.txt

# Check Python version
python --version  # Should be 3.8+

# Check error logs
python LLM/main.py 2>&1 | tee debug.log

Tests failing:

# Run specific test for details
python -m pytest LLM/tests/path/to/test.py -v -s

# Check imports
python -c "from LLM.dashboard import plan_dashboard"

State not updating:

# Manual refresh in dashboard
# Press 'r' to refresh state

# Or clear lock file
rm LLM/dashboard/.dashboard.lock

πŸ“ License

[Specify your license here]


πŸ™ Acknowledgments

Built with:

  • Rich (terminal UI)
  • Pydantic (data validation)
  • PyMongo (MongoDB driver)
  • OpenAI (LLM integration)
  • PyYAML (configuration)

πŸ“ˆ Version History

v2.0 (November 2025) - Current

  • βœ… 5-tier hierarchy (added NORTH_STAR, GRAMMAPLAN)
  • βœ… Interactive dashboard with real-time updates
  • βœ… Parallel execution support
  • βœ… Production-ready core libraries
  • βœ… Comprehensive testing (280+ tests)

v1.0 (Earlier)

  • 4-tier hierarchy (PLAN β†’ SUBPLAN β†’ EXECUTION)
  • Command-line tools
  • Basic libraries


For detailed version changes, see LLM/METHODOLOGY-EVOLUTION-v2.0.md
