An intelligent workflow system that generates structured implementation plans from natural language requests using AI and repository context analysis.
- AI-Powered Planning: Converts natural language requests into detailed implementation plans
- Structured Output: Generates JSON plans with phases, file targets, and validation steps
- Repository Integration: Automatically fetches and analyzes repository context via Gitea
- Cross-Platform: Works on Windows, Linux, and macOS
- Easy Setup: Simple configuration and one-command execution
- MCP Integration: Uses Model Context Protocol for efficient repository access
- Smart Caching: File-based caching for improved performance
User Input → Gitea-MCP → Context Builder → AI Model → Implementation Plan
- workflow/main.py: Main orchestrator and entry point
- workflow/trigger.py: Input parsing and validation
- workflow/gitea_connector.py: Gitea repository context fetching via MCP
- workflow/context_builder.py: Context combination and analysis
- workflow/model_interface.py: AI model interface and plan generation
- mcp/server.py: Gitea MCP server implementation
- mcp/client.py: Simplified MCP client for repository operations
# Clone and navigate to the project
cd workflow
# Install dependencies and create secure config templates
python setup.py
# Configure your credentials (IMPORTANT: Use local config files)
# Linux/Mac:
cp config/config.sh config/local_config.sh
nano config/local_config.sh # Edit with your actual credentials
source config/local_config.sh
# Windows:
copy config\config.bat config\local_config.bat
notepad config\local_config.bat # Edit with your actual credentials
config\local_config.bat

Security Note: Never commit local_config.* files to version control. They contain your sensitive credentials and are automatically excluded by .gitignore.
cd workflow
python main.py

The system will guide you through an interactive workflow:
- Select Project: Choose from available projects or enter custom
- Select Branch: Choose from common branches or enter custom
- Enter Query: Describe what you want to implement
- Confirm & Execute: Review and run the workflow
Select a project:
1. Forge/maestro
2. Test/Oracle
3. Enter custom project...
Select option (1-3): 1
Selected project: Forge/maestro

Select a branch:
1. main
2. master
3. develop
4. Enter custom branch...
Select option (1-4): 2
Selected branch: master

Your request: Add dark mode toggle to the UI
Proceed with workflow? (y/N): y
IMPORTANT: Always use local configuration files for your credentials:

- Create local config files:

  # Linux/Mac
  cp config/config.sh config/local_config.sh
  # Windows
  copy config\config.bat config\local_config.bat

- Edit with your actual credentials:

  - GITEA_URL: Your Gitea instance URL (e.g., https://gitea.example.com)
  - GITEA_ACCESS_TOKEN: Personal access token from Gitea Settings > Applications
  - GITMCP_SERVER_URL: Gitea-MCP server endpoint
  - OPENAI_API_BASE_URL: OpenAI-compatible API endpoint

- Alternative: Use a .env file:

  cp config/env.example .env
  nano .env  # Edit with your credentials
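For reference, a filled-in .env might look like the following. The variable names match the settings above; the values are placeholders you must replace (the MCP server port shown matches the default used in the Troubleshooting section below):

GITEA_URL=https://gitea.example.com
GITEA_ACCESS_TOKEN=your-personal-access-token
GITMCP_SERVER_URL=http://localhost:8080
OPENAI_API_BASE_URL=http://localhost:8000/v1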
Additional security practices:

- Never commit local_config.* or .env files
- Never edit the template files (config.sh, config.bat) with real credentials
- Use HTTPS URLs when possible
- Generate tokens with minimal required permissions
The system generates:
- plan.json: Structured implementation plan
- Phase-based breakdown: Clear steps with file targets and validation
- Context-aware suggestions: Based on actual repository analysis
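The exact plan schema is determined by the model interface, so the following is only an illustrative sketch of the phase-based structure described above (field names and content are assumptions, not the project's fixed schema):

{
  "request": "Add dark mode toggle to the UI",
  "phases": [
    {
      "name": "Introduce theme state",
      "file_targets": ["src/theme.js"],
      "steps": ["Add a theme setting with light/dark values"],
      "validation": "Unit tests for theme switching pass"
    },
    {
      "name": "Wire up the toggle",
      "file_targets": ["src/components/Header.jsx"],
      "steps": ["Render a toggle control bound to the theme setting"],
      "validation": "Manual check: toggling flips the UI theme"
    }
  ]
}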
- Python 3.8+ (3.11+ recommended)
- Git (recommended for repository operations)
- Access to Gitea instance (for repository context)
- OpenAI-compatible LLM server (for plan generation)
requests>=2.26.0 # HTTP requests for API interactions
python-dotenv>=0.19.0 # Environment variable management
pydantic>=1.8.0 # JSON handling and validation
GitPython>=3.1.0 # Git operations (optional)
langchain>=0.0.267 # LangChain framework
langchain-openai>=0.0.1 # OpenAI integration
langchain-core>=0.0.1 # LangChain core components
langchain-community # Community integrations (VLLMOpenAI)
mcp>=0.0.1 # Base MCP package
langchain-mcp-adapters>=0.0.1 # LangChain MCP integration
python-multipart>=0.0.6 # Multipart request support
aiohttp>=3.8.0 # Async HTTP server (for MCP server)
asyncio                     # Async programming support (part of the standard library in Python 3.8+; no separate install needed)
pytest>=6.0.0 # Testing framework
pytest-cov>=2.0.0 # Coverage reporting
black>=22.0.0 # Code formatting
flake8>=4.0.0 # Code linting
mypy>=0.900 # Type checking
memory-profiler # Memory usage profiling
psutil # System monitoring
# Install all dependencies
pip install -r requirements.txt
# Or use the setup script
python setup.py

# Core dependencies
pip install requests python-dotenv pydantic GitPython
# AI/LLM components
pip install langchain langchain-openai langchain-core langchain-community
# MCP components
pip install mcp langchain-mcp-adapters python-multipart
# Server components
pip install aiohttp

# Install development dependencies
pip install pytest pytest-cov black flake8 mypy memory-profiler
# Or install from dev requirements
pip install -r requirements-dev.txt  # If available

- Gitea Server: Repository hosting and API access
- OpenAI-compatible LLM: For plan generation (e.g., OpenAI API, local models via vLLM, Ollama)
- Redis/Memcached: For advanced caching (future enhancement)
- PostgreSQL/MySQL: For persistent storage (future enhancement)
workflow/
├── workflow/               # Core workflow components
│   ├── __init__.py
│   ├── main.py             # Main entry point
│   ├── trigger.py          # User input handling
│   ├── gitea_connector.py  # Repository context fetching
│   ├── context_builder.py  # Context combination
│   └── model_interface.py  # AI model interface
├── mcp/                    # MCP server and client
│   ├── __init__.py
│   ├── server.py           # Gitea MCP server
│   └── client.py           # Simplified MCP client
├── utils/                  # Utility modules
│   ├── __init__.py
│   ├── banner.py           # UI utilities
│   ├── context_optimizer.py
│   └── file_cache.py       # Caching system
├── config/                 # Configuration files
│   ├── config.sh           # Linux/Mac config
│   └── config.bat          # Windows config
├── docs/                   # Documentation
│   ├── ARCHITECTURE.md     # System architecture
│   └── DEVELOPMENT.md      # Development guide
├── examples/               # Example files
├── tests/                  # Test suite
├── .github/                # GitHub workflows
├── requirements.txt        # Dependencies
├── requirements-dev.txt    # Development dependencies
├── setup.py                # Package setup
├── .gitignore              # Git ignore rules
├── LICENSE                 # MIT License
└── README.md               # This file
# Start the MCP server in background
python mcp/server.py &
# Run workflow with existing server
cd workflow
python main.py

from workflow import WorkflowTrigger, GiteaMCPConnector, ContextBuilder, OpenAIModelInterface
# Initialize components
trigger = WorkflowTrigger()
connector = GiteaMCPConnector()
builder = ContextBuilder()
model = OpenAIModelInterface()
# Get user input
user_input = trigger.submit_issue("Forge/maestro,main,Add dark mode toggle")
# Fetch repository context
repo_context = connector.get_project_repository_context("Forge/maestro", "main")
# Build combined context
combined_context = builder.build_combined_context(user_input, repo_context)
# Generate plan
plan = model.generate_llm_template_and_send(combined_context)
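If you want to persist the generated plan, it can be written out directly; this assumes the returned plan is a JSON-serializable structure (such as a dict), consistent with the plan.json output described above:

import json

# Save the generated plan in the same format the interactive workflow produces
with open("plan.json", "w", encoding="utf-8") as f:
    json.dump(plan, f, indent=2)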
# Create custom config
cp config/config.sh my_config.sh
# Edit my_config.sh with your settings
source my_config.sh
# Run with custom config
cd workflow
python main.py

Common issues:

- MCP Server Connection Failed

  # Check if server is running
  curl http://localhost:8080/version
  # Start server manually
  python mcp/server.py

- Gitea Authentication Failed

  # Verify token in config
  echo $GITEA_ACCESS_TOKEN
  # Test Gitea API access
  curl -H "Authorization: token $GITEA_ACCESS_TOKEN" $GITEA_URL/api/v1/user

- AI Model Connection Failed

  # Test model server
  curl $OPENAI_API_BASE_URL/models
  # Check model availability
  curl -X POST $OPENAI_API_BASE_URL/chat/completions \
    -H "Content-Type: application/json" \
    -d '{"model":"your-model","messages":[{"role":"user","content":"test"}]}'
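Before digging deeper, a quick sanity check like the following can confirm your configuration. This is an illustrative script: it uses only the environment variables documented above and Gitea's standard /api/v1/user endpoint:

import os
import requests

# Verify that every documented environment variable is set
required = ["GITEA_URL", "GITEA_ACCESS_TOKEN", "GITMCP_SERVER_URL", "OPENAI_API_BASE_URL"]
missing = [name for name in required if not os.environ.get(name)]
if missing:
    raise SystemExit(f"Missing environment variables: {', '.join(missing)}")

# Confirm the Gitea token authenticates against the standard user endpoint
resp = requests.get(
    f"{os.environ['GITEA_URL']}/api/v1/user",
    headers={"Authorization": f"token {os.environ['GITEA_ACCESS_TOKEN']}"},
    timeout=10,
)
resp.raise_for_status()
print(f"Authenticated to Gitea as {resp.json().get('login')}")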
# Enable debug logging
export PYTHONPATH=.
cd workflow
python -c "
import logging
logging.basicConfig(level=logging.DEBUG)
from main import main
main()
"- File Caching: Reduces API calls by 80-90% for repeated requests
- Context Optimization: Handles repositories up to 10,000 files efficiently
- Async Operations: MCP server supports concurrent requests
- Memory Efficient: Streaming file processing for large repositories
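The project's utils/file_cache.py is not documented here, so as a rough illustration only (the class and method names below are assumptions, not the actual API), a file-based cache for repository context can be as simple as hashing the request key to a JSON file on disk:

import hashlib
import json
from pathlib import Path

class FileCache:
    """Minimal file-based cache: one JSON file per hashed key."""

    def __init__(self, cache_dir=".cache"):
        self.cache_dir = Path(cache_dir)
        self.cache_dir.mkdir(exist_ok=True)

    def _path(self, key: str) -> Path:
        digest = hashlib.sha256(key.encode("utf-8")).hexdigest()
        return self.cache_dir / f"{digest}.json"

    def get(self, key):
        path = self._path(key)
        return json.loads(path.read_text()) if path.exists() else None

    def set(self, key, value):
        self._path(key).write_text(json.dumps(value))

# Usage: cache repository context per project/branch so repeated runs skip the API
cache = FileCache()
key = "Forge/maestro@master"
context = cache.get(key)
if context is None:
    context = {"files": []}  # placeholder for a real fetch via the MCP client
    cache.set(key, context)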
This project follows security best practices:
- Credential Protection: All sensitive data lives in local config files (excluded from version control)
- Input Validation: Sanitized user inputs and API responses
- HTTPS Support: SSL/TLS validation for secure connections
- Secure File Permissions: Restricted access to configuration files
- No Hardcoded Secrets: No credentials committed to the repository
- Minimal Permissions: API tokens require only necessary access rights
- Regular Updates: Dependencies monitored for security vulnerabilities
- Automatic .gitignore rules for sensitive files
- Cross-platform secure file permissions (see the sketch after this list)
- Environment variable validation
- SSL certificate verification
- Request timeout protection
- File size limits for uploads
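As a sketch of the cross-platform file-permission hardening mentioned above (the helper below is illustrative, not the project's actual code), restricting a local config file to the owning user looks like this:

import os
import stat
import sys

def secure_config_file(path: str) -> None:
    """Restrict a config file to the current user (chmod 600 on POSIX)."""
    if sys.platform == "win32":
        # Windows uses ACLs; these stat flags are only a coarse approximation
        os.chmod(path, stat.S_IREAD | stat.S_IWRITE)
    else:
        os.chmod(path, stat.S_IRUSR | stat.S_IWUSR)  # rw------- (0o600)

if os.path.exists("config/local_config.sh"):
    secure_config_file("config/local_config.sh")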
For detailed security guidelines, see docs/SECURITY.md
We welcome contributions! Please see our Development Guide for details.
# Fork and clone the repository
git clone https://github.com/your-username/workflow.git
cd workflow
# Create virtual environment
python -m venv venv
source venv/bin/activate # Linux/Mac
# or venv\Scripts\activate # Windows
# Install development dependencies
pip install -r requirements-dev.txt
# Run tests
pytest tests/
# Format code
black .
# Submit pull request

This project is licensed under the MIT License; see the LICENSE file for details.