This project demonstrates a complete LangGraph-based multi-agent system with MongoDB persistence, ChromaDB vector storage, and support for multiple AI providers (Google Gemini and OpenRouter).
- 🤖 Multi-Agent System: LangGraph workflow with planner → executor → reviewer → fixer auto-iteration
- 💾 Persistent Storage: MongoDB with Prisma for conversations and project data
- 🧠 Vector Memory: ChromaDB integration for long-term project context and code search
- 🔄 Auto-Iteration: Automatic retry and fix cycles when code generation fails
- 🎯 18 Advanced Tools: File operations, validation, optimization, and more
- 📱 Chat Interface: Modern UI with project management and session persistence
- 🐍 Python Backend Option: FastAPI backend with MCP (Model Context Protocol) integration
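The planner → executor → reviewer → fixer auto-iteration can be sketched as a plain loop. This is a conceptual outline only; the real project wires these stages together as LangGraph nodes, and every name below (`Agents`, `runWorkflow`, `maxIterations`) is illustrative.

```typescript
// Conceptual sketch of the planner → executor → reviewer → fixer cycle.
type Review = { ok: boolean; feedback: string };

interface Agents {
  plan(task: string): string;
  execute(plan: string): string;
  review(output: string): Review;
  fix(output: string, feedback: string): string;
}

function runWorkflow(agents: Agents, task: string, maxIterations = 3): string {
  const plan = agents.plan(task);
  let output = agents.execute(plan);
  for (let i = 0; i < maxIterations; i++) {
    const review = agents.review(output);
    if (review.ok) return output; // reviewer approved: done
    // Otherwise feed the reviewer's feedback back into a fix cycle
    output = agents.fix(output, review.feedback);
  }
  return output; // give up after maxIterations fix cycles
}
```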
1. Install dependencies:

   ```bash
   pnpm install
   ```

2. Set up environment variables:

   ```bash
   cp .env.example .env
   # Edit .env with your API keys
   ```

3. Start MongoDB (if using a local instance):

   ```bash
   mongod
   ```

4. Run the application:

   ```bash
   pnpm run dev
   ```
1. Install Python dependencies:

   ```bash
   cd Pythonagents/fastapi-mcp-agent
   python3 -m venv venv
   source venv/bin/activate
   pip install -r requirements.txt
   cd ../..
   ```

2. Set up environment variables:

   ```bash
   cp .env.example .env
   # Edit .env and set BACKEND_TYPE=python
   # Add your API keys (GROQ_API_KEY, GEMINI_API_KEY, etc.)
   ```

3. Run with the Python backend:

   ```bash
   pnpm run dev:python
   ```

   Or manually:

   ```bash
   # Terminal 1: Start Python backend
   cd Pythonagents/fastapi-mcp-agent
   source venv/bin/activate
   uvicorn app.main:app --host 0.0.0.0 --port 8000 --reload

   # Terminal 2: Start Next.js frontend
   npm run dev
   ```
```bash
# Build and run both services
docker-compose up --build

# Or run individually
docker-compose up frontend
docker-compose up backend
```

The system automatically selects the best available AI provider:
- Google Gemini
  - Environment variable: `GEMINI_API_KEY`
  - Get an API key: visit Google AI Studio
  - Advantages: no rate limiting, high quality, cost-effective
  - Default model: `gemini-1.5-pro`
- OpenRouter
  - Environment variable: `OPENROUTER_API_KEY`
  - Get an API key: visit OpenRouter
  - Note: may experience rate limiting on the free tier
- If `GEMINI_API_KEY` is set → uses Google Gemini
- Else if `OPENROUTER_API_KEY` is set → uses OpenRouter
- Otherwise → throws an error requiring an API key
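The selection logic above can be sketched as follows. This is an illustrative stand-in, not the project's actual implementation; `pickProvider` and the `Provider` type are hypothetical names, and the default model strings are taken from the configuration section below.

```typescript
// Hypothetical sketch of the provider-selection rule described above.
type Provider = { name: "gemini" | "openrouter"; model: string };

function pickProvider(env: Record<string, string | undefined>): Provider {
  if (env.GEMINI_API_KEY) {
    // Gemini wins whenever its key is present
    return { name: "gemini", model: env.GEMINI_MODEL ?? "gemini-1.5-pro" };
  }
  if (env.OPENROUTER_API_KEY) {
    // OpenRouter is the fallback provider
    return { name: "openrouter", model: env.OPENROUTER_MODEL ?? "openai/gpt-4o" };
  }
  // No provider configured: fail fast with a clear message
  throw new Error("No AI provider configured: set GEMINI_API_KEY or OPENROUTER_API_KEY");
}
```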
```bash
# AI Provider (choose one)
GEMINI_API_KEY=your-gemini-api-key-here
# OR
OPENROUTER_API_KEY=your-openrouter-key-here

# Database
DATABASE_URL="mongodb://localhost:27017/nextlovable"

# Optional overrides
GEMINI_MODEL=gemini-1.5-pro
OPENROUTER_MODEL=openai/gpt-4o
```

- Frontend: Next.js 14 with React and Tailwind CSS
- Backend: Next.js API routes with streaming responses
- Database: MongoDB with Prisma ORM
- Vector Store: ChromaDB with LangChain integration
- AI Framework: LangChain + LangGraph for agent orchestration
- State Management: Custom memory system with conversation persistence
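The streaming responses mentioned above can be sketched with the standard `ReadableStream` API that Next.js route handlers accept. The handler below is an illustrative stand-in, not the project's actual `app/api/chat` route; the chunk list is a placeholder for the real agent output stream.

```typescript
// Minimal sketch of a streaming route handler (e.g. app/api/chat/route.ts).
export async function POST(_req: Request): Promise<Response> {
  const encoder = new TextEncoder();
  const chunks = ["Thinking…", "Generating code…", "Done."]; // placeholder agent output

  const stream = new ReadableStream({
    start(controller) {
      // Push each chunk to the client as it becomes available
      for (const chunk of chunks) {
        controller.enqueue(encoder.encode(chunk + "\n"));
      }
      controller.close();
    },
  });

  return new Response(stream, {
    headers: { "Content-Type": "text/plain; charset=utf-8" },
  });
}
```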
```bash
# Install dependencies
pnpm install

# Run in development mode
pnpm run dev

# Build for production
pnpm run build

# Run linting
pnpm run lint
```

```
├── app/                  # Next.js app directory
│   ├── api/              # API routes
│   │   ├── chat/         # Main chat endpoint
│   │   └── sessions/     # Session management
│   └── page.tsx          # Main chat interface
├── components/           # React components
│   └── ChatSidebar.tsx   # Project management sidebar
├── lib/                  # Core libraries
│   ├── agents/           # LangGraph agent definitions
│   ├── db/               # Database utilities
│   └── utils/            # Helper functions
├── prisma/               # Database schema
└── src/                  # Source code
    └── agents/           # Agent implementations
```