A multi-agent AI system built with LangGraph and FastAPI that intelligently routes queries to specialized agents for weather information, document Q&A, meeting scheduling, and database queries.
- Smart Query Routing: Automatically classifies user queries and routes them to the appropriate agent
- Weather Agent: Fetches real-time weather data using OpenWeather API
- Document Q&A Agent: Processes PDF documents and answers questions using RAG (Retrieval-Augmented Generation)
- Meeting Scheduler: Intelligently schedules meetings based on weather conditions
- Database Agent: Executes natural language queries against a meetings database
- RESTful API: FastAPI endpoints for document upload and chat queries
- Error Handling: Comprehensive error handling and fallback mechanisms
- Vector Store: Uses Chroma with HuggingFace embeddings for document retrieval
- Python 3.12 or higher
- Conda (for environment management)
- GROQ API Key (for LLM access)
- OpenWeather API Key (for weather data)
```bash
conda create -n venv_py312 python=3.12 -y
conda activate venv_py312
pip install -r requirements.txt
```

Create a `.env` file in the project root:
```
GROQ_API_KEY=your_groq_api_key_here
OPENWEATHER_API_KEY=your_openweather_api_key_here
```

Get your keys from Groq and OpenWeather.
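Once the keys are in place, it can help to fail fast at startup if one is missing. A minimal stdlib-only sketch (the `require_env` helper is illustrative, not part of this repo; `load_dotenv()` from python-dotenv would populate `os.environ` first):

```python
import os

def require_env(*names):
    """Return the requested environment variables, raising if any is unset."""
    missing = [n for n in names if not os.environ.get(n)]
    if missing:
        raise RuntimeError("Missing environment variables: " + ", ".join(missing))
    return {n: os.environ[n] for n in names}

# After load_dotenv() has populated os.environ:
# keys = require_env("GROQ_API_KEY", "OPENWEATHER_API_KEY")
```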
```
agentic-backend/
├── main.py            # FastAPI application entry point
├── agent_graph.py     # LangGraph workflow and agent definitions
├── tool.py            # Tool definitions (weather, web search)
├── rag.py             # RAG pipeline for document processing
├── database.py        # SQLAlchemy database models
├── meeting.py         # Meeting management utilities
├── requirements.txt   # Python dependencies
├── .env               # Environment variables (not in repo)
├── meetings.db        # SQLite database (auto-created)
└── README.md          # This file
```
POST /upload

Upload a PDF document for processing.

Request:

```bash
curl -X POST "http://localhost:8000/upload" \
  -F "file=@document.pdf"
```

Response:

```json
{
  "message": "Document processed successfully."
}
```

POST /chat
Send a query to the agentic system.

Request:

```bash
curl -X POST "http://localhost:8000/chat" \
  -H "Content-Type: application/json" \
  -d '{"query": "What is the weather in Chennai?"}'
```

Response:

```json
{
  "response": "Weather in Chennai: clear sky, Temp: 28.5°C"
}
```

Query: "What's the weather in New York?"
Response: Agent routes to weather_worker → fetches real-time data
Query: "What are the company policies?"
Response: Agent routes to doc_worker → searches uploaded documents
Query: "Schedule a meeting in London"
Response: Agent routes to scheduler_worker → checks weather → recommends scheduling
Query: "Show me all meetings tomorrow"
Response: Agent routes to db_worker → executes SQL → returns results
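The same /chat call can be made from Python. A stdlib-only client sketch (the endpoint shape is taken from the curl examples above; the helper names are illustrative):

```python
import json
import urllib.request

def build_chat_request(query, base_url="http://localhost:8000"):
    """Build a POST request for the /chat endpoint."""
    return urllib.request.Request(
        f"{base_url}/chat",
        data=json.dumps({"query": query}).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

def ask_agent(query, base_url="http://localhost:8000"):
    """Send the query and return the agent's "response" field."""
    with urllib.request.urlopen(build_chat_request(query, base_url)) as resp:
        return json.loads(resp.read())["response"]

# ask_agent("What is the weather in Chennai?")  # requires the server to be running
```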
```
User Query
    ↓
Router Agent (Classification)
    ↓
    ├─→ Weather Agent
    ├─→ Document QA Agent
    ├─→ Meeting Scheduler
    └─→ Database Query Agent
    ↓
Response
```
- Router Node: Classifies incoming queries into 4 categories
- Weather Agent: Extracts city, fetches weather using OpenWeather API
- Document QA Agent: Uses Chroma vector store for similarity search, falls back to web search
- Scheduler Agent: Extracts location, checks weather, provides scheduling recommendation
- Database Agent: Uses SQL agent to convert natural language to SQL queries
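In the actual graph the router node is an LLM classifier; as a rough illustration of the classify-then-dispatch pattern (simple keyword rules stand in for the LLM here, and only the worker names mirror the ones above):

```python
def route(query):
    """Toy classifier: map a query to one of the four worker nodes.

    The real router uses an LLM; these keyword rules are only illustrative.
    """
    q = query.lower()
    if "schedule" in q:
        return "scheduler_worker"
    if "weather" in q:
        return "weather_worker"
    if "meeting" in q:
        return "db_worker"
    return "doc_worker"

# Hypothetical worker stubs, dispatched by the router's label
WORKERS = {
    "weather_worker":   lambda q: f"[weather lookup] {q}",
    "doc_worker":       lambda q: f"[document search] {q}",
    "scheduler_worker": lambda q: f"[scheduling check] {q}",
    "db_worker":        lambda q: f"[sql query] {q}",
}

def handle(query):
    return WORKERS[route(query)](query)
```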
Currently using llama-3.3-70b-versatile from Groq. To change, edit agent_graph.py:

```python
llm = ChatGroq(
    temperature=0,
    model_name="your_model_here",
    api_key=os.getenv("GROQ_API_KEY"),
)
```

Document embedding uses HuggingFace's all-MiniLM-L6-v2 model. Change in rag.py:

```python
embeddings = HuggingFaceEmbeddings(model_name="your_model_here")
```

Start the server:

```bash
python main.py
```

The API will be available at http://localhost:8000
Access Swagger documentation at http://localhost:8000/docs
```sql
CREATE TABLE meetings (
    id INTEGER PRIMARY KEY,
    title VARCHAR,
    start_time VARCHAR,
    description VARCHAR
);
```

Use the meeting.py utilities:

- create_meeting(): Add a new meeting
- get_all_meetings(): List all meetings
- update_meeting(): Modify meeting details
- delete_meeting(): Remove a meeting
- search_meetings(): Search by title or description
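The repo manages this table through SQLAlchemy models in database.py. Purely to illustrate the schema and the create/search operations, a stdlib sqlite3 sketch (function names mirror meeting.py, but these bodies are assumptions, not the repo's code):

```python
import sqlite3

def connect(path="meetings.db"):
    """Open the database, creating the meetings table if needed."""
    conn = sqlite3.connect(path)
    conn.execute(
        """CREATE TABLE IF NOT EXISTS meetings (
               id INTEGER PRIMARY KEY,
               title VARCHAR,
               start_time VARCHAR,
               description VARCHAR
           )"""
    )
    return conn

def create_meeting(conn, title, start_time, description=""):
    """Insert a meeting and return its new id."""
    cur = conn.execute(
        "INSERT INTO meetings (title, start_time, description) VALUES (?, ?, ?)",
        (title, start_time, description),
    )
    conn.commit()
    return cur.lastrowid

def search_meetings(conn, term):
    """Match `term` against title or description (SQLite LIKE, case-insensitive for ASCII)."""
    like = f"%{term}%"
    return conn.execute(
        "SELECT id, title, start_time FROM meetings "
        "WHERE title LIKE ? OR description LIKE ?",
        (like, like),
    ).fetchall()
```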
- Verify `.env` file exists in the project root
- Check API keys are correct and active
- Ensure keys have sufficient quota
If receiving a "model decommissioned" error:

- Check Groq's model deprecation announcements
- Update the model name in `agent_graph.py`
- Restart the application
- Ensure PDF file is valid
- Check file size (recommended < 50MB)
- Verify write permissions in project directory
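A cheap pre-upload sanity check covers the first two points. A sketch (the helper is illustrative; the 50 MB cap matches the recommendation above, and magic bytes are not full PDF validation):

```python
import os

MAX_SIZE = 50 * 1024 * 1024  # 50 MB, per the recommendation above

def looks_like_valid_pdf(path):
    """Cheap sanity check: size cap plus the %PDF- magic bytes."""
    if os.path.getsize(path) > MAX_SIZE:
        return False
    with open(path, "rb") as f:
        return f.read(5) == b"%PDF-"
```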
- The SQLite database is created automatically on first run
- Check for `meetings.db` in the project root
- Ensure there are no file permission issues
- Verify OpenWeather API key is valid
- Check city name spelling
- Rate limit: Free tier allows 60 calls/minute
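When debugging weather failures, it helps to reproduce the exact request the weather tool makes. The endpoint below is OpenWeather's standard current-weather API; the helper name is illustrative:

```python
from urllib.parse import urlencode

def weather_url(city, api_key, units="metric"):
    """Build an OpenWeather current-weather URL (units=metric returns °C)."""
    base = "https://api.openweathermap.org/data/2.5/weather"
    return f"{base}?{urlencode({'q': city, 'appid': api_key, 'units': units})}"

# Fetching this URL returns JSON with fields such as
# ["weather"][0]["description"] and ["main"]["temp"]
```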
- FastAPI: Web framework
- Uvicorn: ASGI server
- LangChain: LLM framework
- LangGraph: Workflow orchestration
- Groq: LLM provider
- Chroma: Vector database
- HuggingFace: Embeddings
- SQLAlchemy: ORM
- Python-dotenv: Environment management
- Caching: Enable LangChain's LLM cache (via set_llm_cache) to avoid repeated identical LLM calls; caching is opt-in, not enabled by default
- Batch Queries: Issue independent queries concurrently for better throughput
- Document Size: Smaller PDFs process faster
- Chunking: Adjust the chunk size in `rag.py` to trade retrieval accuracy against context length
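Chunk size trades recall against precision: larger chunks keep more context per retrieved hit, smaller ones localize answers better. A plain-Python sketch of fixed-size chunking with overlap (rag.py most likely uses a LangChain text splitter; this only illustrates the two parameters):

```python
def chunk_text(text, chunk_size=500, overlap=50):
    """Split text into fixed-size chunks whose edges overlap by `overlap` chars."""
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    step = chunk_size - overlap
    return [text[i:i + chunk_size] for i in range(0, len(text), step)]
```

The overlap keeps a sentence that straddles a chunk boundary retrievable from either side.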
- Multi-language support
- Real-time streaming responses
- Advanced caching strategies
- Message history tracking
- User authentication
- Rate limiting
- Monitoring and logging dashboard
- Docker containerization
MIT
For issues or questions, see the Troubleshooting section above.
Last Updated: January 21, 2026