An AI-powered blog generation system built with LangGraph, LangChain, and Groq that researches and writes high-quality technical blog posts. The agent uses Tavily for real-time web research and runs as an interactive Streamlit application.
- Multi-agent LangGraph Workflow: Dynamically routes between research and orchestration modes based on topic requirements.
- Automated Web Research: Leverages Tavily Search to gather up-to-date evidence and context.
- Dynamic Planning: An Orchestrator node creates a structured plan with specific sections, goals, and target word counts.
- Parallel Processing: Fan-out architecture delegates section writing to parallel worker nodes for faster generation.
- Interactive UI: A Streamlit interface to input topics, watch the agent's real-time execution log (including queries, evidence, and plans), and view/download generated Markdown blogs.
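The features above all read from and write to a single shared state that flows between nodes. The project's actual state lives in `src/schemas.py`; the snippet below is a minimal, hypothetical sketch of what such a state might look like (all field names here are assumptions, not the project's real schema):

```python
from typing import List, TypedDict

class SectionTask(TypedDict):
    """One planned blog section (hypothetical shape)."""
    title: str
    goal: str
    target_words: int

class AgentState(TypedDict, total=False):
    """State shared by the Router, Research, Orchestrator, Worker, and Reducer nodes."""
    topic: str                # user-supplied blog topic
    needs_research: bool      # Router's decision: call Tavily or go straight to planning
    evidence: List[str]       # snippets gathered by the Research node
    plan: List[SectionTask]   # Orchestrator's structured outline
    sections: List[str]       # Markdown produced by parallel Worker nodes
    final_blog: str           # Reducer's assembled output

# Example of the state as it might look mid-run:
state: AgentState = {
    "topic": "The Future of Multi-Agent Systems",
    "needs_research": True,
    "evidence": ["Tavily snippet about agent frameworks..."],
    "plan": [{"title": "Introduction", "goal": "Set context", "target_words": 150}],
}
```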
The system operates on a state graph, shown in the following Mermaid diagram:

```mermaid
graph TD
    User([User Input: Topic]) --> Router[Router Node: Determine Mode]
    Router -->|needs_research = true| Research[Research Node: Tavily Search]
    Router -->|needs_research = false| Orchestrator[Orchestrator Node: Planning]
    Research --> Orchestrator
    Orchestrator -->|fanout tasks| Worker[Worker Node: Section Writing]
    Worker --> Reducer[Reducer Node: Assembly]
    Reducer --> Output([Generated Markdown Blog])
```
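In the project itself this graph is wired up with LangGraph in `src/graph.py`. The pure-Python sketch below simply traces the same control flow (route, optionally research, plan, fan out to workers, reduce); every node body is a placeholder stand-in, not the actual LLM-backed implementation:

```python
def router(state: dict) -> dict:
    # Placeholder heuristic; the real Router node uses an LLM to decide.
    state["needs_research"] = "latest" in state["topic"].lower()
    return state

def research(state: dict) -> dict:
    # Stand-in for the Tavily search call.
    state["evidence"] = [f"evidence about {state['topic']}"]
    return state

def orchestrator(state: dict) -> dict:
    # Stand-in for LLM planning: emit structured section tasks.
    state["plan"] = [
        {"title": "Introduction", "target_words": 150},
        {"title": "Deep Dive", "target_words": 400},
    ]
    return state

def worker(task: dict) -> str:
    # Each Worker writes one section; LangGraph runs these in parallel.
    return f"## {task['title']}\n\n({task['target_words']} words here)"

def reducer(sections: list) -> str:
    # The Reducer concatenates sections into the final Markdown blog.
    return "\n\n".join(sections)

def run(topic: str) -> str:
    state = router({"topic": topic})
    if state["needs_research"]:
        state = research(state)
    state = orchestrator(state)
    sections = [worker(t) for t in state["plan"]]  # fan-out (sequential here)
    return reducer(sections)

blog = run("Latest trends in multi-agent systems")
```

In the real graph the fan-out is handled by LangGraph's parallel execution rather than a list comprehension, which is what makes section writing concurrent.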
- Python >= 3.12
- Groq API key for LLM inference (using models like `llama-3.3-70b-versatile`)
- Tavily API key for automated web research
This project uses `uv` for fast dependency management, but standard `pip` also works.
- Clone the repository:

  ```bash
  git clone https://github.com/yourusername/langgraph-blog-agent.git
  cd langgraph-blog-agent
  ```

- Environment variables: create a `.env` file in the root directory and add your API keys:

  ```
  GROQ_API_KEY=your_groq_api_key_here
  TAVILY_API_KEY=your_tavily_api_key_here
  ```

- Install dependencies. If using `uv`:

  ```bash
  uv pip install -r pyproject.toml
  ```

  Or using standard `pip`:

  ```bash
  pip install python-dotenv ipykernel langchain-groq langchain-tavily langgraph streamlit
  ```
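At startup the application is expected to read these keys from the `.env` file (typically via `python-dotenv`'s `load_dotenv()`). As a rough illustration of what that loading step does, here is a stdlib-only sketch that parses `.env`-style text into environment variables; it is a simplified stand-in, not the library itself:

```python
import os

def load_env_text(text: str) -> None:
    """Minimal .env parser: KEY=value lines; blank lines and '#' comments ignored."""
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        # setdefault mirrors load_dotenv's default of not overriding existing vars
        os.environ.setdefault(key.strip(), value.strip())

load_env_text(
    "GROQ_API_KEY=your_groq_api_key_here\n"
    "TAVILY_API_KEY=your_tavily_api_key_here\n"
)
```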
Start the Streamlit application:

```bash
streamlit run app.py
```

- Open your browser to the URL provided by Streamlit (usually `http://localhost:8501`).
- Navigate to the 🚀 Generate New Blog tab.
- Enter a topic (e.g., "The Future of Multi-Agent Systems") and click "Generate Blog".
- Watch the agent's execution log as it routes, researches, plans, and writes.
- Go to the 📚 View & Manage Blogs tab to read or download the final Markdown file.
- `app.py`: The main Streamlit web interface and agent execution visualizer.
- `src/`: Core agent logic.
  - `graph.py`: LangGraph state graph definition mapping nodes and edges.
  - `nodes.py`: Implementation of individual agent nodes (Router, Research, Orchestrator, Worker, Reducer).
  - `schemas.py`: Pydantic models for structured LLM outputs and the agent state.
  - `prompts.py`: System prompts for guiding the LLM at each node stage.
- `generated_blogs/`: Output directory where the final Markdown blog posts are saved.
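`schemas.py` uses Pydantic to constrain the Orchestrator's LLM output into a structured plan. The dataclass sketch below shows the general shape such a schema might take; the field names are hypothetical and the real Pydantic models live in `src/schemas.py`:

```python
from dataclasses import dataclass, field

@dataclass
class Section:
    """One planned section of the blog (hypothetical fields)."""
    title: str
    goal: str
    target_words: int

@dataclass
class BlogPlan:
    """Structured plan the Orchestrator asks the LLM to emit."""
    topic: str
    sections: list = field(default_factory=list)

    def total_words(self) -> int:
        return sum(s.target_words for s in self.sections)

plan = BlogPlan(
    topic="The Future of Multi-Agent Systems",
    sections=[
        Section("Introduction", "Set context", 150),
        Section("Core Concepts", "Explain orchestration and fan-out", 400),
    ],
)
```

A structured schema like this is what lets the Orchestrator fan out one Worker per section with a concrete goal and word budget.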