A LangGraph + Streamlit blog generation app that plans, researches, writes, and exports technical blog posts.
- Builds a structured blog plan with section tasks and writing constraints.
- Optionally performs web research with Tavily and normalizes evidence.
- Writes each section with an LLM worker node.
- Merges sections into final markdown.
- Optionally decides and generates technical images.
- Saves output markdown to generated_blogs/ and images to generated_blogs/images/.
- Provides a Streamlit UI with multi-chat history and result tabs.
- blogAgent.py: LangGraph backend pipeline, Pydantic schemas, Tavily research, planning/writing, image generation, and app graph compile.
- frontend.py: Streamlit UI, multi-chat sessions, invoke flow, tabs for Plan/Evidence/Markdown/Images/Logs.
- requirement.txt: Python dependencies.
- generated_blogs/: Generated markdown outputs.
- generated_blogs/images/: Generated image assets.
- .env: Environment variables for API keys (not committed).
Main graph:
- router
- research (conditional)
- orchestrator
- worker (fanout for section tasks)
- reducer subgraph
Reducer subgraph:
- merge_content
- decide_images
- generate_and_place_image
Output state includes final markdown and intermediate artifacts like plan, evidence, and image specs.
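The graph flow above can be sketched with plain functions standing in for the LangGraph nodes. Node names mirror the graph description; the function bodies are illustrative stubs, not the app's real logic.

```python
def router(state):
    # Decide whether the topic needs web research (stub heuristic).
    state["needs_research"] = "latest" in state["topic"].lower()
    return state

def research(state):
    # Stub: the real node queries Tavily and normalizes evidence.
    state["evidence"] = [{"url": "https://example.com", "snippet": "..."}]
    return state

def orchestrator(state):
    # Stub: the real node builds a plan with one task per section.
    state["plan"] = {"sections": ["Intro", "Body", "Conclusion"]}
    return state

def worker(state, section):
    # Stub: the real node calls the LLM once per section task (fan-out).
    return f"## {section}\n\n(section text)\n"

def reducer(state):
    # Stands in for merge_content -> decide_images -> image placement.
    state["final"] = "".join(state["sections"])
    return state

def run_pipeline(topic):
    state = {"topic": topic, "sections": []}
    state = router(state)
    if state["needs_research"]:                  # conditional edge
        state = research(state)
    state = orchestrator(state)
    for section in state["plan"]["sections"]:    # worker fan-out
        state["sections"].append(worker(state, section))
    return reducer(state)
```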
- Python 3.10+ recommended
- API keys:
- MISTRAL_API_KEY (required for ChatMistralAI)
- TAVILY_API_KEY (required for web research mode)
- GOOGLE_API_KEY (optional, used only when image generation is requested)
Windows PowerShell:
python -m venv venv
Set-ExecutionPolicy -Scope Process -ExecutionPolicy RemoteSigned
.\venv\Scripts\Activate.ps1
pip install -r requirement.txt
If your requirements file encoding causes issues, open and save requirement.txt as UTF-8, then re-run the install.
Create a .env file in the project root:
MISTRAL_API_KEY=your_mistral_key
TAVILY_API_KEY=your_tavily_key
GOOGLE_API_KEY=your_google_key
Notes:
- GOOGLE_API_KEY is optional.
- If image generation fails, the app still produces markdown and inserts a failure note where image placeholders exist.
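The app presumably loads these variables with a helper such as python-dotenv; the stdlib-only sketch below shows the equivalent behavior for illustration.

```python
import os

def load_env_file(path=".env"):
    # Read KEY=value lines, skipping blanks and comments. Existing shell
    # environment variables win over values from the file.
    try:
        with open(path, encoding="utf-8") as fh:
            for line in fh:
                line = line.strip()
                if not line or line.startswith("#") or "=" not in line:
                    continue
                key, _, value = line.partition("=")
                os.environ.setdefault(key.strip(), value.strip())
    except FileNotFoundError:
        pass  # a missing .env is fine; keys may come from the shell
```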
Start Streamlit:
streamlit run frontend.py
Open the local URL shown in the terminal.
Sidebar:
- Chats: Create a new chat and switch between conversations.
- Generate:
- Topic input
- As-of date
- Generate Blog button
Main area:
- Chat History
- Tabs:
- Plan
- Evidence
- Markdown Preview
- Images
- Logs
Behavior:
- Backend runs once per Generate click.
- Generate button is disabled while a request is in progress.
- Each chat stores its own output/history/logs.
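Per-chat isolation can be sketched as a store keyed by chat id; in the real frontend this would live in Streamlit's st.session_state, but a plain dict shows the shape.

```python
def new_chat(store, chat_id):
    # Each chat gets its own output, history, and logs, so switching
    # chats in the sidebar never mixes results between conversations.
    store[chat_id] = {"history": [], "output": None, "logs": []}
    return store[chat_id]
```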
- Markdown files: generated_blogs/<safe_title>.md
- Image files: generated_blogs/images/
- Markdown image links are written with an images/ prefix so they resolve relative to generated_blogs/.
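A minimal sketch of how the safe filename and output layout described above could be produced; the helper names here are assumptions, not the app's actual API.

```python
import re
from pathlib import Path

def safe_title(title):
    # Keep letters, digits, hyphens, and underscores; collapse everything
    # else into a single underscore and trim the ends.
    return re.sub(r"[^A-Za-z0-9_-]+", "_", title).strip("_")

def save_blog(title, markdown, root="generated_blogs"):
    out_dir = Path(root)
    (out_dir / "images").mkdir(parents=True, exist_ok=True)
    path = out_dir / f"{safe_title(title)}.md"
    path.write_text(markdown, encoding="utf-8")
    return path
```

With this layout, a link such as `![diagram](images/fig.png)` in the saved markdown resolves as long as the file is viewed from inside generated_blogs/.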
You can call the compiled graph directly:
from blogAgent import app
out = app.invoke({
"topic": "Your topic",
"mode": "",
"needs_research": False,
"queries": [],
"evidence": [],
"plan": None,
"as_of": "2026-04-19",
"recency_days": 7,
"sections": [],
"merged_md": "",
"md_with_placeholders": "",
"image_specs": [],
"final": "",
})
print(out.get("final", ""))

There is also a helper:
from blogAgent import run
print(run("Topic")["final"])

- Ensure TAVILY_API_KEY is set.
- The backend includes a deterministic fallback when LLM research JSON parsing fails.
- Source is derived from URL domain.
- If a result has no valid URL, it is skipped.
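The normalization rules above can be sketched as follows: the source is derived from the URL's domain, and results without a valid URL are skipped. The result field names (`url`, `content`) are assumptions about the Tavily response shape.

```python
from urllib.parse import urlparse

def normalize_evidence(results):
    evidence = []
    for item in results:
        url = item.get("url", "")
        domain = urlparse(url).netloc
        if not domain:          # no valid URL -> skip the result
            continue
        evidence.append({
            "url": url,
            "source": domain.removeprefix("www."),
            "snippet": item.get("content", ""),
        })
    return evidence
```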
- Task schema requires:
- requires_research
- require_citations
- require_code
- Planning prompt explicitly instructs the model to set these.
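The backend uses Pydantic for its schemas; this stdlib dataclass sketch only illustrates the three required flags on each section task, with an invented field name for the title.

```python
from dataclasses import dataclass

@dataclass
class SectionTask:
    title: str                 # illustrative field, not the real schema
    requires_research: bool    # must be set by the planning model
    require_citations: bool
    require_code: bool

task = SectionTask("Benchmarks", requires_research=True,
                   require_citations=True, require_code=False)
```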
- Check GOOGLE_API_KEY.
- Image generation errors are handled gracefully and do not block markdown output.
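The fail-soft behavior described above can be sketched like this: if generating an image raises, the markdown is kept and the placeholder is replaced with a failure note. The placeholder token and function names are assumptions for illustration.

```python
def place_image(md, placeholder, generate):
    try:
        path = generate()      # may raise, e.g. when GOOGLE_API_KEY is missing
        return md.replace(placeholder, f"![figure]({path})")
    except Exception as exc:
        # Keep the markdown output; just note the failure inline.
        return md.replace(placeholder, f"*Image generation failed: {exc}*")
```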
- The filename is requirement.txt (singular), not requirements.txt.
- The output folder generated_blogs/ is git-ignored in this repo.