An AI-powered blog generation agent that creates structured technical posts from a user-provided topic. The project uses a LangGraph workflow orchestrating planning, optional web research, and section-level writing via LLMs (OpenAI and HuggingFace are supported).
This repository includes:
- A compiled graph workflow implemented in `main.ipynb` that orchestrates routing, planning, optional research, section generation, and final assembly.
- A Streamlit frontend at `frontend.py` that provides a simple UI for entering a topic, running the workflow, and viewing/downloading results.
- Generates multi-section, structured blog posts with section goals, bullets, and target word counts.
- Supports three research modes: `closed_book`, `hybrid`, and `open_book`.
- Uses Pydantic models for structured outputs (`Plan`, `Task`, `EvidenceItem`).
- Exports final post as Markdown and (optionally) PDF.
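
To illustrate the shape of these structured outputs, here is a minimal sketch using stdlib dataclasses. The real models are Pydantic classes defined in `main.ipynb`; the field names below are assumptions for illustration, not the project's actual schema:

```python
from dataclasses import dataclass, field

# Illustrative sketch only: the project's real models are Pydantic classes in
# main.ipynb, and these field names are assumed, not copied from the notebook.

@dataclass
class Task:
    title: str                                        # section heading
    goal: str                                         # what the section should accomplish
    bullets: list[str] = field(default_factory=list)  # talking points to cover
    target_words: int = 300                           # target word count for the section

@dataclass
class Plan:
    topic: str
    tasks: list[Task] = field(default_factory=list)

plan = Plan(
    topic="Vector databases",
    tasks=[Task(title="Introduction", goal="Motivate the topic", bullets=["Why now?"])],
)
```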
- Install Python 3.12+ and clone this repo.
- Create a virtual environment and activate it:

  ```bash
  python -m venv .venv

  # Windows
  .venv\Scripts\Activate.ps1

  # macOS / Linux
  source .venv/bin/activate
  ```

- Install dependencies (choose one):

  ```bash
  # If you have a requirements.txt
  pip install -r requirements.txt

  # Or use `uv` (a lockfile is present as `uv.lock`):
  uv sync

  # To add an optional package, e.g. reportlab for PDF export:
  uv add reportlab
  ```

- Add credentials in a `.env` file at the project root (only the keys you need):

  ```
  OPENAI_API_KEY=sk-...
  HUGGINGFACEHUB_API_TOKEN=hf_...
  TAVILY_API_KEY=...
  ```

- Run the Streamlit UI:

  ```bash
  streamlit run frontend.py
  ```

  Open the UI, enter a topic in the sidebar, then click Generate.
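
Before launching, you can sanity-check that your credentials are visible to the process. A minimal stdlib-only sketch (this assumes the app or `python-dotenv` has already loaded `.env` into the environment; the helper name is illustrative):

```python
import os

def missing_keys(required: list[str]) -> list[str]:
    """Return the names of required credentials absent from the environment."""
    return [k for k in required if not os.environ.get(k)]

# Adjust the list to the providers you actually use:
absent = missing_keys(["OPENAI_API_KEY"])
if absent:
    print(f"Missing keys (check your .env): {absent}")
```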
If you prefer to call the workflow directly from Python (the notebook compiles a workflow object), you can do so from a script or REPL after importing or executing the notebook code. The workflow expects a state dict like:
```python
state = {
    "topic": "Your topic here",
    "mode": "",
    "needs_research": False,
    "queries": [],
    "evidence": [],
    "plan": None,
    "sections": [],
    "final": "",
}

# Then call:
result = workflow.invoke(state)
# result["final"] contains the generated markdown
```

See `main.ipynb` for the full implementation and helper functions such as `run(topic)`.
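
The notebook's `run(topic)` helper presumably wraps this boilerplate. A hedged sketch of such a wrapper, shown here taking the compiled workflow as an explicit argument (the actual implementation lives in `main.ipynb` and may differ):

```python
def make_initial_state(topic: str) -> dict:
    # Build the empty state dict the workflow expects (fields as in the README).
    return {
        "topic": topic,
        "mode": "",
        "needs_research": False,
        "queries": [],
        "evidence": [],
        "plan": None,
        "sections": [],
        "final": "",
    }

def run(topic: str, workflow) -> str:
    # Invoke the compiled LangGraph workflow and return the final markdown.
    result = workflow.invoke(make_initial_state(topic))
    return result["final"]
```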
- `main.ipynb` — primary workflow notebook (defines nodes, compiles `workflow`).
- `frontend.py` — Streamlit UI that loads and invokes the notebook workflow.
- `pyproject.toml` — project metadata and dependencies.
- `State of Multimodal Large Language Models (LLMs) in 2026.md` — an example generated post.
- The Streamlit UI offers a PDF download; PDF generation uses `reportlab`. To enable it:

  ```bash
  pip install reportlab
  ```

- The generated PDF is a plain rendering of the markdown. For richer PDF styling (fonts, headings, code blocks), consider a markdown-to-PDF toolchain such as `weasyprint` or `pandoc` + `wkhtmltopdf`.
- Python 3.12+
- LangChain + LangGraph for orchestration and LLM interactions
- Pydantic for structured outputs
- Streamlit for the frontend UI
- Optional: HuggingFace / OpenAI model endpoints
- PDF rendering is basic; complex layouts (code blocks, tables) may not render perfectly.
- Per-node live progress is not enabled by default; implementing a progress callback inside the notebook graph is recommended for detailed UI updates.
- Workflow execution calls external LLM APIs; make sure the relevant API keys are set and network access is available.
For questions or issues, open an issue or contact the author (Abbas Ali). Include logs and a description of steps to reproduce.
Author: Abbas Ali
