Code-first agentic workflow graphs for Python, with a read-only debugger UI, structured node outputs, live events, and tracing.
agentic-workgraph treats Python as the source of truth. You define workflows with decorated async functions; the runtime derives the graph, executes list-shaped node work, records state, and exposes a FastAPI-based HTTP API plus an embedded `/ui` debugger.
- `@node` and `@workflow` decorators
- eager graph tracing from Python workflow definitions
- async execution with list fan-out and per-node concurrency
- Pydantic output validation
- in-memory store and Redis-backed store support
- run history, version metadata, resume support, and checkpoints
- live WebSocket events for run and node updates
- streamed `ctx.llm(...)` token capture and playback
- OpenTelemetry spans and trace inspection APIs
- embedded `/ui` history and debugger surface
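As a standalone illustration of what Pydantic output validation buys, here is plain `pydantic` usage (not the runtime's actual validation hook; the `Summary` model is invented for the example):

```python
from pydantic import BaseModel, ValidationError

class Summary(BaseModel):
    title: str
    score: float

# A well-formed node output passes, with light coercion ("0.9" -> 0.9).
ok = Summary.model_validate({"title": "subgraphs", "score": "0.9"})

# A malformed output is rejected before it can reach downstream nodes.
try:
    Summary.model_validate({"title": "missing score"})
    rejected = False
except ValidationError:
    rejected = True
```

Validating at the node boundary means a bad output fails the node that produced it, not some later consumer.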
The full design target lives in spec.md.
Requires Python 3.10+.
```shell
python3 -m venv .venv
.venv/bin/python -m pip install --upgrade pip
.venv/bin/python -m pip install -e '.[dev]'
```

Runtime dependencies are documented in pyproject.toml, including:

- fastapi
- pydantic
- redis
- opentelemetry-api
- opentelemetry-sdk
- uvicorn
Run the test suite:
```shell
.venv/bin/python -m pytest -q
```

Launch the demo app:

```shell
.venv/bin/python -m uvicorn demo_app:app --host 0.0.0.0 --port 8081
```

Then open:

- API docs surface: http://127.0.0.1:8081/docs
- Embedded debugger UI: http://127.0.0.1:8081/ui
The demo app in demo_app.py includes:
- `hello-flow`: the smallest end-to-end workflow
- `research-demo`: fan-out summaries, progress updates, stream playback, and traceable runs
- `example-iterative-refinement`: loop modeling in the embedded UI
The example library in examples/README.md adds a broader set of runnable workflows for common agentic patterns.
The embedded UI also supports launching a fresh run directly from the selected workflow with the Run Workflow button.
agentic-workgraph also exposes a workgraph CLI that talks to an already-running API server. It does not launch the server itself.
List workflows:

```shell
workgraph workflows
```

Inspect a workflow's expected input arguments and defaults:

```shell
workgraph launch-spec thalis-concept-intake-to-packet
```

Launch a workflow with named args:

```shell
workgraph run thalis-concept-intake-to-packet --prompt-text="A cathedral grown from black coral and sea-glass"
```

Watch a run until completion:

```shell
workgraph run thalis-concept-intake-to-packet --wait --prompt-text="A cathedral grown from black coral and sea-glass"
workgraph status <run-id> --watch
```

Print the final artifact after waiting, or fetch it later from a past run:

```shell
workgraph run thalis-concept-intake-to-packet --wait --artifact --prompt-text="A cathedral grown from black coral and sea-glass"
workgraph status <run-id> --watch --artifact
workgraph artifact <run-id>
```

By default the CLI targets http://127.0.0.1:8081. Override that with `--base-url` or `WORKGRAPH_BASE_URL`.
Minimal example:
```python
from workgraph import node, workflow

@node(id="hello")
async def hello(ctx, name: str):
    return f"hello {name}"

@workflow(name="hello-flow")
def hello_flow():
    return hello(name=["world"])
```

Node functions are scalar. The runtime handles list-shaped execution, concurrency, progress accounting, retries, validation, tracing, and storage.
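The scalar-function, list-shaped-execution contract can be illustrated with a plain asyncio sketch. `fan_out` here is a hypothetical stand-in for what the runtime does internally, not its real API:

```python
import asyncio

async def hello(name: str) -> str:
    # Scalar node body: handles exactly one item.
    return f"hello {name}"

async def fan_out(fn, items, concurrency=2):
    # Conceptually what the runtime does with a list-shaped input:
    # call the scalar function once per element, bounded by a semaphore.
    sem = asyncio.Semaphore(concurrency)

    async def bounded(item):
        async with sem:
            return await fn(item)

    # gather preserves input order regardless of completion order
    return await asyncio.gather(*(bounded(i) for i in items))

results = asyncio.run(fan_out(hello, ["world", "graph", "runtime"]))
print(results)  # ['hello world', 'hello graph', 'hello runtime']
```

The author keeps writing one-item logic; batching and the concurrency cap stay in the runtime.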
Workflows can also launch other workflows with run_subgraph(...).
```python
from workgraph import node, run_subgraph, workflow

# expand_claim and seed_claims are @node functions defined elsewhere

@workflow(name="child-flow")
def child_flow(claims: list[str]):
    return expand_claim(claim=claims)

@workflow(name="parent-flow")
def parent_flow():
    claims = seed_claims(topic=["subgraphs"])
    return run_subgraph(
        workflow=child_flow,
        id="child_flow_run",
        kwargs={"claims": claims},
    )
```

`run_subgraph(...)` treats the child workflow as one node in the parent graph, but the child execution is recorded as a real linked run with its own history, trace spans, graph, and final artifact. In the UI, subgraph nodes show a title-bar indicator and can navigate directly into the child run.
- src/workgraph: runtime, API, storage, tracing, testing helpers
- src/workgraph/ui: embedded static debugger UI
- examples: runnable example workflows and example app
- docs: agentic pattern documentation and example library notes
- tests: smoke and API coverage
- demo_app.py: runnable demo workflows
- spec.md: design target for v1
- docs/workflow-authoring.md: how to design, register, test, and verify new workflows
- docs/downstream-integration.md: how to embed project-local workflow packages with shared app wiring, fixtures, prompts, and deployment
- docs/example-library.md: what each example workflow demonstrates
- docs/agentic-patterns.md: guidance on pipeline, fan-out, branching, loops, scratchpads, and recovery
- agent.md: contributor guidelines for making surgical changes in this repo
The example library currently includes:
- example-hello
- example-fanout-research
- example-conditional-review
- example-iterative-refinement
- example-scratchpad-collaboration
- example-subgraph-child
- example-subgraph-parent
- example-live-weather-capture
Run the example app with:
```shell
.venv/bin/python -m uvicorn examples.app:app --host 0.0.0.0 --port 8081
```

agentic-workgraph includes first-party Ollama adapters:
```python
from workgraph import Executor, create_ollama_cloud_llm, create_ollama_llm

local_llm = create_ollama_llm(model="gemma3")
cloud_llm = create_ollama_cloud_llm(model="kimi-k2.5:cloud")
executor = Executor(llm_callable=local_llm)
```

Local defaults:
- base URL: http://localhost:11434/api
- no auth required

Direct Ollama Cloud defaults:

- base URL: https://ollama.com/api
- requires `OLLAMA_API_KEY`, `OLLAMA_CLOUD_API_KEY`, or an explicit `api_key=...`
The adapter uses Ollama's generate API so it fits the current `ctx.llm(prompt=...)` contract without introducing a separate chat-message abstraction.
example-live-weather-capture is the real-world reference workflow in the library. It fetches live weather data over HTTP and writes a real screenshot artifact to disk.
There are three straightforward ways to launch a workflow run today.
Open /ui, select a workflow, and click Run Workflow. The UI calls the existing workflow run API and then selects the new run automatically.
If another system needs to trigger jobs, the cleanest boundary is the workflow run API:
```shell
curl -X POST http://127.0.0.1:8081/api/workflows/example-fanout-research/runs
```

If you want custom request handling, add your own FastAPI route beside create_app() and call the executor directly:

```python
from fastapi import Request

from workgraph import create_app
from examples.workflows import fanout_research

app = create_app(workflows=[fanout_research])

@app.post("/webhooks/research")
async def launch_research(request: Request):
    payload = await request.json()
    run = await app.state.executor.run(
        fanout_research,
        seed=[payload.get("seed", "agentic")],
    )
    return {"run_id": run.run_id, "status": run.status}
```

For a single host, cron can call the same workflow run API:
```shell
*/30 * * * * curl -fsS -X POST http://127.0.0.1:8081/api/workflows/example-live-weather-capture/runs >/dev/null
```

In Kubernetes, the equivalent is a CronJob that hits the same endpoint:
```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: workgraph-weather
spec:
  schedule: "*/30 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: trigger
              image: curlimages/curl:8.8.0
              args:
                - -fsS
                - -X
                - POST
                - http://workgraph:8081/api/workflows/example-live-weather-capture/runs
```

The important design point is that UI launches, webhooks, and scheduled jobs can all use the same workflow execution surface instead of separate orchestration code paths.
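That shared surface can be made concrete as a tiny client-side helper. The function is hypothetical, but the endpoint path comes straight from the examples above:

```python
def workflow_run_url(workflow_name: str, base_url: str = "http://127.0.0.1:8081") -> str:
    # UI launches, webhook routes, cron, and the CronJob above all POST here.
    return f"{base_url}/api/workflows/{workflow_name}/runs"

url = workflow_run_url("example-live-weather-capture")
```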
Redis is a required dependency because v1 needs a real state backend. The project includes both an in-memory store and a Redis-backed store adapter. Use Redis when you want shared state across processes or a closer-to-production runtime shape.
This is an active v1 build, not a finished product. The current implementation already covers the core execution loop, observability, live UI, and Redis support, but the spec remains the authoritative target for anything not yet implemented.