
JamJet


The agent-native runtime for production AI.

Durable execution  ·  Native MCP + A2A  ·  Rust core  ·  Python SDK



What is JamJet?

JamJet is a durable, graph-based workflow runtime for AI agents. Crash at step 7 of 12 — resume from step 7. Every step transition is an event you can inspect, replay, and test against. The Rust core handles durability and performance. The Python SDK handles everything developers actually touch.

from pydantic import BaseModel
from jamjet import Workflow

wf = Workflow("research-agent")

@wf.state
class State(BaseModel):
    query: str
    answer: str = ""

@wf.step
async def think(state: State) -> State:
    # your LLM call here — any model, any provider
    return state.model_copy(update={"answer": "..."})

# Local execution — no server needed
result = wf.run_sync(State(query="What is JamJet?"))

# Production — durable, crash-safe
# jamjet dev && jamjet run workflow.yaml --input '{"query": "..."}'
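The durability model described above ("crash at step 7, resume from step 7") can be illustrated with a toy event journal in plain Python. This is a sketch of the idea only, not JamJet's internals: each completed step appends an event to a JSONL journal, and a re-run replays the journal and skips straight to the first unfinished step.

```python
import json
import os
import tempfile

def run_durable(steps, state, journal_path):
    """Run (name, fn) steps, journaling each result so a re-run resumes."""
    # Load results of previously completed steps, if the journal exists.
    done = {}
    if os.path.exists(journal_path):
        with open(journal_path) as f:
            for line in f:
                event = json.loads(line)
                done[event["step"]] = event["state"]

    for name, fn in steps:
        if name in done:
            state = done[name]  # replay: skip work that already completed
            continue
        state = fn(state)       # execute the step for real
        with open(journal_path, "a") as f:
            f.write(json.dumps({"step": name, "state": state}) + "\n")
    return state

steps = [
    ("fetch", lambda s: {**s, "docs": 3}),
    ("think", lambda s: {**s, "answer": "draft"}),
]
path = os.path.join(tempfile.mkdtemp(), "journal.jsonl")
result = run_durable(steps, {"query": "What is JamJet?"}, path)
```

If the process dies between "fetch" and "think", the next invocation finds the "fetch" event in the journal and only runs "think", which is the same step-level resume semantics the runtime advertises.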

The stack

| Protocol | What it means |
| --- | --- |
| MCP native | Connect any MCP server in one line — Brave Search, GitHub, Postgres, your own |
| A2A native | Agents call other agents across machines and orgs via the open A2A standard |
| Rust runtime | ~1ms framework overhead. Durable by default. No Python GIL contention at scale |
| Eval harness | 4 scorer types, JSONL datasets, CI exit codes — test your agents like software |
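For context on the MCP row: MCP is JSON-RPC 2.0 on the wire, and invoking a tool on an MCP server is a `tools/call` request. Below is a minimal stdlib sketch of that request shape (not the JamJet client API; the `brave_web_search` tool name is an assumed example of what a Brave Search MCP server might expose):

```python
import json

def mcp_tool_call(request_id, tool, arguments):
    # Shape of an MCP "tools/call" request (JSON-RPC 2.0 envelope).
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    })

req = mcp_tool_call(1, "brave_web_search", {"query": "jamjet runtime"})
decoded = json.loads(req)
```

A client framework that speaks this protocol natively can point the same request at any conforming server, which is what makes "connect any MCP server in one line" possible in principle.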

Repositories

🦀 jamjet

Rust runtime core + Python SDK. The main project — workflow IR, durable executor, MCP client, A2A server, eval harness, and CLI.

8 production-ready workflows: hello-agent, research-agent, code-reviewer, orchestrator, approval-workflow, support-bot, data-pipeline, sql-agent.

Head-to-head benchmarks vs LangGraph, migration guides from LangGraph / CrewAI / raw OpenAI SDK, and a full feature matrix.

Documentation, blog, benchmarks page, and migration guides — built with Astro + Starlight.


By the numbers

- ~1ms framework overhead vs raw LLM call (benchmarked · see results)
- 4 built-in eval scorer types — assertion, latency, cost, LLM-as-judge
- 4 project templates — jamjet init --template <name>
- 8 ready-to-run workflow examples
- Apache 2.0 license — use it, fork it, build on it
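Two of the scorer types listed above (assertion and latency) can be sketched as plain functions run over a JSONL dataset, with a non-zero exit code on any failure for CI. This is an illustrative toy, not the harness's actual API:

```python
import json
import time

def assertion_scorer(row, output):
    # Pass if the expected substring appears in the agent's output.
    return row["expected"] in output

def latency_scorer(elapsed_s, budget_s=1.0):
    # Pass if the call stayed within the latency budget.
    return elapsed_s <= budget_s

def run_evals(jsonl_text, agent):
    """Score each JSONL row; return a CI-style exit code (0 = all passed)."""
    failures = 0
    for line in jsonl_text.strip().splitlines():
        row = json.loads(line)
        start = time.monotonic()
        output = agent(row["input"])
        elapsed = time.monotonic() - start
        if not (assertion_scorer(row, output) and latency_scorer(elapsed)):
            failures += 1
    return 1 if failures else 0

dataset = "\n".join([
    json.dumps({"input": "2+2", "expected": "4"}),
    json.dumps({"input": "capital of France", "expected": "Paris"}),
])
exit_code = run_evals(dataset, lambda q: "4" if "2+2" in q else "Paris")
```

Returning the failure count as a process exit code is what lets an eval suite gate a CI pipeline the same way a unit-test run does.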

Get started

pip install jamjet
jamjet init my-agent --template hello-agent
cd my-agent && jamjet dev

Full quickstart · Examples · Benchmarks
