Graph-native multi-agent fleet for Python. BYO-key. Local-first. Ships with a UI.
Most multi-agent frameworks are heavy and locked to one ecosystem. LangGraph is tied to LangChain. CrewAI is opinionated about roles. AutoGen is conversation-first. bottensor-fleet is a small, graph-native runtime that runs anywhere Python runs, lets you bring your own provider key, and ships with a real UI in the wheel.
```bash
pip install 'bottensor-fleet[search]'
export ANTHROPIC_API_KEY=sk-ant-...
```

Extras: `[search]` adds web tools, `[redis]` adds Redis checkpointing, `[all]` installs everything.
```python
import asyncio

from fleet import Agent, Graph
from fleet.core.state import GraphState
from fleet.providers.client import FleetLLM

llm = FleetLLM("claude", "claude-sonnet-4-6")
researcher = Agent(name="researcher", llm=llm, tools=["web_search", "web_fetch"])

graph = (
    Graph("solo")
    .add_node("researcher", researcher.step)
    .set_entry("researcher")
    .set_exit("researcher")
    .compile()
)

state = asyncio.run(graph.run(GraphState(goal="What is ReasoningBank?")))
print(state.messages[-1].content)
```

```bash
fleet ui
```

Opens a local dashboard at http://localhost:8765 with a live DAG view, per-agent logs, token spend, and run history.
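Sibling nodes in a graph are executed concurrently. A minimal sketch of that fan-out pattern with plain `asyncio` (the node functions here are hypothetical stand-ins, not fleet's API):

```python
import asyncio


async def run_node(name: str, payload: str) -> str:
    # Stand-in for an agent step: each node does some async work.
    await asyncio.sleep(0.01)
    return f"{name}:{payload}"


async def fan_out(payload: str) -> list[str]:
    # Independent nodes are gathered so they run concurrently,
    # the same scheduling idea a graph runtime uses for siblings.
    return await asyncio.gather(
        run_node("researcher", payload),
        run_node("critic", payload),
    )


results = asyncio.run(fan_out("goal"))
print(results)  # ['researcher:goal', 'critic:goal']
```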
| Command | What it does |
|---|---|
| `fleet new <name>` | Scaffold a new graph |
| `fleet run <graph.py>` | Run a graph from a file |
| `fleet ui` | Launch the local dashboard |
| `fleet add-agent` | Append an agent to an existing graph |
| `fleet ls` | List past runs |
| `fleet --version` | Print version |
- Graph-native: DAGs with conditional edges and bounded cycles, executed asynchronously with `asyncio.gather` for parallel fan-out.
- BYO-key: Provider abstraction via polyrt. Anthropic and OpenAI in the default install; MLX, Ollama, and others via polyrt extras.
- Checkpointed: Every run persists to SQLite (default) or Redis (opt-in via the `[redis]` extra).
- Tools and skills: The `@tool` decorator auto-derives JSON schemas from type hints; `@skill` declares higher-level capabilities. Web search and fetch are built in via the `[search]` extra.
- UI in the wheel: No separate Node install for users. The React + Vite frontend is bundled into the published wheel.
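The schema-from-type-hints idea can be illustrated in a few lines. This is a generic sketch built on `inspect` and `typing`, not fleet's actual `@tool` implementation:

```python
import inspect
from typing import get_type_hints

# Minimal mapping from Python annotations to JSON Schema types.
_JSON_TYPES = {str: "string", int: "integer", float: "number", bool: "boolean"}


def tool_schema(fn):
    """Derive a JSON-Schema-style parameter spec from a function's type hints."""
    hints = get_type_hints(fn)
    hints.pop("return", None)
    params = inspect.signature(fn).parameters
    return {
        "name": fn.__name__,
        "description": (fn.__doc__ or "").strip(),
        "parameters": {
            "type": "object",
            "properties": {n: {"type": _JSON_TYPES[t]} for n, t in hints.items()},
            # Parameters without defaults are required.
            "required": [
                n for n in hints if params[n].default is inspect.Parameter.empty
            ],
        },
    }


def web_search(query: str, max_results: int = 5) -> str:
    """Search the web and return a result summary."""
    ...


schema = tool_schema(web_search)
# schema["parameters"]["required"] == ["query"]
```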
| | bottensor-fleet | LangGraph | CrewAI | AutoGen |
|---|---|---|---|---|
| Graph topology | ✅ DAG + cycles | ✅ | ❌ role-based | ❌ conversation |
| Provider-agnostic | ✅ via polyrt | | | |
| Ships with UI | ✅ | ❌ | ❌ | |
| Pip-install size | ~150 KB wheel | heavy | medium | heavy |
| LangChain dependency | ❌ | ✅ required | ❌ | ❌ |
- v0.2 — ReasoningBank (Ouyang et al., ICLR 2026): self-evolving agents that learn from successful and failed trajectories. Memory-aware test-time scaling (MaTTS).
- v0.3 — MLX embedder, sequential MaTTS, distributed scheduler.
- v0.4 — Vector memory backend, cloud deploy templates.
The `python_exec` tool is unsandboxed in v0.1.x. Do not run untrusted graphs. A Docker sandbox lands in v0.2.
Apache-2.0. © 2026 Rama Krishna Bachu.
Built on polyrt. ReasoningBank design (v0.2) follows Ouyang et al., ReasoningBank: Scaling Agent Self-Evolving with Reasoning Memory, ICLR 2026.