Partenit is open-source middleware that sits between your robot's high-level AI planner and its low-level motor control. It guarantees that no action executes without formal validation, and logs every decision with a cryptographic fingerprint for audit.
Install it today to:
- Add a safety guard to your robot in one line of code
- Test your safety policies before deploying to hardware
- Record every robot decision for debugging and incident investigation
- Run safety scenarios in Isaac Sim, ROS2, or a pure-Python simulation
- Measure your robot's safety grade (A–F) against standard scenarios
```
pip install partenit-core partenit-agent-guard partenit-safety-bench \
  partenit-policy-dsl partenit-decision-log partenit-adapters
```

From source:

```
git clone https://github.com/partenit/partenit-infrastructure.git && cd partenit-infrastructure && ./scripts/install.sh
```
```python
from partenit.adapters import MockRobotAdapter
from partenit.agent_guard import GuardedRobot

# Prepare a scene (your adapter gives real sensor data)
adapter = MockRobotAdapter()
adapter.add_human("worker-1", x=1.2, y=0.0)  # worker 1.2 m away

# One line to add full safety guard + decision logging
robot = GuardedRobot(
    adapter=adapter,
    policy_path="examples/warehouse/policies.yaml",
    session_name="my_test",
)

decision = robot.navigate_to(zone="shipping", speed=2.0)
print(decision.allowed)           # True (guard allows, but clamps speed)
print(decision.modified_params)   # {'zone': 'shipping', 'speed': 0.3}
print(decision.risk_score.value)  # 0.64 (human at 1.2 m → high risk)
print(decision.applied_policies)  # ['human_proximity_slowdown']
```

The guard automatically:
- fetches sensor observations from the adapter
- evaluates all policies
- clamps or blocks the action if needed
- logs a signed `DecisionPacket` for audit
Swap `MockRobotAdapter` → `IsaacSimAdapter` / `ROS2Adapter` / `HTTPRobotAdapter` — the guard stays identical.
If you develop or test robots in NVIDIA Isaac Sim, you get the same guard, logging, and grading with one adapter and a small bridge inside the sim. Quick path: start the H1 bridge in Isaac Sim (`examples/isaac_sim/h1_bridge.py`), then run `python examples/isaac_sim/minimal_guard_demo.py` to see one guarded command and the decision in the console. Full guide: `docs/guides/isaac-sim.md`.
| Problem | Partenit tool |
|---|---|
| "Start a new guarded robot project" | `partenit-init my_robot` — scaffold in 5 seconds |
| "My robot stopped — explain why in plain English" | `partenit-why decisions/session_01/` |
| "Show me live decisions as they happen" | `partenit-watch decisions/` — live TUI, refreshes every 500 ms |
| "My robot does something unsafe — why?" | `partenit-log replay decisions/` — visual timeline of every decision |
| "Is my controller safe?" | `partenit-eval run scenario.yaml` — grades A–F with collision/near-miss metrics |
| "Which policy fires at distance 1.2 m?" | `partenit-policy sim --human-distance 1.2 --policy-path policies/` |
| "How does v2 policy differ from v1?" | `partenit-policy diff policies/v1.yaml policies/v2.yaml` |
| "Validate policies on every PR automatically" | GitHub Action — 3 lines of YAML |
| "I need to run a scenario in Isaac Sim" | Isaac Sim guide + `IsaacSimAdapter` |
| "I want to compare two controllers" | `partenit-eval run scenario.yaml --compare baseline.yaml v2.yaml` |
```python
from partenit.agent_guard import GuardedRobot

robot = GuardedRobot(adapter, policy_path="policies/", session_name="test")
robot.navigate_to(zone="A3", speed=2.0)  # auto-guarded, auto-logged
```

The guard intercepts every action, evaluates all policies, clamps parameters if needed, and stores a signed decision packet. Zero changes to your robot code.
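The intercept pattern itself is simple. A minimal sketch of the idea (illustrative only; `TinyGuard`, `DummyRobot`, and the single clamp rule are hypothetical stand-ins, not Partenit classes):

```python
# Illustrative sketch of the intercept pattern, NOT Partenit's implementation:
# a proxy that checks a rule before forwarding a call to the real robot.
class TinyGuard:
    def __init__(self, robot, max_speed):
        self._robot = robot
        self._max_speed = max_speed  # hypothetical clamp threshold

    def navigate_to(self, zone, speed):
        # Clamp the parameter instead of rejecting the whole action
        safe_speed = min(speed, self._max_speed)
        decision = {"allowed": True, "speed": safe_speed,
                    "modified": safe_speed != speed}
        self._robot.navigate_to(zone, safe_speed)  # forward the guarded call
        return decision

class DummyRobot:
    def navigate_to(self, zone, speed):
        pass  # a real adapter would move hardware here

guard = TinyGuard(DummyRobot(), max_speed=0.3)
print(guard.navigate_to("shipping", 2.0))
# {'allowed': True, 'speed': 0.3, 'modified': True}
```

The caller keeps using the same method names; only the object in front of the robot changes.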
```
partenit-eval run examples/benchmarks/human_crossing_path.yaml \
  --report eval.html
```

Compare baseline (no guard) vs guarded controller on the same scenario:
```python
from partenit.safety_bench.eval import EvalRunner, ControllerConfig

runner = EvalRunner()
report = runner.run_scenario(
    "examples/benchmarks/human_crossing_path.yaml",
    controllers=[
        ControllerConfig("baseline", policy_paths=[]),
        ControllerConfig("guarded", policy_paths=["policies/warehouse.yaml"]),
    ],
)
print(report.summary_table())
# baseline  F  safety=0.12  efficiency=0.30  overall=0.15
# guarded   B  safety=0.92  efficiency=0.78  overall=0.87
```

Metrics: collision rate, near-miss rate, minimum human distance, task completion, unsafe acceptance rate, AI quality — all combined into a weighted grade (A–F). The HTML report opens in any browser; no server required.
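As a rough illustration of how such a weighted grade could be combined (the weights and letter cut-offs below are invented for the sketch, not Partenit's actual values):

```python
# Hedged sketch of combining component scores into one A–F grade.
# The weights and thresholds are illustrative assumptions.
def overall_score(safety, efficiency, ai_quality,
                  weights=(0.6, 0.25, 0.15)):
    ws, we, wa = weights
    return ws * safety + we * efficiency + wa * ai_quality

def letter_grade(score):
    # Hypothetical cut-offs for the A–F scale
    for cutoff, grade in [(0.9, "A"), (0.8, "B"), (0.7, "C"), (0.6, "D")]:
        if score >= cutoff:
            return grade
    return "F"

score = overall_score(safety=0.92, efficiency=0.78, ai_quality=0.85)
print(letter_grade(score))  # B under these illustrative weights
```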
```
partenit-why decisions/session_01/
```

```
╭─ Decision Explanation ─────────────────────────────╮
│ Action : navigate_to(zone='C2', speed=2.0)         │
│ Time   : 2026-03-08 14:23:41 UTC                   │
│ Status : ● BLOCKED      Risk score: 0.92           │
│                                                    │
│ Why BLOCKED:                                       │
│   → Rule fired: emergency_stop                     │
│                                                    │
│ Risk contributors:                                 │
│   human_distance  0.85  ████████████               │
│   speed           0.45  ███████                    │
│                                                    │
│ Fingerprint: ✓ VALID                               │
╰────────────────────────────────────────────────────╯
```
Works with a single JSON file, JSONL log, or a decisions directory.
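For ad-hoc analysis you can also scan a log yourself. A sketch, assuming packets carry `status`, `action`, and `risk_score` fields (an assumption about the layout, not a documented schema):

```python
# Filter a JSONL-style decision log for blocked actions.
# The field names are assumed for illustration.
import json

raw = [
    '{"status": "ALLOWED", "action": "navigate_to", "risk_score": 0.12}',
    '{"status": "BLOCKED", "action": "navigate_to", "risk_score": 0.92}',
]
packets = [json.loads(line) for line in raw]
blocked = [p for p in packets if p["status"] == "BLOCKED"]
print(len(blocked), blocked[0]["risk_score"])  # 1 0.92
```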
```
partenit-watch decisions/
```

```
┌─ Partenit Guard Monitor ─── session_01 ─── total=24 blocked=1 modified=6 ─┐
│ Time      Status    Action        Risk  Policies / Reason                 │
│ 14:23:41  BLOCKED   navigate_to   0.92  emergency_stop                    │
│ 14:23:39  MODIFIED  navigate_to   0.64  human_proximity_slowdown          │
│ 14:23:35  ALLOWED   navigate_to   0.12                                    │
└───────────────────────────────────────────────────────────────────────────┘
```
Refreshes every 500 ms as new decisions arrive. Ctrl+C to stop.
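Conceptually, a watcher like this can be built by polling the directory for packet files it has not seen yet. A toy sketch of that idea (illustrative only, not partenit-watch's real mechanism):

```python
# Toy poll-based watcher: each call returns only the packets that
# appeared since the previous call. A real TUI would sleep(0.5)
# between polls to match the 500 ms refresh.
import json, pathlib, tempfile

decisions_dir = pathlib.Path(tempfile.mkdtemp())
(decisions_dir / "0001.json").write_text(
    json.dumps({"status": "MODIFIED", "risk_score": 0.64}))

seen = set()

def poll_once(directory):
    """Return packets that appeared since the last poll."""
    fresh = []
    for path in sorted(directory.glob("*.json")):
        if path.name not in seen:
            seen.add(path.name)
            fresh.append(json.loads(path.read_text()))
    return fresh

first = poll_once(decisions_dir)
second = poll_once(decisions_dir)
print(first)   # the packet written above
print(second)  # [] — nothing new since the first poll
```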
```
partenit-log replay decisions/my_test/                 # rich terminal timeline
partenit-log replay decisions/ --output timeline.html  # shareable HTML
```

```
Decision Replay — my_test (12 packets)
──────────────────────────────────────────
0.0s  [ALLOWED ]  navigate_to  speed=1.5  risk=0.21
2.1s  [MODIFIED]  navigate_to  speed=0.3  risk=0.64  → human_proximity_slowdown
3.0s  [BLOCKED ]  navigate_to             risk=0.91  → emergency_stop
```
```
partenit-policy sim \
  --action navigate_to \
  --speed 2.0 \
  --human-distance 1.2 \
  --policy-path examples/warehouse/policies.yaml
```

Shows exactly which rules fire, what parameters are clamped, and the final allowed/blocked result. No hardware, no simulation — instant feedback.
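The same dry run can be approximated in a few lines of plain Python. This sketch hand-rolls the `human_proximity_slowdown` threshold rule; the `simulate` helper is a hypothetical illustration, not the real policy engine:

```python
# Toy evaluation of a threshold-and-clamp rule, mirroring the
# human_proximity_slowdown policy. Illustrative only.
rule = {"metric": "human.distance", "operator": "less_than",
        "value": 1.5, "clamp": {"parameter": "speed", "value": 0.3}}

def simulate(rule, human_distance, params):
    # Does the condition fire for this observation?
    fired = rule["operator"] == "less_than" and human_distance < rule["value"]
    out = dict(params)
    if fired:
        target = rule["clamp"]["parameter"]
        out[target] = min(out[target], rule["clamp"]["value"])  # clamp, don't block
    return fired, out

fired, result = simulate(rule, human_distance=1.2,
                         params={"action": "navigate_to", "speed": 2.0})
print(fired, result["speed"])  # True 0.3
```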
```
partenit-policy diff policies/v1.yaml policies/v2.yaml
partenit-policy diff policies/v1/ policies/v2/ \
  --scenario examples/benchmarks/human_crossing_path.yaml
```

```
Policy diff: policies/v1.yaml → policies/v2.yaml
─────────────────────────────────────────────────
+ human_proximity_slowdown  (added)
~ emergency_stop            (changed: threshold 0.5 → 0.8)
= speed_limit_zone_a        (unchanged)

Scenario impact (human_crossing_path):
  Metric           v1     v2     Δ
  block_rate       0.10   0.30   +0.20
  near_miss_rate   0.40   0.15   -0.25
```
Immediately see what changed between policy versions and how it affects safety outcomes.
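The structural half of such a diff is easy to picture: classify each rule id as added, removed, changed, or unchanged. A sketch over simplified rule dicts (stand-ins for parsed YAML, not Partenit's diff implementation):

```python
# Classify rule ids across two policy versions.
v1 = {"emergency_stop": {"threshold": 0.5},
      "speed_limit_zone_a": {"max_velocity": 1.0}}
v2 = {"emergency_stop": {"threshold": 0.8},
      "speed_limit_zone_a": {"max_velocity": 1.0},
      "human_proximity_slowdown": {"max_velocity": 0.3}}

def diff_policies(old, new):
    result = {}
    for rule_id in sorted(old.keys() | new.keys()):
        if rule_id not in old:
            result[rule_id] = "added"
        elif rule_id not in new:
            result[rule_id] = "removed"
        elif old[rule_id] != new[rule_id]:
            result[rule_id] = "changed"
        else:
            result[rule_id] = "unchanged"
    return result

print(diff_policies(v1, v2))
# {'emergency_stop': 'changed', 'human_proximity_slowdown': 'added',
#  'speed_limit_zone_a': 'unchanged'}
```

The scenario-impact half would additionally re-run the benchmark under each version, as the CLI output above shows.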
```
partenit-scenario run examples/benchmarks/human_crossing_path.yaml \
  --with-guard --without-guard \
  --report report.html
```

Built-in scenarios:

- `human_crossing_path` — worker crosses robot trajectory
- `blind_spot` — human in low-confidence detection zone
- `llm_unsafe_command` — LLM requests unsafe speed near a human
- `sensor_degradation` — trust degrades mid-mission; conservative fallback
- `policy_conflict_determinism` — priority-based conflict resolution (100% deterministic)
The guard and policies are identical across all platforms — only the adapter changes:
| Platform | Adapter |
|---|---|
| Pure Python (no hardware) | `MockRobotAdapter` |
| Any HTTP robot | `HTTPRobotAdapter` |
| ROS2 | `ROS2Adapter` |
| NVIDIA Isaac Sim | `IsaacSimAdapter` |
| Unitree robots | `UnitreeAdapter` |
| Gazebo | `GazeboAdapter` |
| LLM tool calls | `LLMToolCallGuard` |
| MoveIt / Open RMF | Planned — stubs only, not functional yet |
```python
# Development / simulation
adapter = MockRobotAdapter()

# Real ROS2 robot — only this line changes
adapter = ROS2Adapter(node_name="partenit_guard")

# Isaac Sim (H1 bridge example uses port 8000)
adapter = IsaacSimAdapter(base_url="http://localhost:8000")

# Everything below is identical regardless of adapter
robot = GuardedRobot(adapter, policy_path="policies/warehouse.yaml")
decision = robot.navigate_to(zone="shipping", speed=1.8)
```

A policy rule is declared in the YAML DSL:

```yaml
rule_id: human_proximity_slowdown
name: "Human Proximity Speed Limit"
priority: safety_critical
provenance: "ISO 3691-4 section 5.2"
condition:
  type: threshold
  metric: human.distance
  operator: less_than
  value: 1.5
  unit: meters
action:
  type: clamp
  parameter: max_velocity
  value: 0.3
  unit: m/s
```

Validate, bundle, and check for conflicts:
```
partenit-policy validate ./policies/
partenit-policy check-conflicts ./policies/
partenit-policy bundle ./policies/ --output bundle.json
```

Every action — allowed, clamped, or blocked — produces a signed `DecisionPacket`:
```python
from partenit.decision_log import DecisionLogger

log = DecisionLogger(storage_dir="decisions/session_01")
packet = log.create_packet(
    action_requested="navigate_to",
    action_params={"zone": "shipping", "speed": 2.0},
    guard_decision=decision,
)
print(log.verify_packet(packet))  # True — SHA256 fingerprint verified
```

Verify integrity after the fact:

```
partenit-log verify decisions/session_01/
partenit-log inspect <packet_id>
```

```
partenit/
├── packages/
│   ├── core/          # Shared types and contracts (Pydantic v2)
│   ├── policy-dsl/    # YAML policy language + parser + validator
│   ├── trust-engine/  # Sensor/object trust degradation model
│   ├── agent-guard/   # GuardedRobot + action safety middleware
│   ├── safety-bench/  # Simulation sandbox + scenario runner + eval
│   ├── decision-log/  # DecisionPacket format + storage + verification
│   └── adapters/      # Robot adapters: Mock, ROS2, HTTP, Isaac Sim, …
├── analyzer/          # Web UI: FastAPI backend + React frontend
├── schemas/           # JSON Schemas and OpenAPI spec
├── docs/              # Guides and reference documentation
└── examples/          # Runnable demos
```
Open in this repository:
- Policy DSL + policy engine
- Basic risk scoring (distance + velocity + trust)
- Mock / HTTP / ROS2 / Isaac Sim / Unitree / Gazebo adapters
- Safety bench + all built-in scenarios
- Decision log + fingerprint verification
- Analyzer web UI
- JSON Schemas for `DecisionPacket` and `DecisionFingerprint`
- All examples and documentation
Enterprise (closed, not in this repo):
- Conformal prediction with coverage guarantees
- Plan-conditional risk scoring
- GraphRAG policy retrieval
- Formal verification (CBF / STL)
- Fleet coordination and policy broadcast
- Cloud sync and managed storage
- Compliance export tooling (ISO, audit documents)
- Policy authoring UI
Add automatic safety checks to any robot project in 3 lines:
```yaml
# .github/workflows/safety.yml
name: Safety Check
on:
  push:
    paths: ['policies/**']
  pull_request:
    paths: ['policies/**']
jobs:
  safety:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: GradeBuilderSL/partenit@main
        with:
          policy-path: policies/
```

Every pull request that modifies policies will automatically:
- Validate YAML syntax and policy semantics
- Detect conflicting rules (PR fails if conflicts are found)
Add `scenario:` to also run a safety simulation:

```yaml
- uses: GradeBuilderSL/partenit@main
  with:
    policy-path: policies/
    scenario: tests/scenarios/human_crossing.yaml
```

The simulation runs your scenario with the guard enabled and without it, then uploads an HTML safety report as a workflow artifact.
Action inputs:
| Input | Default | Description |
|---|---|---|
| `policy-path` | `policies/` | Path to policy file or directory |
| `scenario` | — | Scenario YAML for safety simulation (optional) |
| `python-version` | `3.11` | Python version |
| `fail-on-conflict` | `true` | Exit 1 if conflicting policies are found |
| `report-name` | `partenit-safety-report` | Artifact name for the HTML report |
See `CONTRIBUTING.md` for setup, architecture rules, and the PR process. See `docs/` for full guides on Isaac Sim, ROS2, custom robots, and writing policies. See `PRODUCTION_READINESS.md` for a release checklist and the two main user paths (no hardware + simulation).
Made with love for the future at Partenit.

