TinyClaw is an edge fact node runtime for turning raw device signals into stable, observable business facts.
The current first-class profile is occupancy-camera-v1: a camera-driven
occupancy node that detects people, applies debounce and cooldown, models device
health, and exposes the result over MCP.
TinyClaw is not just a camera wrapper and not just an MCP adapter.
It sits between the physical edge world and northbound protocol consumers:
- Southbound: camera frames, detector outputs, device timing, runtime health
- Middle layer: semantic convergence, state transitions, heartbeat, recent events
- Northbound: MCP tools, resources, notifications, stdio, and streamable HTTP
In short:
- MCP is the northbound protocol
- TinyClaw is the semantic runtime that decides what facts are worth publishing
Current default profile: `occupancy-camera-v1`
Current published facts:
- `occupancy.changed`
- `device.heartbeat`
- `device.health_changed`
Current MCP surface:
- Tools:
  - `tinyclaw.get_occupancy_status`
  - `tinyclaw.capture_frame`
  - `tinyclaw.get_config`
  - `tinyclaw.get_device_health`
  - `tinyclaw.set_config`
  - `tinyclaw.reset_state`
- Resources:
  - `tinyclaw://scene/{scene_id}/state`
  - `tinyclaw://scene/{scene_id}/stats`
  - `tinyclaw://scene/{scene_id}/events/recent`
  - `tinyclaw://scene/{scene_id}/health`
- Notifications:
  - `occupancy.changed`
  - `device.heartbeat`
  - `device.health_changed`
Compatibility:
`tinyclaw.get_occupancy` is still accepted as an alias for `tinyclaw.get_occupancy_status`.
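For orientation, an MCP tool invocation is a JSON-RPC 2.0 request with method `tools/call`. The sketch below shows what a call to `tinyclaw.get_occupancy_status` might look like on the wire; the `scene_id` argument is an assumption for illustration, not a documented parameter of the tool.

```python
import json

# JSON-RPC 2.0 request using the MCP "tools/call" method.
# The "scene_id" argument is hypothetical; the real tool schema may differ.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "tinyclaw.get_occupancy_status",
        "arguments": {"scene_id": "lobby"},
    },
}

# Serialize exactly as it would be sent over stdio or streamable HTTP.
print(json.dumps(request, indent=2))
```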
TinyClaw V0.3 is structured in five layers:
1. **Input Plugin**: brings raw world signals into the node. The current concrete implementation is a camera frame source.
2. **Semantic Profile**: owns profile-specific convergence. For `occupancy-camera-v1`, that includes detection-to-candidate translation, debounce, cooldown, transition generation, and profile snapshots.
3. **Node Runtime**: owns lifecycle, health, heartbeat, runtime snapshots, and loop execution.
4. **TinyClaw Server**: owns MCP tools/resources/notifications, recent events, and publishing.
5. **Transport Adapter**: maps the MCP server onto `stdio` or streamable HTTP.
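The layer boundaries can be sketched as minimal Python interfaces. The class and method names here are illustrative only, not TinyClaw's actual APIs:

```python
from typing import Any, Protocol


class InputPlugin(Protocol):
    """Brings raw world signals into the node (currently camera frames)."""
    def read_signal(self) -> Any: ...


class SemanticProfile(Protocol):
    """Converges raw signals into candidate facts: debounce, cooldown, transitions."""
    def process(self, signal: Any) -> list[dict]: ...


class NodeRuntime(Protocol):
    """Owns lifecycle, health, heartbeat, and loop execution."""
    def tick(self) -> None: ...


class TinyClawServer(Protocol):
    """Owns MCP tools/resources/notifications, recent events, and publishing."""
    def publish(self, event: dict) -> None: ...


class TransportAdapter(Protocol):
    """Maps the MCP server onto stdio or streamable HTTP."""
    def serve(self) -> None: ...
```

The point of the split is that each layer can be swapped independently, for example a new input plugin or transport without touching profile semantics.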
More detail:
A simple device MCP adapter usually exposes raw actions like:
- capture a frame
- run a detector
- read device status
TinyClaw goes one level higher. It publishes stabilized facts:
- occupancy transitions
- device heartbeat
- health transitions
- recent event context
That means it does more than protocol translation:
- state memory
- debounce/cooldown
- health modeling
- event buffering
- runtime configuration
- multi-transport exposure
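To make the debounce/cooldown idea concrete, here is a hedged sketch of such a filter. The thresholds, field names, and event shape are invented for illustration and do not mirror TinyClaw's actual implementation:

```python
from dataclasses import dataclass


@dataclass
class OccupancyStabilizer:
    """Sketch: turn noisy per-frame detections into stable occupancy transitions.

    A change must persist for `debounce_frames` consecutive frames before a
    transition is emitted; after each transition, further changes are
    suppressed for `cooldown_frames` frames.
    """

    debounce_frames: int = 3
    cooldown_frames: int = 5
    occupied: bool = False
    _streak: int = 0
    _cooldown: int = 0

    def update(self, person_detected: bool):
        """Feed one frame's raw detection; return a transition event or None."""
        if self._cooldown > 0:
            self._cooldown -= 1
            return None
        if person_detected != self.occupied:
            self._streak += 1
        else:
            self._streak = 0
        if self._streak >= self.debounce_frames:
            self.occupied = person_detected
            self._streak = 0
            self._cooldown = self.cooldown_frames
            return {"type": "occupancy.changed", "occupied": self.occupied}
        return None
```

With `debounce_frames=2`, for example, two consecutive positive detections are needed before an `occupancy.changed` event is returned; a single-frame flicker returns `None` and publishes nothing.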
```bash
cd /Users/xiaoyatao/tinyclaw
.venv/bin/pip install -e .[dev]
```

After editable install, the project no longer depends on `PYTHONPATH=src`.
Start the MCP server over stdio:

```bash
tinyclaw-mcp --config config/occupancy.yaml
```

Start the MCP server over streamable HTTP:

```bash
tinyclaw-mcp --config config/occupancy.yaml --transport http --host 127.0.0.1 --port 8000
```

Run the local demo loop without MCP:

```bash
tinyclaw --demo --config config/occupancy.yaml
```

TinyClaw currently ships with three demo layers:
- `demo-e2e`: pure deterministic internal driving for service semantics.
- `demo-frames`: semi-real visual input via fixture frames plus a mock detector.
- `demo-real-camera`: live device demonstration with a real camera and optional fault injection.
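Each demo can dump its events to a JSON file via `--dump-events`. Assuming the dump is a JSON list of event objects with a `type` field (an assumption about the format, not a documented contract), a quick summary script might look like:

```python
import json
from collections import Counter


def summarize_events(path: str) -> Counter:
    """Count dumped events by type.

    Assumes the dump file is a JSON list of dicts, each with a "type" key;
    the actual --dump-events format may differ.
    """
    with open(path) as f:
        events = json.load(f)
    return Counter(evt.get("type", "unknown") for evt in events)
```

Running it against `/tmp/tinyclaw_e2e_events.json` after a demo would show how many `occupancy.changed` transitions versus `device.heartbeat` events the run produced.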
```bash
.venv/bin/tinyclaw demo-e2e \
    --config config/occupancy.yaml \
    --scenario fixtures/e2e_happy_path.json \
    --time-scale 0.01 \
    --heartbeat-interval 2 \
    --dump-events /tmp/tinyclaw_e2e_events.json
```

```bash
.venv/bin/tinyclaw demo-frames \
    --config config/occupancy.yaml \
    --frames fixtures/frames/happy_path \
    --speed 100 \
    --heartbeat-interval 2 \
    --dump-events /tmp/tinyclaw_frame_events.json
```

```bash
.venv/bin/tinyclaw demo-real-camera \
    --config config/demo_real_camera.yaml \
    --duration 30 \
    --dump-events /tmp/tinyclaw_real_events.json \
    --heartbeat-interval 2 \
    --simulate-fault camera_timeout \
    --no-display
```

Real-device notes:
Run tests:

```bash
pytest -q
```

Run lint:

```bash
ruff check .
```

Minimal remote MCP client smoke example:

```bash
python examples/http_client_smoke.py
```

What is already done:
- first-class `occupancy-camera-v1` profile
- runtime/server/transport split
- `stdio` and streamable HTTP transports
- health model and heartbeat
- recent events buffer
- deterministic, frame-based, and real-camera demos
What is not done yet:
- multi-profile runtime
- multi-device orchestration
- package/manifest-based declarative assembly
- long-term persistence
- cloud registry or package marketplace
The next planned product shape is a declarative node assembly platform:
- `device_package.yaml`
- `node_manifest.yaml`
- registry
- assembler
- validator
That work builds on the current V0.3 runtime split instead of replacing it.