🦾 ROSClaw

Software-Defined Embodied AI: The Operating System for Robotics.


English • 中文文档 • Quick Start • Architecture • Discord


"Stop building custom state machines. Start building intelligent agents."

ROSClaw is a visionary Embodied AI Operating System framework inspired by OpenClaw and the Entangled Action Pairs (EAP) paradigm. It acts as the ultimate bridge between Large Language Models (LLMs) (low-frequency reasoning) and the Robot Operating System (ROS 2) / VLA Policies (high-frequency physical control).

With ROSClaw, you can write your agent logic once and deploy it across humanoid, quadruped, or wheeled robots seamlessly.


✨ Core Features: The "Four Unifications"

  1. 🤖 Cross-Embodiment (Semantic-HAL): Powered by e-URDF. Map high-level semantic intents (e.g., "Pick up the apple") to local hardware dynamics automatically. Your code runs on ANY robot morphology without modification.
  2. 🔌 Cross-Tool (Embodied MCP): Native Model Context Protocol (MCP) integration. Every ROS Topic, Service, and Action is instantly translated into standard JSON schemas for LLMs to consume.
  3. 🧠 Cross-Algorithm (Brain-Cerebellum Router): The asynchronous bus linking 1Hz LLM reasoning with 100Hz+ Vision-Language-Action (VLA) control policies. Soft-real-time execution guaranteed.
  4. 💾 Cross-Format (Unified Trajectory Engine): OS-level data interception. ROSClaw automatically time-syncs and packages heterogeneous sensor data into standard RLDS / HDF5 formats for Open X-Embodiment training.
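To make the Cross-Tool idea (point 2) concrete, here is a minimal standalone sketch of how a ROS 2 interface could be described as an MCP-style JSON-schema tool for an LLM. This is an illustration only, not the ROSClaw API: the `navigate_to_pose` service and its fields are hypothetical, and the real translation layer may differ.

```python
import json

def ros_interface_to_mcp_tool(name: str, description: str, fields: dict) -> dict:
    """Describe a ROS interface as an MCP-style JSON-schema tool definition.

    `fields` maps request field names to (json_type, doc) pairs.
    """
    return {
        "name": name,
        "description": description,
        "inputSchema": {
            "type": "object",
            "properties": {
                fname: {"type": jtype, "description": doc}
                for fname, (jtype, doc) in fields.items()
            },
            "required": list(fields),
        },
    }

# Hypothetical navigation service exposed as an LLM-consumable tool.
tool = ros_interface_to_mcp_tool(
    name="navigate_to_pose",
    description="Drive the robot base to a named location.",
    fields={"location": ("string", "Semantic label, e.g. 'kitchen'")},
)
print(json.dumps(tool, indent=2))
```

An LLM that speaks MCP can then discover and invoke the interface without knowing anything about ROS message types.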

🔄 The Built-in Data Flywheel (Auto-EAP)

When a physical action fails, ROSClaw intercepts the error, triggers an inverse recovery policy to reset the environment, and logs the negative sample. Your robot gets smarter every single day, completely autonomously.
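The intercept-recover-log loop described above can be sketched in a few lines of plain Python. All names here (`execute_with_recovery`, `grasp_apple`, `open_gripper`) are hypothetical stand-ins for illustration, not ROSClaw internals:

```python
from dataclasses import dataclass, field

@dataclass
class EpisodeLog:
    # Collected (action, success) records; failures become negative
    # samples for later policy training.
    samples: list = field(default_factory=list)

def execute_with_recovery(action, recover, log: EpisodeLog):
    """Run `action`; on failure, run the inverse `recover` policy to
    reset the environment and record a negative sample."""
    try:
        result = action()
        log.samples.append({"action": action.__name__, "success": True})
        return result
    except RuntimeError as err:
        recover()  # inverse recovery policy resets the scene
        log.samples.append(
            {"action": action.__name__, "success": False, "error": str(err)}
        )
        return None

def grasp_apple():   # hypothetical skill that fails this episode
    raise RuntimeError("grasp slipped")

def open_gripper():  # hypothetical inverse/reset policy
    pass

log = EpisodeLog()
execute_with_recovery(grasp_apple, open_gripper, log)
```

The key design choice is that the error never propagates to the planner: recovery and logging happen at the middleware layer, so every failure is captured as training data for free.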


πŸ— Architecture

ROSClaw operates on a dual-node architecture, elegantly decoupling cognitive planning from physical execution:

```mermaid
graph TD
    subgraph Brain [Cognitive Layer - 1Hz]
        LLM[LLM / OpenClaw Agent]
        TMA[Task Management Agent]
    end

    subgraph ROSClawOS[ROSClaw Semantic Middleware]
        MCP[Embodied MCP Router]
        HAL[Semantic-HAL / e-URDF]
        SDA[System Diagnostic Agent]
        DF[Data Flywheel Engine]
    end

    subgraph Cerebellum [Execution Layer - 100Hz+]
        VLA[VLA Policy Plugins e.g., π0.5, OpenVLA]
        ROS2[ROS 2 DDS Network]
        HW[Physical Robot Hardware]
    end

    LLM <-->|JSON Schema| MCP
    TMA --> MCP
    MCP <--> HAL
    HAL <-->|Semantic Intent| VLA
    VLA <-->|Torque/Velocity| ROS2
    ROS2 <--> HW

    SDA -.->|Monitor & Auto-Recover| ROS2
    HW -.->|Sensor Stream| DF
    DF -.->|Auto-Package RLDS| Storage[(Training Dataset)]
```
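The core of the Brain-Cerebellum decoupling is a latest-wins channel: the slow cognitive layer publishes semantic goals, and the fast control loop always acts on the most recent one without ever blocking. The following self-contained sketch illustrates that pattern with a plain queue and thread; the goal strings and rates are illustrative, not the actual ROSClaw bus:

```python
import queue
import threading
import time

# Latest-wins goal channel: the 1Hz "brain" publishes semantic goals,
# the 100Hz "cerebellum" loop always acts on the most recent one.
goal_box = queue.Queue(maxsize=1)
executed = []

def publish_goal(goal: str) -> None:
    # Drop the stale goal, if any, so the control loop never blocks.
    try:
        goal_box.get_nowait()
    except queue.Empty:
        pass
    goal_box.put(goal)

def control_loop(steps: int, hz: float = 100.0) -> None:
    goal = None
    for _ in range(steps):
        try:
            goal = goal_box.get_nowait()  # non-blocking: keep last goal
        except queue.Empty:
            pass
        if goal is not None:
            executed.append(goal)         # stand-in for one VLA policy step
        time.sleep(1.0 / hz)

publish_goal("reach shelf")
t = threading.Thread(target=control_loop, args=(30,))
t.start()
publish_goal("grasp apple")  # brain replans mid-episode
t.join()
```

Because the channel holds at most one goal, a mid-episode replan simply overwrites the stale intent; the high-frequency loop never waits on the LLM.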

🚀 Quick Start

1. Installation

Zero configuration. Native ROS 2 compatibility. Ready in 60 seconds.

```bash
# Ensure you have ROS 2 (Humble/Iron/Jazzy) installed and sourced
curl -sSL https://rosclaw.io/install.sh | bash
```

2. Write Once. Embody Anywhere.

Create a Python file agent.py:

```python
from rosclaw import EmbodiedAgent

# 1. Connect to ANY robot running the ROSClaw OS kernel
agent = EmbodiedAgent.connect("robot_ip")

# 2. Define a semantic task (No hardware-specific APIs needed!)
task = "Navigate to the kitchen, check if the table is clean. If not, pick up the trash."

# 3. Execute with OS-level safety and data collection
# Brain (LLM) plans -> Cerebellum (VLA) executes -> OS logs RLDS data
agent.execute(
    task,
    auto_recovery=True,   # Enable Auto-EAP error recovery
    record_rlds=True      # Silently build your training dataset
)
```

3. Run your Agent

```bash
ros2 run rosclaw core_agent --script agent.py
```

🗺 Roadmap

  • [x] v0.1: Embodied MCP Protocol & Dynamic ROS Tooling.
  • [x] v0.2: Asynchronous Brain-Cerebellum Routing.
  • [ ] v0.5: Auto-EAP Recovery & RLDS Data Flywheel Integration.
  • [ ] v1.0: Full e-URDF Support for Cross-Morphology (Humanoid & Quadruped) validation.

🤝 Contributing

We are building the open standard for the future of robotics. Contributions from researchers, ROS engineers, and AI developers are highly welcome!

  1. Fork the Project
  2. Create your Feature Branch (git checkout -b feature/AmazingFeature)
  3. Commit your Changes (git commit -m 'Add some AmazingFeature')
  4. Push to the Branch (git push origin feature/AmazingFeature)
  5. Open a Pull Request

Please read our CONTRIBUTING.md for details on our code of conduct and development process.


📜 Acknowledgements & License

  • OpenClaw: For the groundbreaking digital AI Agent framework and MCP architecture.
  • RoboClaw Paper: For the inspiration on Entangled Action Pairs (EAP) and autonomous data collection loops.
  • ROS 2 Community: For providing the robust DDS middleware that powers the physical world.

Distributed under the Apache 2.0 License. See LICENSE for more information.

Defined by the Open Source Community. Built for the Physical World.
rosclaw.io
