TidyVerse is a research project on long-horizon household manipulation with dexterous mobile robots.
The goal is to enable robots with multi-finger hands and a mobile base to perform room-scale tidying tasks—such as arranging shoes and folding clothes—by learning from unstructured human videos.
For a deeper understanding of the system, see the docs/ folder:
- `docs/dragonbot_control_architecture.md` — full control stack architecture
- `docs/teleop_testing_guide.md` — step-by-step teleoperation testing guide
Dragonbot v0.2 is our custom dexterous mobile manipulation platform combining a holonomic base, a YAM 6-DOF arm, and an Aero Hand Open dexterous hand.
URDF models and MuJoCo scene files for visualization and simulation are available in simulation/:
- `simulation/dragonbot/` — full Dragonbot scene
- `simulation/yam_with_hand/` — YAM arm + hand combined scene
- `simulation/assets/` — STL meshes for the base, arm, and gripper
A complete ROS2-based communication and control stack is available in ros2/, covering:
- Base: SpaceMouse teleoperation (`dragonbot_teleop`)
- Arm: Quest 3 wrist-pose IK teleoperation (`dragonbot_teleop`)
- Hand: Quest 3 dexterous retargeting to 16-DOF joint commands (`dragonbot_teleop`)
- Hardware drivers: Aero Hand Open node and message types (`aero_hand_open`, `aero_hand_open_msgs`)
- Simulation viewers: MuJoCo passive viewers for base + arm + hand
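The hand retargeting step maps tracked human finger poses to the 16-DOF hand. A minimal sketch of the simplest such scheme — a per-joint affine map with clamping to joint limits; the joint count matches the hand above, but the limit values and scale factors here are illustrative assumptions, not the repo's actual mapping:

```python
import numpy as np

# Illustrative joint limits for a 16-DOF hand (radians); the real
# Aero Hand Open limits live in its URDF and will differ.
N_JOINTS = 16
JOINT_MIN = np.zeros(N_JOINTS)
JOINT_MAX = np.full(N_JOINTS, 1.6)

def retarget(human_angles, scale: float = 1.0, offset: float = 0.0) -> np.ndarray:
    """Map tracked human finger-joint angles (radians) to robot joint commands.

    Per-joint affine map followed by clamping -- the simplest retargeting
    scheme; optimization-based retargeting would instead minimize fingertip
    position error.
    """
    cmd = scale * np.asarray(human_angles, dtype=float) + offset
    return np.clip(cmd, JOINT_MIN, JOINT_MAX)

# Example: a uniformly half-closed human hand pose
human = np.full(N_JOINTS, 0.8)
print(retarget(human))  # 16 values, each within [0, 1.6]
```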
The dexterous hand hardware and ROS2 drivers are based on TetherIA/aero-hand-open. We thank the TetherIA team for open-sourcing the Aero Hand Open platform.
We are adapting NVIDIA GR00T-N1.6 as the visuomotor control policy for Dragonbot. Our ongoing fork with Dragonbot embodiment configs, a ROS2 policy bridge, and deployment guides is available at:
👉 dragonlong/Isaac-GR00T (see third_party/Isaac-GR00T/)
Key additions in the fork:
- `examples/tidyverse-hand/` — ROS2 policy bridge, FastDDS config, deployment guide
- `gr00t/configs/data/embodiment_configs.py` — Dragonbot embodiment definition
- `dragonbot_finetune.md` — step-by-step fine-tuning guide
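An embodiment definition essentially declares the robot's observation and action spaces so the policy knows what it is reading and commanding. The sketch below only illustrates the kind of information such a definition carries, using a plain dataclass — the real Dragonbot config uses GR00T's own config classes in `gr00t/configs/data/embodiment_configs.py`, and the field and camera names here are assumptions:

```python
from dataclasses import dataclass, field

# Plain-Python illustration of an embodiment definition; NOT GR00T's actual
# config API. Field names and camera names are illustrative.
@dataclass
class EmbodimentSketch:
    name: str
    cameras: list = field(default_factory=list)  # observation image streams
    state_dim: int = 0                           # proprioceptive state size
    action_dim: int = 0                          # commanded DOFs

dragonbot = EmbodimentSketch(
    name="dragonbot_v0_2",
    cameras=["ego_view"],       # camera name is an assumption
    state_dim=3 + 6 + 16,       # holonomic base + 6-DOF arm + 16-DOF hand
    action_dim=3 + 6 + 16,
)
print(dragonbot.action_dim)  # 25
```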
TidyVerse focuses on three key challenges:
- Long-horizon tasks involving multiple objects and sequential subgoals
- Dexterous manipulation with multi-finger robot hands
- Human video supervision without paired human–robot demonstrations
We introduce TidyMimic++, an intent-level imitation framework that distills object-centric task structure from human videos and grounds it in robot execution.
Example tasks in TidyVerse include:
- Shoe arrangement and alignment
- Cloth flattening and folding (TBD)
- Room-scale cleanup with navigation and manipulation (TBD)
- Dragonbot v0.2 hardware platform — holonomic base + YAM 6-DOF arm + Aero Hand Open (16-DOF dexterous hand)
- Simulation models — MuJoCo URDF/XML scenes for Dragonbot, YAM + hand, and Stanford TidyBot in `simulation/`
- ROS2 full-stack controls — SpaceMouse base teleoperation, Quest 3 wrist-pose IK arm teleoperation, and Quest 3 hand retargeting, all in `ros2/`
- Data pipeline — rosbag/MCAP → LeRobot dataset conversion; sample dataset: `littledragon/evan_house_split_1g_lerobot_v3`
- GR00T-N1.6 integration — Dragonbot embodiment config, ROS2 policy bridge, and deployment guide in `third_party/Isaac-GR00T/examples/tidyverse-hand/`
- pi-0.5 fine-tuning + serving (LIBERO) — W&B logs
- GR00T-N1.6 fine-tuning on Dragonbot data — see `third_party/Isaac-GR00T/dragonbot_finetune.md` for instructions (coming soon)
- pi-0.5 SFT with adaptive hand grasping
- Co-training with human egocentric data (VITRA) + EgoDex
- End-to-end spatial grounding + sequential task execution
- Logs: W&B run group
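The rosbag/MCAP → LeRobot conversion step above amounts to time-aligning per-topic message streams onto a common fixed-rate clock of frames. A minimal sketch with synthetic messages — the topic layout, zero-order-hold alignment, and the frame keys are assumptions for illustration, not the repo's actual converter:

```python
import bisect

# Synthetic per-topic streams as (timestamp_s, payload) pairs, standing in
# for messages read from an MCAP file. Topics and payloads are illustrative.
joint_states = [(0.00, [0.0] * 6), (0.05, [0.1] * 6), (0.10, [0.2] * 6)]
hand_cmds    = [(0.02, [0.0] * 16), (0.07, [0.5] * 16)]

def latest_before(stream, t):
    """Latest message at or before time t (zero-order hold); clamps to the
    first message when t precedes the stream."""
    times = [ts for ts, _ in stream]
    i = bisect.bisect_right(times, t) - 1
    return stream[max(i, 0)][1]

def to_frames(fps=20, duration=0.1):
    """Resample all streams onto one fixed-rate clock, one dict per frame."""
    frames = []
    for k in range(int(duration * fps) + 1):
        t = k / fps
        frames.append({
            "timestamp": t,
            # 6-DOF arm state + 16-DOF hand command -> 22-dim state vector
            "observation.state": latest_before(joint_states, t)
                                 + latest_before(hand_cmds, t),
        })
    return frames

frames = to_frames()
print(len(frames), len(frames[0]["observation.state"]))  # 3 22
```

A real converter would additionally decode images, write the LeRobot dataset files, and record the fps and feature schema in the dataset metadata.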
For research use only.
