Multi-Modal Whole-Body Control is a research-oriented framework for learning whole-body control policies for humanoid and general articulated robots from heterogeneous motion signals.
Built on NVIDIA Isaac Sim / Isaac Lab and RSL-RL, the framework targets large-scale parallel simulation and multi-modal imitation learning, with a focus on robust motion tracking and cross-modal embodiment.
The core philosophy of this repository is to treat whole-body control as a multi-modal sequence alignment problem:
a robot policy learns to coordinate its full-body dynamics by jointly conditioning on robot-centric states and external motion descriptors such as human body pose (SMPL-X) and SE(3) keypoints.
The framework currently focuses on the Unitree G1, but is designed to be extensible to other humanoid and whole-body platforms.
- **Whole-body control at scale.** Designed for thousands of parallel environments in Isaac Lab, enabling fast and stable on-policy learning.
- **Multi-modal motion supervision.** Policies can be conditioned on:
  - robot joint trajectories,
  - human SMPL-X full-body motion,
  - 3D / SE(3) keypoint trajectories.
- **Unified tracking and imitation.** Motion tracking, multi-motion training, and GAEMimic-style imitation are expressed within a single task framework.
A demo video is available at docs/illustrations/demo.mp4.
## Repository structure

```
.
├── source/whole_body_control
│   ├── robots/          # Robot models and actuators (e.g., Unitree G1)
│   ├── tasks/           # Tracking / Multi-Tracking / GAEMimic tasks
│   └── utils/           # Motion datasets, dataloaders, runners
│
├── scripts
│   ├── rsl_rl/
│   │   ├── train.py     # Training entry point
│   │   └── play.py      # Policy rollout (if available)
│   ├── tools/           # Env listing, random / zero agents
│   └── data/            # Dataset preprocessing utilities
│
├── third_party/rsl_rl   # Vendorized RSL-RL
├── docs/env_setup.md    # Detailed environment setup
└── README.md
```
## Tasks and environments
All environments are implemented using Isaac Lab task abstractions.
**TrackingEnvCfg**
- Single reference motion tracking
- Joint position control
- Dense rewards on pose, velocity, orientation, and contacts (see the sketch below)
- Privileged observations for the critic
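As a rough illustration of how such dense tracking terms are often combined, here is a minimal sketch using exponential kernels over pose, velocity, and orientation errors; the function name, weights, and scales are illustrative assumptions, not the repository's actual reward.

```python
import torch

def tracking_reward(
    joint_pos: torch.Tensor, ref_joint_pos: torch.Tensor,  # (N, J)
    joint_vel: torch.Tensor, ref_joint_vel: torch.Tensor,  # (N, J)
    root_quat: torch.Tensor, ref_root_quat: torch.Tensor,  # (N, 4)
) -> torch.Tensor:
    """Dense tracking reward built from exponential kernels (illustrative weights)."""
    pose_err = torch.sum((joint_pos - ref_joint_pos) ** 2, dim=-1)
    vel_err = torch.sum((joint_vel - ref_joint_vel) ** 2, dim=-1)
    # quaternion geodesic distance via the absolute inner product (1 when aligned)
    quat_dot = torch.abs(torch.sum(root_quat * ref_root_quat, dim=-1)).clamp(max=1.0)
    orient_err = (2.0 * torch.acos(quat_dot)) ** 2
    return (
        0.5 * torch.exp(-2.0 * pose_err)
        + 0.1 * torch.exp(-0.1 * vel_err)
        + 0.3 * torch.exp(-4.0 * orient_err)
    )
```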
**MultiTracking_TrackingEnvCfg**
- Samples from multiple motion clips
- Dataset-driven command interface
- Performance-based curriculum over motion difficulty (see the sketch below)
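A performance-based curriculum can be realized by re-weighting clip sampling toward motions the current policy tracks poorly. The sketch below is a hedged illustration of that idea; the class and attribute names are hypothetical, not the repository's implementation.

```python
import numpy as np

class DifficultyCurriculum:
    """Samples motion clips in proportion to their current difficulty (hypothetical)."""

    def __init__(self, num_clips: int, alpha: float = 0.1):
        self.success = np.zeros(num_clips)  # running per-clip success estimate in [0, 1]
        self.alpha = alpha                  # smoothing factor for the running estimate

    def update(self, clip_id: int, episode_success: float) -> None:
        # exponential moving average of tracking success for this clip
        self.success[clip_id] += self.alpha * (episode_success - self.success[clip_id])

    def sample(self, rng: np.random.Generator) -> int:
        # clips with low success (hard for the current policy) are sampled more often
        weights = 1.0 - self.success + 1e-3
        return int(rng.choice(len(weights), p=weights / weights.sum()))
```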
**GAEMimic_TrackingEnvCfg**
- Conditions the policy on:
  - robot joint trajectories,
  - SMPL-X human body pose,
  - SE(3) keypoint trajectories
- Designed for cross-modal imitation and embodiment transfer (see the sketch below)
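In its simplest form, conditioning on several modalities amounts to flattening each descriptor and concatenating it with the robot-centric state. The sketch below assumes that layout; the names and tensor shapes are illustrative, not the repository's actual observation pipeline.

```python
import torch

def build_policy_observation(
    robot_state: torch.Tensor,    # (N, D_robot) proprioceptive state
    joint_ref: torch.Tensor,      # (N, T, J) window of reference joint trajectories
    smplx_pose: torch.Tensor,     # (N, D_smplx) flattened SMPL-X body pose
    keypoints_se3: torch.Tensor,  # (N, K, 7) keypoint positions + unit quaternions
) -> torch.Tensor:
    """Concatenate robot-centric state with external motion descriptors."""
    n = robot_state.shape[0]
    return torch.cat(
        [
            robot_state,
            joint_ref.reshape(n, -1),
            smplx_pose,
            keypoints_se3.reshape(n, -1),
        ],
        dim=-1,
    )
```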
## Robot model
Defined in whole_body_control/robots/g1.py using:
- `ArticulationCfg`
- `ImplicitActuatorCfg`
Includes:
- Full-body joint limits and initial posture
- Per-joint-group actuator stiffness and damping
- Action scaling for stable RL control
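The configuration below is a minimal sketch of how such a model can be wired up with Isaac Lab's `ArticulationCfg` and `ImplicitActuatorCfg`; the USD path, joint-name patterns, gain values, and module paths (which follow recent `isaaclab` releases and may differ at the pinned commit) are all illustrative assumptions.

```python
import isaaclab.sim as sim_utils
from isaaclab.actuators import ImplicitActuatorCfg
from isaaclab.assets import ArticulationCfg

G1_CFG = ArticulationCfg(
    spawn=sim_utils.UsdFileCfg(usd_path="path/to/g1.usd"),  # hypothetical asset path
    init_state=ArticulationCfg.InitialStateCfg(
        pos=(0.0, 0.0, 0.74),              # standing root height (illustrative)
        joint_pos={".*_knee_joint": 0.3},  # example initial posture
    ),
    actuators={
        # per-joint-group PD gains; values are placeholders, not tuned constants
        "legs": ImplicitActuatorCfg(
            joint_names_expr=[".*_hip_.*", ".*_knee_joint", ".*_ankle_.*"],
            stiffness=100.0,
            damping=5.0,
        ),
        "arms": ImplicitActuatorCfg(
            joint_names_expr=[".*_shoulder_.*", ".*_elbow_.*"],
            stiffness=40.0,
            damping=2.0,
        ),
    },
)
```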
## Installation

```bash
# Install Isaac Lab
git clone https://github.com/isaac-sim/IsaacLab.git
cd IsaacLab
git checkout 90b79bb2d44feb8d833f260f2bf37da3487180ba
./isaaclab.sh -i
# (Optional) install RSL-RL
./isaaclab.sh -p -m pip install -e path/to/rsl_rl
# Install this repository
cd source/whole_body_control
pip install -e .
```
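A quick import check can confirm the editable install succeeded; this assumes the package is importable as `whole_body_control`, a name inferred from the directory.

```python
# sanity check for the editable install (the package name is an assumption)
import whole_body_control
print("installed at:", whole_body_control.__file__)
```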
## Dataset download

Preprocessed datasets (with SMPL-X and keypoints) are available at:

Unzip the archives under the `datasets/` directory.
## Quick start
List the available environments:

```bash
python scripts/tools/list_envs.py
```

Train a policy:

```bash
python scripts/rsl_rl/train.py \
    --headless \
    --task MultiTracking-Flat-G1-v0
```

Common options:

- `--num_envs`
- `--seed`
- `--max_iterations`
- `--video`, `--video_interval`
- `--distributed` (multi-GPU)
## Licenses

- Isaac Lab components: BSD-3-Clause
- RSL-RL: see `third_party/rsl_rl`
- Robot assets (e.g., Unitree G1): subject to their original licenses
