- Release training code
- Release sim2sim code
- Release sim2real code
Create a mamba/conda environment. Below we use conda as an example, but mamba works as well:
```bash
conda create -n fcgym python=3.8
conda activate fcgym
```
Download IsaacGym and extract:
```bash
wget https://developer.nvidia.com/isaac-gym-preview-4
tar -xvzf isaac-gym-preview-4
```
Install the IsaacGym Python API:
```bash
pip install -e isaacgym/python
```
Test the installation:
```bash
cd isaacgym/python/examples
python 1080_balls_of_solitude.py  # or
python joint_monkey.py
```
If you hit a `libpython` error:
- Check the conda environment path:

  ```bash
  conda info -e
  ```

- Set `LD_LIBRARY_PATH` (or derive it automatically, as sketched below):

  ```bash
  export LD_LIBRARY_PATH=</path/to/conda/envs/your_env/lib>:$LD_LIBRARY_PATH
  ```
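Equivalently, when the environment is active, conda exposes its root directory as `$CONDA_PREFIX`, so the library path can be derived rather than typed out. A minimal convenience sketch:

```bash
# With the env active (conda activate fcgym), $CONDA_PREFIX points at the
# environment root, so its lib/ directory can be prepended directly.
export LD_LIBRARY_PATH=$CONDA_PREFIX/lib:$LD_LIBRARY_PATH
```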
Clone and install FALCON:
```bash
git clone https://github.com/LeCAR-Lab/FALCON.git
cd FALCON
pip install -e .
pip install -e isaac_utils
```
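As a quick sanity check that the installs resolved, the stack can be imported in one line. Note the IsaacGym quirk: `isaacgym` must be imported before `torch`, otherwise IsaacGym raises an ImportError.

```bash
# isaacgym must come before torch in the import order.
python -c "import isaacgym; import torch; print(torch.cuda.is_available())"
```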
Please refer to PHC.
### Training Command (Unitree G1)
```bash
python humanoidverse/train_agent.py \
+exp=decoupled_locomotion_stand_height_waist_wbc_diff_force_ma_ppo_ma_env \
+simulator=isaacgym \
+domain_rand=domain_rand_rl_gym \
+rewards=dec_loco/reward_dec_loco_stand_height_ma_diff_force \
+robot=g1/g1_29dof_waist_fakehand \
+terrain=terrain_locomotion_plane \
+obs=dec_loco/g1_29dof_obs_diff_force_history_wolinvel_ma \
num_envs=4096 \
project_name=g1_29dof_falcon \
experiment_name=g1_29dof_falcon \
+opt=wandb \
obs.add_noise=True \
env.config.fix_upper_body_prob=0.3 \
robot.dof_effort_limit_scale=0.9 \
rewards.reward_initial_penalty_scale=0.1 \
rewards.reward_penalty_degree=0.0001
```
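A note on the flag syntax: entries prefixed with `+` (e.g. `+exp=...`) add a config group to the Hydra composition, while bare `key=value` pairs (e.g. `num_envs=4096`) override fields that already exist in the composed config. As a hedged illustration only (the reduced environment count and the `_debug` names are our assumptions, not a tested recipe), a lighter trial run on a smaller GPU could look like:

```bash
# Hypothetical low-resource trial run: same config groups as above, but
# fewer parallel environments and no wandb logging. Whether 1024 envs
# still trains well is an assumption, not something verified here.
python humanoidverse/train_agent.py \
+exp=decoupled_locomotion_stand_height_waist_wbc_diff_force_ma_ppo_ma_env \
+simulator=isaacgym \
+domain_rand=domain_rand_rl_gym \
+rewards=dec_loco/reward_dec_loco_stand_height_ma_diff_force \
+robot=g1/g1_29dof_waist_fakehand \
+terrain=terrain_locomotion_plane \
+obs=dec_loco/g1_29dof_obs_diff_force_history_wolinvel_ma \
num_envs=1024 \
project_name=g1_29dof_falcon_debug \
experiment_name=g1_29dof_falcon_debug
```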
### Evaluation Command
```bash
python humanoidverse/eval_agent.py \
+checkpoint=<path_to_your_ckpt>
```
Result after around 6k iterations in IsaacGym:
### Training Command (Booster T1)
```bash
python humanoidverse/train_agent.py \
+exp=decoupled_locomotion_stand_height_waist_wbc_diff_force_ma_ppo_ma_env \
+simulator=isaacgym \
+domain_rand=domain_rand_rl_gym \
+rewards=dec_loco/reward_dec_loco_stand_height_ma_diff_force \
+robot=t1/t1_29dof_waist_wrist \
+terrain=terrain_locomotion_plane \
+obs=dec_loco/t1_29dof_obs_diff_force_history_wolinvel_ma \
num_envs=4096 \
project_name=t1_29dof_falcon \
experiment_name=t1_29dof_falcon \
+opt=wandb \
obs.add_noise=True \
env.config.fix_upper_body_prob=0.3 \
robot.dof_effort_limit_scale=0.9 \
rewards.reward_initial_penalty_scale=0.1 \
rewards.reward_penalty_degree=0.0001 \
rewards.feet_height_target=0.08 \
rewards.feet_height_stand=0.02 \
rewards.desired_feet_max_height_for_this_air=0.08 \
rewards.desired_base_height=0.62 \
rewards.reward_scales.penalty_lower_body_action_rate=-0.5 \
rewards.reward_scales.penalty_upper_body_action_rate=-0.5 \
env.config.apply_force_pos_ratio_range=[0.5,2.0]
```
### Evaluation Command
```bash
python humanoidverse/eval_agent.py \
+checkpoint=<path_to_your_ckpt>
```
Result after around 6k iterations in IsaacGym:
We provide seamless sim2sim and sim2real deployment scripts supporting both unitree_sdk2_python and booster_robotics_sdk. Please refer to this README for details.
FALCON can easily be extended to a larger workspace by setting a larger torso command range and base height command range. As an example, we provide the sim2sim result of the Unitree G1 with a larger command range:
falcon-ext.mp4
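Concretely, widening the workspace amounts to enlarging the command-sampling ranges in the environment config and retraining. The override mechanism is the same Hydra `key=value` syntax used in the training commands above, but the two range field names below are hypothetical placeholders (look up the real keys in the env config before running):

```bash
# HYPOTHETICAL field names: torso_command_range and base_height_command_range
# only illustrate the idea; check the actual env config in the repo for the
# real keys and their default values before running.
python humanoidverse/train_agent.py \
+exp=decoupled_locomotion_stand_height_waist_wbc_diff_force_ma_ppo_ma_env \
+simulator=isaacgym \
+domain_rand=domain_rand_rl_gym \
+rewards=dec_loco/reward_dec_loco_stand_height_ma_diff_force \
+robot=g1/g1_29dof_waist_fakehand \
+terrain=terrain_locomotion_plane \
+obs=dec_loco/g1_29dof_obs_diff_force_history_wolinvel_ma \
num_envs=4096 \
env.config.torso_command_range=[-0.8,0.8] \
env.config.base_height_command_range=[0.3,0.75]
```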
If you find our work useful, please consider citing us!
```bibtex
@article{zhang2025falcon,
  title={FALCON: Learning Force-Adaptive Humanoid Loco-Manipulation},
  author={Zhang, Yuanhang and Yuan, Yifu and Gurunath, Prajwal and He, Tairan and Omidshafiei, Shayegan and Agha-mohammadi, Ali-akbar and Vazquez-Chanlatte, Marcell and Pedersen, Liam and Shi, Guanya},
  journal={arXiv preprint arXiv:2505.06776},
  year={2025}
}
```
Other work that also uses FALCON's dual-agent framework:
```bibtex
@article{li2025softa,
  title={Hold My Beer: Learning Gentle Humanoid Locomotion and End-Effector Stabilization Control},
  author={Li, Yitang and Zhang, Yuanhang and Xiao, Wenli and Pan, Chaoyi and Weng, Haoyang and He, Guanqi and He, Tairan and Shi, Guanya},
  journal={arXiv preprint arXiv:2505.24198},
  year={2025}
}
```
FALCON is built upon ASAP and HumanoidVerse.
This project is licensed under the MIT License - see the LICENSE file for details.