[SIGGRAPH Asia 2023] Discovering Fatigued Movements for Virtual Character Animation

Noshaba Cheema | Rui Xu | Nam Hee Kim | Perttu Hämäläinen | Vladislav Golyanik | Marc Habermann | Christian Theobalt | Philipp Slusallek
SIGGRAPH Asia 2023 Conference Paper

Website | Technical Paper | Video


Abstract
Virtual character animation and movement synthesis have advanced rapidly in recent years, especially through a combination of extensive motion capture datasets and machine learning. A remaining challenge is interactively simulating characters that fatigue when performing extended motions, which is indispensable for the realism of generated animations. However, capturing such movements is problematic, as performing movements like backflips with fatigued variations up to exhaustion raises capture cost and risk of injury. Surprisingly, little research has been done on faithful fatigue modeling. To address this, we propose a deep reinforcement learning-based approach, which -- for the first time in the literature -- generates control policies for full-body physically simulated agents aware of cumulative fatigue. For this, we first leverage Generative Adversarial Imitation Learning (GAIL) to learn an expert policy for the skill; second, we learn a fatigue policy by limiting the generated constant torque bounds based on endurance time to non-linear, state- and time-dependent limits in the joint-actuation space using a Three-Compartment Controller (3CC) model. Our results demonstrate that agents can adapt to different fatigue and rest rates interactively, and discover realistic recovery strategies without the need for any captured data of fatigued movement.
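
For context, the 3CC model tracks, per joint, the fractions of motor units that are active (MA), fatigued (MF), and rested (MR), driven by the fatigue rate F, the recovery rate R, and a rest-recovery multiplier r. Below is a minimal, illustrative Euler step under common 3CC-r conventions; the gains, the rest condition, and all names are assumptions, not this repository's exact implementation:

def cc3_step(ma, mf, mr, tl, F, R, r, LD=10.0, LR=10.0, dt=1.0 / 60.0):
    """One Euler step of a 3CC-r-style fatigue model (illustrative sketch).
    ma, mf, mr: active/fatigued/rested motor-unit fractions (sum to 1).
    tl: target load in [0, 1]; F, R: fatigue and recovery rates;
    r: multiplier that boosts recovery while the joint rests."""
    # Controller C(t): move rested units into the active pool toward tl.
    if ma >= tl:
        c = LR * (tl - ma)        # release surplus active units
    elif mr > tl - ma:
        c = LD * (tl - ma)        # enough rested units to meet demand
    else:
        c = LD * mr               # recruit whatever rest remains
    rr = r * R if tl < ma else R  # assumed rest condition for the r boost
    d_ma = c - F * ma             # activation minus fatigue outflow
    d_mf = F * ma - rr * mf       # fatigue inflow minus recovery
    d_mr = rr * mf - c            # recovery inflow minus recruitment
    return ma + dt * d_ma, mf + dt * d_mf, mr + dt * d_mr

Intuitively, a larger F makes MF accumulate faster, while larger R and r speed up recovery; this is how the (F, R, r) command-line parameters below shape the agent's behavior.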


BibTeX

If you find this code useful in your research, please cite:

@inproceedings{cheema2023fatigue,
  title = {Discovering Fatigued Movements for Virtual Character Animation},
  author = {Cheema, Noshaba and Xu, Rui and Kim, Nam Hee and H{\"a}m{\"a}l{\"a}inen, Perttu and Golyanik, Vladislav and Habermann, Marc and Theobalt, Christian and Slusallek, Philipp},
  booktitle = {SIGGRAPH Asia 2023 Conference Papers},
  year = {2023}
}

About this repository

This repository contains the code for our SIGGRAPH Asia 2023 paper "Discovering Fatigued Movements for Virtual Character Animation".

Installation

Download the Isaac Gym Preview 4 release from the website, then follow the installation instructions in the documentation. We highly recommend using a conda environment to simplify setup.
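
For example, the Isaac Gym Preview releases ship a conda helper script; on a typical layout the setup looks like this (script and environment names may differ by release):

cd isaacgym
./create_conda_env_rlgpu.sh
conda activate rlgpu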

Ensure that Isaac Gym works on your system by running one of the examples from the python/examples directory, such as joint_monkey.py. Follow the troubleshooting steps described in the Isaac Gym Preview 4 install instructions if you have any trouble running the samples.

Once Isaac Gym is installed and samples work within your current python environment, install this repo:

pip install -e .

Running pre-trained models

Run Fatigued Tornado Kick

To run the pre-trained fatigued tornado kick with (F, R, r) = (2.0, 0.05, 1.0), use the following command:

python train.py seed=477 task=HumanoidAMPHandsFatigue train=HumanoidAMPPPOLowGPFatigue headless=False num_envs=1 test=True checkpoint=runs/wushu_fatigue/nn/wushu_fatigue_6000.pth task.env.motion_file=amp_humanoid_wushu_kick.npy task.env.useFatigue=True task.env.fatigueF=2.0 task.env.fatigueR=0.05 task.env.fatigue_r=1.0 task.env.visualizeFatigue=True task.env.showSkyWall=True

Run Non-Fatigued Tornado Kick Expert

To run the pre-trained tornado kick expert that has not yet been fine-tuned on fatigue, use the following command:

python train.py seed=1337 task=HumanoidAMPHandsFatigue train=HumanoidAMPPPOLowGPFatigue headless=False checkpoint=runs/wushu_expert/nn/wushu_expert_4000.pth task.env.motion_file=amp_humanoid_wushu_kick.npy task.env.showSkyWall=True test=True num_envs=1

Run Goal-Conditioned Fatigued Locomotion

To run the pre-trained goal-conditioned fatigued locomotion with (F, R, r) = (1.0, 0.01, 1.0), use the command:

python train.py seed=537 task=HumanoidAMPHandsFatigue train=HumanoidAMPPPOLowGPFatigue headless=False num_envs=64 test=True checkpoint=runs/run_walk_goal/nn/run_walk_goal_4000.pth task.env.motion_file=amp_humanoid_walk_run.yaml task.env.useFatigue=True task.env.fatigueF=1.0 task.env.fatigueR=0.01 task.env.fatigue_r=1.0 train.params.config.task_reward_w=1.0 train.params.config.disc_grad_penalty=5.0 task.env.TargetObs=True task.env.visualizeFatigue=True task.env.showSkyWall=True

Note: Since this is a goal-conditioned task where the character runs to a target, task_reward_w should be greater than 0 (we used 1.0), and TargetObs needs to be set for the target to be included in the observation space. For locomotion tasks we additionally set disc_grad_penalty=5.0, as in the original Isaac Gym Envs implementation for locomotion and hopping tasks.
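
For intuition, task_reward_w blends the goal reward with the AMP-style discriminator ("style") reward. A rough sketch of the mixing with assumed names (the actual weighting lives in the rl_games training config):

import torch

def combined_reward(task_rew, disc_logits, task_reward_w=1.0, disc_reward_w=1.0):
    # Style reward from the AMP discriminator: large when the simulated
    # motion looks like the reference data, via -log(1 - D(s, s')).
    prob = torch.sigmoid(disc_logits)
    style_rew = -torch.log(torch.clamp(1.0 - prob, min=1e-4))
    # task_reward_w=0 would ignore the goal entirely; this task uses 1.0.
    return task_reward_w * task_rew + disc_reward_w * style_rew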

Run Goal-Conditioned Fatigued Ant

To run the pre-trained goal-conditioned fatigued ant with (F, R, r) = (0.4, 0.1, 1.0), use the command:

python train.py task=AntFatigue task.env.fatigueF=0.4 task.env.fatigueR=0.1 task.env.fatigue_r=1.0 test=True num_envs=1 headless=False task.env.followCam=True checkpoint=runs/ant_fatigue_0.4_0.1_1.0/nn/ant_fatigue_0.4_0.1_1.0.pth task.env.showSkyWall=True

Training / Fine-tuning models

Train Tornado Kick Expert

To train an expert model, which can later be fine-tuned with fatigue, run the following command:

python train.py seed=476 task=HumanoidAMPHandsFatigue train=HumanoidAMPPPOLowGPFatigue headless=True task.env.motion_file=amp_humanoid_wushu_kick.npy max_iterations=4000 task.env.randomizeFatigueProb=1 task.env.randomizeEveryStep=True experiment=wushu_expert

Note: For training or fine-tuning we set task.env.randomizeFatigueProb=1 so that fatigue is initialized randomly at every environment reset, maximizing diversity during training. Additionally, for expert training specifically, task.env.randomizeEveryStep=True needs to be set so that the fatigued motor units MF in the observation space are random, as no "fatigue" exists yet. This is similar to domain randomization; see the sketch below.
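
Conceptually, the randomization amounts to something like the following sketch (hypothetical names; the actual logic lives in the task code):

import torch

def reset_fatigue_obs(num_envs, num_joints, randomize_prob=1.0):
    # With probability randomize_prob per environment, initialize the
    # fatigued motor-unit fractions MF uniformly at random instead of at zero.
    mf = torch.zeros(num_envs, num_joints)
    mask = torch.rand(num_envs) < randomize_prob
    mf[mask] = torch.rand(int(mask.sum()), num_joints)
    return mf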

Fine-tune Tornado Kick Expert with Fatigue

To fine-tune the tornado kick expert with fatigue, run the following command:

python train.py seed=477 task=HumanoidAMPHandsFatigue train=HumanoidAMPPPOLowGPFatigue headless=True experiment=finetune_wushu_fatigue checkpoint=runs/wushu_expert/nn/wushu_expert_4000.pth task.env.motion_file=amp_humanoid_wushu_kick.npy max_iterations=6000 task.env.useFatigue=True task.env.randomizeFatigueProb=1

Note: For training or fine-tuning we set task.env.randomizeFatigueProb=1 so that fatigue is initialized randomly at every environment reset, maximizing diversity during training. Good (F, R, r)-values for inference with this motion are (2, 0.05, 1).
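
During fine-tuning, the fatigue state effectively tightens the joint-actuation limits over time. One plausible reading as a sketch (hypothetical names and scaling, not the paper's exact formulation):

import torch

def effective_torque(action, tau_max, mf):
    # As motor units fatigue (mf -> 1), the usable torque bound shrinks
    # from the constant expert limit tau_max toward zero.
    tau_limit = tau_max * (1.0 - mf)
    return torch.clamp(action * tau_max, -tau_limit, tau_limit)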

Fine-tune Goal-Conditioned Locomotion with Fatigue

To fine-tune an expert model trained on the run motion with fatigue, we use the following command:

python train.py seed=537 task=HumanoidAMPHandsFatigue train=HumanoidAMPPPOLowGPFatigue headless=True experiment=finetune_locomotion_fatigue checkpoint=runs/run_goal_expert/nn/run_goal_expert_2000.pth task.env.motion_file=amp_humanoid_walk_run.yaml max_iterations=4000 task.env.useFatigue=True task.env.randomizeFatigueProb=1 train.params.config.task_reward_w=1.0 train.params.config.disc_grad_penalty=5.0 task.env.TargetObs=True

Note: Since this is a goal-conditioned task where the character runs to a target, task_reward_w should be greater than 0 (we used 1.0), and TargetObs needs to be set for the target to be included in the observation space. For locomotion tasks we additionally set disc_grad_penalty=5.0, as in the original Isaac Gym Envs implementation for locomotion and hopping tasks. Furthermore, for training or fine-tuning we set task.env.randomizeFatigueProb=1 so that fatigue is initialized randomly at every environment reset, maximizing diversity during training. Also note that for fine-tuning we use the yaml file, which includes different locomotion modes. It is interesting to see that as the agent fatigues, it automatically switches to the walking mode within a single policy.

Train Goal-Conditioned Fatigued Ant

To train the goal-conditioned fatigued ant, use the command:

python train.py task=AntFatigue task.env.fatigueF=0.4 task.env.fatigueR=0.1 task.env.fatigue_r=1.0 task.env.randomizeFatigueProb=1 headless=True experiment=fatigued_ant

Note: For training or fine-tuning we set task.env.randomizeFatigueProb=1 so that fatigue is initialized randomly at every environment reset, maximizing diversity during training.

Visualizing the results in Unity

Stream from Isaac to Unity

To stream from Isaac to Unity and interactively change the (F, R, r)-values during inference, set task.env.unity_stream=True in the command that you run from Isaac. In the unity_vis project, open the "network_vis_isaac_streaming" scene and press Play to visualize the Isaac output in Unity in real time. You can use the sliders to change the (F, R, r)-parameters.
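
Under the hood this is a per-frame network stream of poses; a minimal sketch of the sending side, with an entirely hypothetical message layout, port, and transport:

import json, socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

def stream_pose(positions, rotations, frr, addr=("127.0.0.1", 11000)):
    # Serialize one frame of the character pose plus the current
    # (F, R, r) values and push it to the Unity visualizer.
    msg = {"pos": positions.tolist(), "rot": rotations.tolist(), "FRr": list(frr)}
    sock.sendto(json.dumps(msg).encode("utf-8"), addr)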

Record a JSON-file and play it in Unity

To record a JSON file with the character's trajectory, set task.env.record_poses=True. This generates a JSON file, which you can put into the "StreamingAssets" folder of the unity_vis project. Then, in the "isaac_load_and_play" scene, enter the JSON file name under Clipfilename on the "AnimationPlayer" object, click "Replace Animation", and press Play.
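
The recorded clip is essentially a list of per-frame poses serialized to disk; schematically (field names are assumptions, not the player's actual schema):

import json

frames = []

def record_frame(positions, rotations):
    frames.append({"pos": positions.tolist(), "rot": rotations.tolist()})

def save_clip(path):
    # Drop the resulting file into the "StreamingAssets" folder of unity_vis.
    with open(path, "w") as f:
        json.dump({"frames": frames}, f)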


Acknowledgments

The Unity visualization code is based on the repositories unity_socket_vis and anim_utils by Erik Herrmann.
