Aligned Latent Models (ALMs)

Code for the paper "Simplifying Model-based RL: Learning Representations, Latent-space Models and Policies with One Objective" by Raj Ghugare, Homanga Bharadhwaj, Benjamin Eysenbach, Sergey Levine, and Ruslan Salakhutdinov.

Installation

Install the MuJoCo mjpro150 binaries from the MuJoCo website. Extract the downloaded mjpro150 directory into ~/.mujoco/. Download the free activation key and place it in ~/.mujoco/. Add the following line to your ~/.bashrc and then source it:

export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$HOME/.mujoco/mjpro150/bin

Note that newer versions of MuJoCo (> 2.0) may produce inaccurate or zero contact forces in the Humanoid-v2 and Ant-v2 environments; see issues #2593, #1541, and #1636. If you encounter any errors, check the troubleshooting section of mujoco-py.

Create a virtual environment named env_alm:

python3 -m venv env_alm
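
Activate the environment before installing any packages:

source env_alm/bin/activate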

Install all the packages required to run the code using the requirements.txt file:

pip install -r requirements.txt

The code was tested on Ubuntu 22.04 with Python 3.10.4.
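
As a quick sanity check (assuming the gym and mujoco-py versions pinned in requirements.txt), you can verify that a MuJoCo environment builds before training:

python -c "import mujoco_py, gym; print(gym.make('Humanoid-v2').reset().shape)"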

Training

To train an ALM agent on the Humanoid-v2 environment:

python train.py id=Humanoid-v2

To log training and evaluation details using wandb:

python train.py id=Humanoid-v2 wandb_log=True

To perform the bias evaluation experiments from our paper:

python train.py id=Humanoid-v2 eval_bias=True
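
The key=value flags above appear to be Hydra-style config overrides and can likely be combined. For example, a hypothetical multi-seed sweep might look like the following; this assumes the config exposes a seed field, which may differ in this repo:

for seed in 0 1 2; do
    # seed= is an assumed override; check the repo's config for the actual key
    python train.py id=Humanoid-v2 seed=$seed wandb_log=True
done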

Acknowledgment

Our codebase has been built using, and on top of, the following codebases. We thank the respective authors for their contributions.
