
Jax (Flax) RL

This repository contains Jax (Flax) implementations of Reinforcement Learning algorithms for continuous action spaces, including SAC, AWAC, and DrQ.

The goal of this repository is to provide simple and clean implementations to build research on top of. Please do not use this repository for baseline results; use the original implementations (SAC, AWAC, DrQ) instead.
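
To give a flavor of what such implementations build on, here is a minimal, illustrative Flax policy network for continuous actions. It is only a sketch under assumed shapes and names; it does not reproduce the repository's actual modules.

import jax
import jax.numpy as jnp
from flax import linen as nn

class Policy(nn.Module):
    """Two-layer MLP producing tanh-squashed continuous actions (illustrative only)."""
    action_dim: int

    @nn.compact
    def __call__(self, obs):
        x = nn.relu(nn.Dense(256)(obs))
        x = nn.relu(nn.Dense(256)(x))
        return nn.tanh(nn.Dense(self.action_dim)(x))

# Initialize parameters for a hypothetical 17-dimensional observation and 6-dimensional action space.
params = Policy(action_dim=6).init(jax.random.PRNGKey(0), jnp.zeros((1, 17)))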

Changelog

July 20th, 2021

July 19th, 2021

May 19th, 2021

April 29th, 2021

Installation

conda install patchelf
pip install --upgrade git+https://github.com/ikostrikov/jaxrl
# For GPU support run
pip install --upgrade jaxlib==0.1.67+cuda111 -f https://storage.googleapis.com/jax-releases/jax_releases.html

If you want to run this code on GPU, please follow the instructions in the official JAX repository.
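
To confirm that JAX actually sees the GPU after installation, a quick check (plain JAX, nothing repository-specific):

python -c "import jax; print(jax.devices())"

This should list GPU devices rather than only the CPU.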

Please follow the instructions to build mujoco-py with fast headless GPU rendering.
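
A quick way to verify that headless rendering works is to render a frame offscreen; the snippet below is an illustrative check using the classic gym API (HalfCheetah-v2 is also used in the Docker example later in this README):

import gym

env = gym.make('HalfCheetah-v2')
env.reset()
frame = env.render(mode='rgb_array')  # should return an RGB array instead of raising an OpenGL/GLEW error
print(frame.shape)
env.close()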

Development

If you want to modify the code, install it in editable mode following the instructions below.

conda install patchelf
pip install --upgrade -e .
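
Note that the editable install must be run from inside a local clone of the repository, for example (using the upstream URL from the installation section above):

git clone https://github.com/ikostrikov/jaxrl
cd jaxrl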

Troubleshooting

If you experience out-of-memory errors, especially with video saving enabled, please consider reading the JAX docs on GPU memory allocation. You can also try running with the following environment variable:

XLA_PYTHON_CLIENT_MEM_FRACTION=0.80 python ...
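
Equivalently, these options can be set from Python before JAX is imported; the sketch below uses the XLA_PYTHON_CLIENT_* variables described in the JAX GPU memory allocation docs:

import os

# Reduce or disable JAX's up-front GPU memory preallocation.
os.environ.setdefault('XLA_PYTHON_CLIENT_MEM_FRACTION', '0.80')
os.environ.setdefault('XLA_PYTHON_CLIENT_PREALLOCATE', 'false')

import jax  # import only after the environment variables are set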

If you run your code on a remote machine and want to save videos for DeepMind Control Suite, please use EGL for rendering:

MUJOCO_GL=egl python train.py --env_name=cheetah-run --save_dir=./tmp/ --save_video

Tensorboard

Launch TensorBoard to see the training and evaluation logs:

tensorboard --logdir=./tmp/
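
The log directory matches the --save_dir passed to train.py in the examples above. If you want to write additional scalars alongside the training logs, a generic SummaryWriter works; the snippet below uses tensorboardX as an assumption, not necessarily the logger this repository uses:

from tensorboardX import SummaryWriter

writer = SummaryWriter('./tmp/')
writer.add_scalar('training/return', 123.4, global_step=1000)  # hypothetical tag and value
writer.close()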

Results

Continuous control from states

gym (result plots)

Continuous control from pixels

gym (result plots)

Docker

Build

Copy your MuJoCo key to ./vendor

cd remote
docker build -t ikostrikov/jaxrl . -f Dockerfile 

Test

docker run -v <examples-dir>:/jaxrl/ --gpus=all ikostrikov/jaxrl:latest python /jaxrl/train.py --env_name=HalfCheetah-v2 --save_dir=/jaxrl/tmp/

Contributing

When contributing to this repository, please first discuss the change you wish to make via an issue. If you are not familiar with pull requests, please read this documentation.
