This repository also contains older code for:
- Vanilla PPO
- Encoder, MLP Transition + Reward Models
- Encoder, Conv Transition + Reward Models
- Encoder, Conv Transition + Factored Reward Models
- Graph Neural Networks via this paper and this code
This is a PyTorch implementation of the methods proposed in *Automatic Data Augmentation for Generalization in Deep Reinforcement Learning* by Roberta Raileanu, Max Goldstein, Denis Yarats, Ilya Kostrikov, and Rob Fergus.
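The core idea of DrAC is to regularize both the policy and the value function to be invariant under data augmentation, while the policy gradient itself is still computed on unaugmented observations. Below is a minimal PyTorch sketch of the two regularization terms described in the paper; `policy`, `value`, `aug`, and `alpha_r` are illustrative placeholders, not this repo's actual API.

```python
import torch
import torch.nn.functional as F

def drac_regularizers(policy, value, obs, aug, alpha_r=0.1):
    """Sketch of DrAC's two regularization terms (not this repo's exact code).

    Assumes policy(obs) returns action logits, value(obs) returns state
    values, and aug(obs) returns an augmented copy of the observation batch.
    """
    aug_obs = aug(obs)

    # Targets are the outputs on the original observations, held fixed.
    with torch.no_grad():
        target_logits = policy(obs)
        target_value = value(obs)

    # G_pi: keep the policy on augmented observations close to the policy
    # on the original observations.
    g_pi = F.kl_div(
        F.log_softmax(policy(aug_obs), dim=-1),
        F.softmax(target_logits, dim=-1),
        reduction="batchmean",
    )

    # G_V: keep the value of augmented observations close to the original value.
    g_v = F.mse_loss(value(aug_obs), target_value)

    # Added to the usual PPO loss as alpha_r * (G_pi + G_V).
    return alpha_r * (g_pi + g_v)
```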
The code was run on a GPU with CUDA 10.2. To install all the required dependencies:

```
conda create -n auto-drac python=3.7
conda activate auto-drac

git clone git@github.com:rraileanu/auto-drac.git
cd auto-drac
pip install -r requirements.txt

git clone https://github.com/openai/baselines.git
cd baselines
python setup.py install

pip install procgen
cd ..
```
Train DrAC with the crop augmentation on bigfish:

```
python train.py --env_name bigfish --aug_type crop
```

Train UCB-DrAC, which selects the augmentation with an upper confidence bound bandit (sketched below):

```
python train.py --env_name bigfish --use_ucb
```

Train RL2-DrAC, which meta-learns the augmentation choice:

```
python train.py --env_name bigfish --use_rl2
```

Train Meta-DrAC, which meta-learns the augmentation itself:

```
python train.py --env_name bigfish --use_meta
```
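UCB-DrAC treats the choice of augmentation as a multi-armed bandit and picks the augmentation with the highest upper confidence bound on recent returns. Here is a minimal sketch of that selection rule; the class, its attribute names, and the exploration coefficient `c` are illustrative assumptions, not this repo's implementation (which, per the paper, tracks a sliding window of recent returns).

```python
import math

class UCBAugmentationSelector:
    """Illustrative UCB bandit over a set of augmentations (not the repo's exact code)."""

    def __init__(self, augmentations, c=0.1):
        self.augmentations = augmentations
        self.c = c                              # exploration coefficient
        self.q = [0.0] * len(augmentations)     # running mean return per augmentation
        self.counts = [0] * len(augmentations)  # times each augmentation was chosen
        self.t = 0                              # total number of selections

    def select(self):
        """Return the index of the augmentation to use for the next update."""
        self.t += 1
        # Try each arm once before applying the UCB formula.
        for i, n in enumerate(self.counts):
            if n == 0:
                return i
        ucb = [
            self.q[i] + self.c * math.sqrt(math.log(self.t) / self.counts[i])
            for i in range(len(self.augmentations))
        ]
        return max(range(len(ucb)), key=ucb.__getitem__)

    def update(self, i, mean_return):
        """Fold the mean return observed with augmentation i into its estimate."""
        self.counts[i] += 1
        self.q[i] += (mean_return - self.q[i]) / self.counts[i]
```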
UCB-DrAC achieves state-of-the-art performance on the Procgen benchmark (easy mode), significantly improving the agent's generalization ability over standard RL methods such as PPO.
Figures: test and train results on Procgen.
This code is based on an open-source PyTorch implementation of PPO.
We also used kornia for some of the augmentations.
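For example, a crop augmentation can be expressed with kornia's augmentation module. This is a minimal sketch; the exact augmentations, padding, and image sizes used in this repo may differ.

```python
import torch
import kornia.augmentation as K

# Pad the frame, then randomly crop back to 64x64, a common "crop"
# augmentation for Procgen observations.
crop = K.RandomCrop(size=(64, 64), padding=4)

obs = torch.rand(8, 3, 64, 64)  # batch of observations in [0, 1]
aug_obs = crop(obs)             # same shape, randomly shifted content
```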