MetaGenRL is a novel meta reinforcement learning algorithm. Unlike prior work, it can generalize to new environments that are entirely different from those used for meta-training.
MetaGenRL: Improving Generalization in Meta Reinforcement Learning using Learned Objectives

This is the official research code for the paper "Improving Generalization in Meta Reinforcement Learning using Learned Objectives" (Kirsch et al., 2019).
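The core idea of the paper, meta-learning the objective function that is used to train agents, can be caricatured in a few lines of numpy. Everything below is a toy sketch invented for illustration: the names (`objective_phi`, `inner_update`) and the quadratic stand-in objective are not taken from this repository, where the learned objective is a parametrized neural network whose parameters are meta-trained.

```python
import numpy as np

# Toy sketch: the agent is trained by descending a *learned* objective
# L_phi rather than a fixed, hand-designed RL loss. All names and the
# functional form are hypothetical stand-ins for illustration only.

rng = np.random.default_rng(0)
phi = rng.normal(size=4)    # parameters of the learned objective
theta = rng.normal(size=4)  # parameters of the agent's policy

def objective_phi(theta, transitions):
    # Stand-in for the learned loss L_phi(experience, policy);
    # in the paper this is a neural network, not a quadratic.
    return float(np.sum(phi * theta ** 2) + transitions.mean())

def inner_update(theta, transitions, lr=0.01):
    # The agent follows the gradient of the learned objective.
    # For this toy objective the gradient w.r.t. theta is 2 * phi * theta.
    grad = 2.0 * phi * theta
    return theta - lr * grad

transitions = rng.normal(size=32)   # fake batch of experience
new_theta = inner_update(theta, transitions)
```

Meta-training then consists of adjusting `phi` so that agents trained with `objective_phi` perform well, which is what makes the objective transferable to unseen environments.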

Installation

Install the following dependencies (preferably in a virtualenv):

pip3 install ray gym[all] tensorflow-gpu scipy numpy

This code base uses ray. If you would like to use multiple machines, see the ray documentation for details.
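For reference, a multi-machine ray setup is typically bootstrapped with ray's CLI. The commands below are a generic sketch, not taken from this repository: `HEAD_NODE_IP` is a placeholder, and flag names vary across ray versions (older releases use `--redis-address` instead of `--address`), so consult the ray documentation for your installed version.

```shell
# On the head node:
ray start --head

# On each worker node, pointing at the head node
# (HEAD_NODE_IP is a placeholder; 6379 is ray's default redis port):
ray start --address=HEAD_NODE_IP:6379
```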

We also make use of ray's native TensorFlow ops. Please compile them by running:

python3 -c 'import ray; from pyarrow import plasma as plasma; plasma.build_plasma_tensorflow_op()'

Meta Training

Adapt the configuration in ray_experiments.py (or use the default configuration) and run:

python3 ray_experiments.py train

By default, this requires a local machine with 4 GPUs to run 20 agents in parallel. Alternatively, skip this and download a pre-trained objective function as described below.

Meta Testing

After running meta-training (or downloading a pre-trained objective function) you can train a new agent from scratch on an environment of your choice. Optionally configure your training in ray_experiments.py, then run:

python3 ray_experiments.py test --objective TRAINING_DIRECTORY

This only requires a single GPU on your machine.

Using a pre-trained objective function

Download a pre-trained objective function:

cd ~/ray_results/metagenrl
curl URL_COMING_SOON | tar xvz

and proceed with meta testing as above. In this case your TRAINING_DIRECTORY will be pretrained-CheetahLunar.

Visualization

Many tf summaries are written during training and testing; they can be visualized with TensorBoard:

tensorboard --logdir ~/ray_results/metagenrl