Robust Meta Reinforcement Learning (RoML) with PEARL

The paper Train Hard, Fight Easy: Robust Meta Reinforcement Learning introduces RoML, a meta-algorithm that takes any meta-learning baseline algorithm and generates a robust version of it. This repo implements RoML on top of the official implementation of the PEARL meta reinforcement learning algorithm.

To implement RoML, we changed the task-sampling procedure by adding the file cross_entropy_sampler.py and using it in rlkit/core/rl_algorithm.py (search for "cem" in rl_algorithm.py to see the modifications). A minimal sketch of the idea appears below.
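Roughly, the sampler maintains a distribution over the task parameters and uses a cross-entropy-method update to shift it toward tasks that yielded low returns, so that hard tasks are over-sampled during meta-training. The following is only a minimal sketch of this idea, assuming a single scalar task parameter; the class and method names are hypothetical and do not reproduce the actual code in cross_entropy_sampler.py.

import numpy as np

class CEMTaskSampler:
    """Hypothetical sketch of a CEM-based task sampler (not the repo's actual API)."""

    def __init__(self, low, high, hard_quantile=0.2, lr=0.5, batch=20):
        # Gaussian sampling distribution over a 1-D task parameter
        # (e.g. target velocity), clipped to the valid task range.
        self.low, self.high = low, high
        self.mean = 0.5 * (low + high)
        self.std = 0.5 * (high - low)
        self.hard_quantile = hard_quantile  # fraction of tasks treated as "hard"
        self.lr = lr                        # smoothing of the distribution update
        self.batch = batch                  # tasks collected per update
        self.tasks, self.returns = [], []

    def sample(self):
        # Draw the next training task from the current distribution.
        task = float(np.clip(np.random.normal(self.mean, self.std), self.low, self.high))
        self.tasks.append(task)
        return task

    def update(self, ret):
        # Record the return obtained on the most recently sampled task.
        self.returns.append(ret)
        if len(self.returns) < self.batch:
            return
        tasks = np.array(self.tasks)
        returns = np.array(self.returns)
        # Keep the lowest-return ("hardest") tasks and move the sampling
        # distribution toward them, so they are sampled more often.
        cutoff = np.quantile(returns, self.hard_quantile)
        elite = tasks[returns <= cutoff]
        self.mean = (1 - self.lr) * self.mean + self.lr * elite.mean()
        self.std = (1 - self.lr) * self.std + self.lr * (elite.std() + 1e-3)
        self.tasks, self.returns = [], []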

See here for more details about what RoML is and how to use it in general, as well as implementations on top of other baselines.

How to run?

The API is identical to that of the original PEARL repo, with the additional flag use_cem, which switches between RoML and the PEARL baseline. To reproduce the experiments in our paper, run:

python launch_experiment.py --config configs/ENV.json --seed SEED --use_cem IS_CEM

where SEED is an integer, IS_CEM is either 0 (PEARL baseline) or 1 (RoML), and ENV is the desired environment: cheetah-vel, cheetah-mass, or cheetah-body. The results can be processed as shown in the notebook RoML-PEARL-Mujoco.ipynb.
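For example, the following command (with seed 0 chosen only for illustration) trains RoML on the cheetah-vel environment:

python launch_experiment.py --config configs/cheetah-vel.json --seed 0 --use_cem 1

Running the same command with --use_cem 0 trains the PEARL baseline on the same environment and seed, which is the comparison used in the paper's experiments.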
