
Taming MAML: Efficient unbiased meta-reinforcement learning

Reference TensorFlow implementation of Taming MAML: Efficient Unbiased Meta-Reinforcement Learning (ICML 2019). A PyTorch version will be released later.
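As background, MAML's bilevel update (an inner adaptation step, and an outer meta-gradient taken through that step) can be sketched in JAX. This is a toy supervised-regression illustration with hypothetical names, not this repository's TensorFlow RL code:

```python
import jax
import jax.numpy as jnp

def task_loss(params, x, y):
    # Hypothetical 1-D regression loss; stands in for a task's objective.
    pred = params[0] * x + params[1]
    return jnp.mean((pred - y) ** 2)

def adapt(params, x, y, inner_lr=0.1):
    # Inner loop: one gradient step on the task's own (support) data.
    grads = jax.grad(task_loss)(params, x, y)
    return params - inner_lr * grads

def meta_loss(params, tasks, inner_lr=0.1):
    # Outer loop: loss of the adapted parameters on held-out (query)
    # data, averaged over tasks.
    losses = [task_loss(adapt(params, xs, ys, inner_lr), xq, yq)
              for (xs, ys, xq, yq) in tasks]
    return jnp.mean(jnp.stack(losses))

# The meta-gradient differentiates *through* the inner update,
# which is what makes MAML a second-order method.
meta_grad = jax.grad(meta_loss)
```

In the RL setting the inner and outer losses are policy-gradient surrogates rather than regression losses, which is where the bias and variance issues addressed by TMAML come in.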

Getting started

You can use the Dockerfile to build an image that includes a conda environment called tmaml. Activate the env with:

conda activate tmaml

Alternatively, you can use tmaml.yml to create a conda env called tmaml:

conda env create -f tmaml.yml

Then activate it:

conda activate tmaml


You can use the run scripts in this repository to run reinforcement-learning experiments with the different algorithms, e.g.:

python <run_script> --env HalfCheetahRandDirecEnv
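The run scripts estimate meta-policy gradients. One of the algorithms in this repository is DICE, whose core trick is the "magic box" operator, which evaluates to 1 in the forward pass but injects the score-function gradient in the backward pass. A toy JAX sketch (my own illustration, unrelated to this repo's TensorFlow code):

```python
import jax
import jax.numpy as jnp

def magic_box(logp_sum):
    # DiCE "magic box": exp(tau - stop_grad(tau)) where tau is the sum
    # of log-probabilities of the sampled actions. Forward value is
    # exp(0) = 1; its gradient w.r.t. the parameters equals d(tau)/d(theta),
    # so multiplying a return by it yields an unbiased gradient estimator.
    return jnp.exp(logp_sum - jax.lax.stop_gradient(logp_sum))
```

For example, with a stand-in log-probability sum tau(theta) = 3*theta, magic_box(tau(theta)) evaluates to 1 while its gradient w.r.t. theta is 3, i.e. d(tau)/d(theta).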


To cite TMAML, please use:

@InProceedings{liu2019taming,
  title = {Taming {MAML}: Efficient unbiased meta-reinforcement learning},
  author = {Liu, Hao and Socher, Richard and Xiong, Caiming},
  booktitle = {Proceedings of the 36th International Conference on Machine Learning},
  pages = {4061--4071},
  year = {2019},
  editor = {Chaudhuri, Kamalika and Salakhutdinov, Ruslan},
  volume = {97},
  series = {Proceedings of Machine Learning Research},
  address = {Long Beach, California, USA},
  month = {09--15 Jun},
  publisher = {PMLR},
}

Roadmap

  • Adding TMAML
  • Adding MAML
  • Adding DICE
  • Benchmarking
  • Pytorch version


This repository is based on the ProMP repo.
