
Model-Based Policy Optimization

Code to reproduce the experiments in When to Trust Your Model: Model-Based Policy Optimization.

Installation


  1. Install MuJoCo 1.50 at ~/.mujoco/mjpro150 and copy your license key to ~/.mujoco/mjkey.txt
  2. Clone mbpo
git clone --recursive
  3. Create a conda environment and install mbpo
cd mbpo
conda env create -f environment/gpu-env.yml
conda activate mbpo
pip install -e viskit
pip install -e .

Usage


Configuration files can be found in examples/config/.

mbpo run_local examples.development --config=examples.config.halfcheetah.0 --gpus=1 --trial-gpus=1

Currently only running locally is supported.

New environments

To run on a different environment, you can modify the provided template. You will also need to provide a termination function for the environment in mbpo/static; if you name the file the lowercase version of the environment name, it will be found automatically.
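
As a sketch of what such a termination function might look like (the class and method names here mirror the existing files in mbpo/static, but treat them as assumptions if your checkout differs), an environment that never terminates early could be written as:

```python
import numpy as np

class StaticFns:

    @staticmethod
    def termination_fn(obs, act, next_obs):
        # Batched termination check: one boolean per transition in the batch.
        # This example environment never terminates early, so every entry is False.
        done = np.zeros((len(obs), 1), dtype=bool)
        return done
```

For an environment named MyEnv, this would be saved as mbpo/static/myenv.py.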


Logging

This codebase contains viskit as a submodule. You can view saved runs with:

viskit ~/ray_mbpo --port 6008

assuming you used the default log_dir.


Hyperparameters

The rollout length schedule is defined by a length-4 list in a config file. The format is [start_epoch, end_epoch, start_length, end_length], so the following:

'rollout_schedule': [20, 100, 1, 5] 

corresponds to a model rollout length linearly increasing from 1 to 5 over epochs 20 to 100.
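
The interpolation can be sketched as follows (a minimal reimplementation of the schedule logic for illustration, not the exact code in this repo):

```python
def rollout_length(schedule, epoch):
    """Linearly interpolate the model rollout length for a given epoch.

    schedule: [start_epoch, end_epoch, start_length, end_length]
    """
    start_epoch, end_epoch, start_length, end_length = schedule
    # Fraction of the way through the ramp, clipped to [0, 1] so the
    # length stays flat before start_epoch and after end_epoch.
    frac = (epoch - start_epoch) / (end_epoch - start_epoch)
    frac = min(max(frac, 0.0), 1.0)
    return int(start_length + frac * (end_length - start_length))
```

With the schedule above, epoch 10 gives length 1, epoch 60 gives length 3, and every epoch past 100 gives length 5.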

If you want to speed up training in terms of wall clock time (but possibly make the runs less sample-efficient), you can set a timeout for model training (max_model_t, in seconds) or train the model less frequently (every model_train_freq steps).
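
In a config file these knobs sit alongside the rollout schedule; the values below are illustrative, not tuned recommendations:

```python
'max_model_t': 600,        # illustrative: cap each model-training round at 600 seconds
'model_train_freq': 250,   # illustrative: retrain the model every 250 environment steps
```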

Note: This repo contains ongoing research. Minor differences between this code and the paper will be reconciled in v2.

Comparing to MBPO

If you would like to compare to MBPO but do not have the resources to re-run all experiments, the learning curves shown in Figure 2 of the paper (plus the Humanoid environment) are available in this shared folder.
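
A minimal sketch of loading one of those pickle files (the filename and key names below are assumptions; inspect the loaded object to discover the actual structure of the files in the shared folder):

```python
import pickle

# Stand-in for a downloaded results file; the real files' layout may differ.
demo = {'timesteps': [0, 1000, 2000], 'returns': [10.0, 55.0, 90.0]}
with open('demo_results.pkl', 'wb') as f:
    pickle.dump(demo, f)

# Loading mirrors what you would do with a file from the shared folder.
with open('demo_results.pkl', 'rb') as f:
    results = pickle.load(f)
```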


Reference

If you find this code useful in an academic setting, please cite:

@article{janner2019mbpo,
  author = {Michael Janner and Justin Fu and Marvin Zhang and Sergey Levine},
  title = {When to Trust Your Model: Model-Based Policy Optimization},
  journal = {arXiv preprint arXiv:1906.08253},
  year = {2019}
}


Acknowledgments

The underlying soft actor-critic implementation in MBPO comes from Tuomas Haarnoja and Kristian Hartikainen's softlearning codebase. The modeling code is a slightly modified version of Kurtland Chua's PETS implementation.
