This repository contains the open-source implementation of the representation learning and reinforcement learning method described in the following paper:

Marvin Zhang*, Sharad Vikram*, Laura Smith, Pieter Abbeel, Matthew J. Johnson, Sergey Levine.
SOLAR: Deep Structured Representations for Model-Based Reinforcement Learning.
International Conference on Machine Learning (ICML), 2019.
Project webpage

For more information on the method, please refer to the paper, blog post, and talk. For questions about this codebase, please contact the lead authors.


Setup

We use Pipenv for dependency management. To install the dependencies, run:

$ pipenv install

Important files

  • BLDS: parasol/prior/ implements our global dynamics prior
  • VAE: parasol/model/ implements our model, with modular priors including the unit Gaussian and BLDS
  • LQRFLM: parasol/control/ implements our control method

Running experiments

We provide a full example of the reacher experiment with the following files:

  • experiments/vae/reacher-image/ trains our model and saves the weights
  • experiments/solar/reacher-image/ loads the weights and runs SOLAR

Training the model is computationally expensive, slow, and does not always produce consistent results. Because of this, we also include a pretrained model in data/vae/reacher-image/weights/model-final.pkl. These weights can be loaded directly and run with the second script. Please note that training a model from scratch may lead to different policy performance, and multiple training runs may be needed to achieve good performance.
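The experiment script handles loading the weights for you; the following is only a minimal sketch of how the pretrained .pkl file could be inspected with Python's standard-library pickle module. The `load_weights` helper is hypothetical (not part of this codebase), and only the path is taken from the repository layout above.

```python
import os
import pickle

# Path to the pretrained model weights shipped with the repository.
WEIGHTS_PATH = "data/vae/reacher-image/weights/model-final.pkl"

def load_weights(path):
    """Load pickled model weights from disk; return None if the file is absent."""
    if not os.path.exists(path):
        return None
    with open(path, "rb") as f:
        return pickle.load(f)

weights = load_weights(WEIGHTS_PATH)
if weights is None:
    print("Pretrained weights not found; train the model first or check the path.")
else:
    print("Loaded weights object of type:", type(weights).__name__)
```

Note that unpickling executes arbitrary code from the file, so only load weights files from sources you trust.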

Note that this environment requires OpenAI Gym and related dependencies, such as MuJoCo and mujoco-py.

