Code for the paper "Curiosity-driven Exploration in Deep Reinforcement Learning via Bayesian Neural Networks"
Status: Archive (code is provided as-is, no updates expected)

How to run VIME

This repository implements Variational Information Maximizing Exploration (VIME), as presented in *Curiosity-driven Exploration in Deep Reinforcement Learning via Bayesian Neural Networks* by R. Houthooft, X. Chen, Y. Duan, J. Schulman, F. De Turck, and P. Abbeel (http://arxiv.org/abs/1605.09674).
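At a high level, VIME rewards the agent for transitions that are informative about the environment's dynamics: each extrinsic reward is augmented with an intrinsic bonus proportional to the information gain, measured as the KL divergence between the Bayesian dynamics model's parameter posterior before and after observing the transition. A minimal sketch of this reward shaping (the function name, arguments, and `eta` default are illustrative, not this repo's API):

```python
import numpy as np

def vime_augmented_rewards(extrinsic_rewards, info_gains, eta=1e-4):
    """Sketch of VIME-style reward augmentation.

    extrinsic_rewards : per-step rewards from the environment.
    info_gains        : per-step KL divergences between the dynamics
                        model's posterior before and after each transition
                        (assumed to be computed elsewhere by the BNN).
    eta               : trade-off between exploitation and exploration.
    """
    extrinsic_rewards = np.asarray(extrinsic_rewards, dtype=float)
    info_gains = np.asarray(info_gains, dtype=float)
    # The agent is trained on the sum: extrinsic reward plus a
    # curiosity bonus proportional to the information gain.
    return extrinsic_rewards + eta * info_gains
```

In the paper the information gain is additionally normalized (e.g. by a running median of previous KL values) to keep the bonus on a stable scale across training; the sketch above omits that step for brevity.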

To reproduce the results, you should first have rllab and MuJoCo v1.31 configured. Then, run the following commands in the root folder of rllab:

```
git submodule add -f git@github.com:openai/vime.git sandbox/vime
touch sandbox/__init__.py
```

Then you can do the following:

- Execute TRPO+VIME on the hierarchical SwimmerGather environment via `python sandbox/vime/experiments/run_trpo_expl.py`.