These are experiments examining reproducibility of policy gradient RL algorithms in continuous control domains, mainly using the rllab implementation.

Reproducibility of Benchmarked Deep Reinforcement Learning Tasks for Continuous Control

Policy gradient methods in reinforcement learning have become increasingly prevalent for state-of-the-art performance in continuous control tasks. Novel methods typically benchmark against a few key algorithms such as deep deterministic policy gradients and trust region policy optimization. As such, it is important to present and use consistent baseline experiments. However, this can be difficult due to general variance in the algorithms, hyper-parameter tuning, and environment stochasticity. We investigate and discuss: the significance of hyper-parameters in policy gradients for continuous control, general variance in the algorithms, and reproducibility of reported results. We provide guidelines on reporting novel results as comparisons against baseline methods such that future researchers can make informed decisions when investigating novel methods.
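
Since seed-induced variance is central to the analysis, reported results should aggregate over several random seeds rather than a single run. The snippet below is a minimal sketch of such aggregation; the directory layout (`results/seed_*/progress.csv`) and the `AverageReturn` column name are assumptions based on rllab-style logging, not the repository's actual scripts.

```python
# Minimal sketch: aggregate average return across several random-seed runs.
# Assumes each run wrote an rllab-style progress.csv with an "AverageReturn"
# column and the same number of iterations; the results/seed_*/ layout is
# hypothetical.
import glob

import numpy as np
import pandas as pd

runs = [pd.read_csv(path)["AverageReturn"].values
        for path in sorted(glob.glob("results/seed_*/progress.csv"))]
runs = np.stack(runs)              # shape: (n_seeds, n_iterations)

mean_curve = runs.mean(axis=0)     # mean learning curve across seeds
std_curve = runs.std(axis=0)       # per-iteration spread across seeds

print("final return: %.1f +/- %.1f" % (mean_curve[-1], std_curve[-1]))
```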

Citation

@article{islam2017reproducibility,
  title={Reproducibility of Benchmarked Deep Reinforcement Learning Tasks for Continuous Control},
  author={Islam*, Riashat and Henderson*, Peter and Gomrokchi, Maziar and Precup, Doina},
  journal={ICML 2017 Reproducibility in Machine Learning Workshop},
  year={2017},
  url={https://arxiv.org/pdf/1708.04133.pdf}
}

References

Here, we use the rllab implementations of the benchmark algorithms (DDPG and TRPO).
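
For reference, a seeded TRPO run with the standard rllab API would look roughly like the sketch below. This is not the repository's own `run_trpo.py` (whose exact interface may differ), and the environment and hyper-parameter values are illustrative rather than the paper's settings; the seed passed to `run_experiment_lite` is the quantity varied across reproducibility runs.

```python
# Minimal sketch of seeded TRPO runs using the standard rllab API; the
# environment and hyper-parameter values are illustrative, not the paper's.
from rllab.algos.trpo import TRPO
from rllab.baselines.linear_feature_baseline import LinearFeatureBaseline
from rllab.envs.gym_env import GymEnv
from rllab.envs.normalized_env import normalize
from rllab.misc.instrument import run_experiment_lite
from rllab.policies.gaussian_mlp_policy import GaussianMLPPolicy


def run_task(*_):
    env = normalize(GymEnv("HalfCheetah-v1"))
    policy = GaussianMLPPolicy(env_spec=env.spec, hidden_sizes=(64, 64))
    baseline = LinearFeatureBaseline(env_spec=env.spec)
    algo = TRPO(
        env=env,
        policy=policy,
        baseline=baseline,
        batch_size=5000,
        max_path_length=500,
        n_itr=150,
        discount=0.99,
        step_size=0.01,
    )
    algo.train()


for seed in [1, 2, 3, 4, 5]:
    # One run per seed; comparing the resulting learning curves exposes
    # the seed-induced variance discussed above.
    run_experiment_lite(
        run_task,
        n_parallel=1,
        snapshot_mode="last",
        seed=seed,
        exp_name="trpo_halfcheetah_seed_%d" % seed,
    )
```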