PyTorch implementation of Trust Region Policy Optimization

PyTorch implementation of TRPO

Try my implementation of PPO (a newer, better variant of TRPO), unless you need TRPO for some specific reason.

This is a PyTorch implementation of "Trust Region Policy Optimization (TRPO)".

The code is mostly ported from the original implementation by John Schulman. In contrast to another PyTorch implementation of TRPO, this implementation uses an exact Hessian-vector product instead of a finite-differences approximation.
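The exact Hessian-vector product can be computed with double backpropagation (Pearlmutter's trick): differentiate the loss once with `create_graph=True`, dot the gradient with the vector, and differentiate again. A minimal sketch of the idea (the function name and damping value are illustrative, not this repo's API; in TRPO the "loss" would be the KL divergence between old and new policies):

```python
import torch

def hessian_vector_product(loss, params, vector, damping=0.1):
    """Exact Hessian-vector product H @ v via double backprop,
    without ever materializing the Hessian H of `loss` w.r.t. `params`.
    `damping` adds a multiple of the identity for numerical stability,
    as is common in TRPO's conjugate-gradient step."""
    grads = torch.autograd.grad(loss, params, create_graph=True)
    flat_grad = torch.cat([g.reshape(-1) for g in grads])
    grad_v = (flat_grad * vector).sum()           # scalar: g^T v
    hvp = torch.autograd.grad(grad_v, params)     # d(g^T v)/d(theta) = H v
    flat_hvp = torch.cat([h.reshape(-1) for h in hvp])
    return flat_hvp + damping * vector

# Tiny check on a quadratic: loss = 0.5 * x^T A x has Hessian A.
x = torch.randn(3, requires_grad=True)
A = torch.diag(torch.tensor([2., 3., 4.]))
loss = 0.5 * x @ A @ x
v = torch.ones(3)
print(hessian_vector_product(loss, [x], v, damping=0.0))  # → tensor([2., 3., 4.]) = A @ v
```

Unlike the finite-differences approximation, this costs one extra backward pass but introduces no step-size error.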


Contributions are very welcome. If you know how to make this code better, don't hesitate to send a pull request.


python --env-name "Reacher-v1"

Recommended hyperparameters

InvertedPendulum-v1: 5000

Reacher-v1, InvertedDoublePendulum-v1: 15000

HalfCheetah-v1, Hopper-v1, Swimmer-v1, Walker2d-v1: 25000

Ant-v1, Humanoid-v1: 50000
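The recommendations above (presumably the per-update batch size, i.e. the number of environment steps collected per TRPO update; which flag they map to is an assumption, not stated here) can be kept in a small lookup table:

```python
# Values quoted in the list above; the interpretation as a per-update
# batch size and the fallback default are assumptions for illustration.
RECOMMENDED_BATCH_SIZE = {
    "InvertedPendulum-v1": 5000,
    "Reacher-v1": 15000,
    "InvertedDoublePendulum-v1": 15000,
    "HalfCheetah-v1": 25000,
    "Hopper-v1": 25000,
    "Swimmer-v1": 25000,
    "Walker2d-v1": 25000,
    "Ant-v1": 50000,
    "Humanoid-v1": 50000,
}

def batch_size_for(env_name, default=15000):
    """Look up the recommended value, falling back to a middling default."""
    return RECOMMENDED_BATCH_SIZE.get(env_name, default)

print(batch_size_for("Hopper-v1"))  # → 25000
```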


Results are more or less similar to those of the original code. A detailed comparison is coming soon.


  • Plots.
  • Collect data in multiple threads.
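The second TODO item could be sketched with a thread pool: each worker collects a slice of the batch, and the slices are concatenated for the TRPO update. Everything below is a hypothetical sketch, not code from this repo; a real worker would step a Gym environment with a copy of the current policy instead of sampling random numbers (and CPU-bound simulators may prefer `multiprocessing` over threads because of the GIL):

```python
from concurrent.futures import ThreadPoolExecutor
import random

def rollout_worker(seed, steps):
    """Hypothetical worker: collect `steps` (state, action, reward) tuples.
    Random numbers stand in for environment transitions here."""
    rng = random.Random(seed)
    return [(rng.random(), rng.random(), rng.random()) for _ in range(steps)]

def collect_parallel(num_workers=4, steps_per_worker=250):
    """Run the workers concurrently and concatenate their transitions
    into one batch."""
    with ThreadPoolExecutor(max_workers=num_workers) as pool:
        futures = [pool.submit(rollout_worker, seed, steps_per_worker)
                   for seed in range(num_workers)]
        return [t for f in futures for t in f.result()]

batch = collect_parallel()
print(len(batch))  # → 1000
```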