
Trust Region Policy Optimization

Background

(Previously: Background for VPG)

TRPO updates policies by taking the largest step possible to improve performance, while satisfying a special constraint on how close the new and old policies are allowed to be. The constraint is expressed in terms of KL-Divergence, a measure of (something like, but not exactly) distance between probability distributions.

This is different from normal policy gradient, which keeps new and old policies close in parameter space. But even seemingly small differences in parameter space can have very large differences in performance---so a single bad step can collapse the policy performance. This makes it dangerous to use large step sizes with vanilla policy gradients, thus hurting its sample efficiency. TRPO nicely avoids this kind of collapse, and tends to quickly and monotonically improve performance.

Quick Facts

  • TRPO is an on-policy algorithm.
  • TRPO can be used for environments with either discrete or continuous action spaces.
  • The Spinning Up implementation of TRPO supports parallelization with MPI.

Key Equations

Let \pi_{\theta} denote a policy with parameters \theta. The theoretical TRPO update is:

\theta_{k+1} = \arg \max_{\theta} \; {\mathcal L}(\theta_k, \theta) \\
\text{s.t.} \; \bar{D}_{KL}(\theta || \theta_k) \leq \delta

where {\mathcal L}(\theta_k, \theta) is the surrogate advantage, a measure of how policy \pi_{\theta} performs relative to the old policy \pi_{\theta_k} using data from the old policy:

{\mathcal L}(\theta_k, \theta) = \underE{s,a \sim \pi_{\theta_k}}{
    \frac{\pi_{\theta}(a|s)}{\pi_{\theta_k}(a|s)} A^{\pi_{\theta_k}}(s,a)
    },

and \bar{D}_{KL}(\theta || \theta_k) is an average KL-divergence between policies across states visited by the old policy:

\bar{D}_{KL}(\theta || \theta_k) = \underE{s \sim \pi_{\theta_k}}{
    D_{KL}\left(\pi_{\theta}(\cdot|s) || \pi_{\theta_k} (\cdot|s) \right)
}.
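
For concreteness, here is a minimal NumPy sketch of how these two quantities can be estimated from a batch of data collected with \pi_{\theta_k}. The names (logp_new, logp_old, adv, and the categorical probability arrays) are illustrative, not taken from the Spinning Up code:

    import numpy as np

    def surrogate_advantage(logp_new, logp_old, adv):
        """Sample estimate of L(theta_k, theta): importance-weighted advantages."""
        ratio = np.exp(logp_new - logp_old)   # pi_theta(a|s) / pi_theta_k(a|s)
        return np.mean(ratio * adv)

    def mean_kl_categorical(p_new, p_old):
        """Average of KL(pi_theta(.|s) || pi_theta_k(.|s)) over sampled states,
        for categorical policies; p_new and p_old have shape (batch, n_actions)."""
        kl_per_state = np.sum(p_new * (np.log(p_new) - np.log(p_old)), axis=1)
        return np.mean(kl_per_state)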

You Should Know

The objective and constraint are both zero when \theta = \theta_k. Furthermore, the gradient of the constraint with respect to \theta is zero when \theta = \theta_k. Proving these facts requires some subtle command of the relevant math---it's an exercise worth doing, whenever you feel ready!

The theoretical TRPO update isn't the easiest to work with, so TRPO makes some approximations to get an answer quickly. We Taylor expand the objective and constraint to leading order around \theta_k:

{\mathcal L}(\theta_k, \theta) \approx g^T (\theta - \theta_k) \\
\bar{D}_{KL}(\theta || \theta_k) \approx \frac{1}{2} (\theta - \theta_k)^T H (\theta - \theta_k)

resulting in an approximate optimization problem,

\theta_{k+1} = \arg \max_{\theta} \; g^T (\theta - \theta_k) \\
\text{s.t.} \; \frac{1}{2} (\theta - \theta_k)^T H (\theta - \theta_k) \leq \delta.

You Should Know

By happy coincidence, the gradient g of the surrogate advantage function with respect to \theta, evaluated at \theta = \theta_k, is exactly equal to the policy gradient, \nabla_{\theta} J(\pi_{\theta})! Try proving this, if you feel comfortable diving into the math.

This approximate problem can be analytically solved by the methods of Lagrangian duality [1], yielding the solution:

\theta_{k+1} = \theta_k + \sqrt{\frac{2 \delta}{g^T H^{-1} g}} H^{-1} g.
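
For completeness, here is a brief sketch of that duality argument, writing s = \theta - \theta_k. The Lagrangian of the approximate problem is

{\mathcal L}(s, \lambda) = g^T s - \lambda \left( \frac{1}{2} s^T H s - \delta \right),

and stationarity in s gives g = \lambda H s, so s = \frac{1}{\lambda} H^{-1} g. Substituting into the active constraint \frac{1}{2} s^T H s = \delta yields \lambda = \sqrt{g^T H^{-1} g / (2 \delta)}, which produces the step above.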

If we were to stop here, and just use this final result, the algorithm would be exactly calculating the Natural Policy Gradient. A problem is that, due to the approximation errors introduced by the Taylor expansion, this may not satisfy the KL constraint, or actually improve the surrogate advantage. TRPO adds a modification to this update rule: a backtracking line search,

\theta_{k+1} = \theta_k + \alpha^j \sqrt{\frac{2 \delta}{g^T H^{-1} g}} H^{-1} g,

where \alpha \in (0,1) is the backtracking coefficient, and j is the smallest nonnegative integer such that \pi_{\theta_{k+1}} satisfies the KL constraint and produces a positive surrogate advantage.
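
A minimal sketch of that backtracking loop is below. The helpers set_params, surrogate, and mean_kl are placeholders for whatever your implementation provides (they are not the Spinning Up API), and full_step stands for \sqrt{2 \delta / g^T H^{-1} g} \, H^{-1} g:

    def backtracking_line_search(set_params, surrogate, mean_kl, theta_k,
                                 full_step, delta, alpha=0.8, max_backtracks=10):
        """Shrink the natural gradient step until the KL constraint is satisfied
        and the surrogate advantage is positive."""
        for j in range(max_backtracks):
            theta_new = theta_k + (alpha ** j) * full_step
            set_params(theta_new)                     # load candidate parameters
            if mean_kl() <= delta and surrogate() > 0:
                return theta_new                      # accept the largest safe step
        set_params(theta_k)                           # no acceptable step: keep old policy
        return theta_k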

Lastly: computing and storing the matrix inverse, H^{-1}, is painfully expensive when dealing with neural network policies with thousands or millions of parameters. TRPO sidesteps the issue by using the conjugate gradient algorithm to solve Hx = g for x = H^{-1} g, requiring only a function which can compute the matrix-vector product Hx instead of computing and storing the whole matrix H directly. This is not too hard to do: we set up a symbolic operation to calculate

Hx = \nabla_{\theta} \left( \left(\nabla_{\theta} \bar{D}_{KL}(\theta || \theta_k)\right)^T x \right),

which gives us the correct output without computing the whole matrix.
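
A sketch of the conjugate gradient solver is below; it only ever touches H through a black-box hvp(x) function, which is assumed to compute the matrix-vector product above (for example, via automatic differentiation):

    import numpy as np

    def conjugate_gradient(hvp, g, iters=10, tol=1e-10):
        """Approximately solve H x = g using only Hessian-vector products."""
        x = np.zeros_like(g)
        r = g.copy()                       # residual g - Hx (x starts at zero)
        p = g.copy()                       # search direction
        r_dot = r.dot(r)
        for _ in range(iters):
            Hp = hvp(p)
            alpha = r_dot / (p.dot(Hp) + 1e-8)
            x += alpha * p
            r -= alpha * Hp
            new_r_dot = r.dot(r)
            if new_r_dot < tol:
                break
            p = r + (new_r_dot / r_dot) * p
            r_dot = new_r_dot
        return x                           # x is approximately H^{-1} g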

[1] See Convex Optimization by Boyd and Vandenberghe, especially chapters 2 through 5.

Exploration vs. Exploitation

TRPO trains a stochastic policy in an on-policy way. This means that it explores by sampling actions according to the latest version of its stochastic policy. The amount of randomness in action selection depends on both initial conditions and the training procedure. Over the course of training, the policy typically becomes progressively less random, as the update rule encourages it to exploit rewards that it has already found. This may cause the policy to get trapped in local optima.

Pseudocode
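
In outline (a summary assembled from the steps described above, rather than the exact figure from the paper), each TRPO iteration does the following:

  1. Collect a set of trajectories by running the current policy \pi_{\theta_k} in the environment.
  2. Compute rewards-to-go and advantage estimates A^{\pi_{\theta_k}} based on the current value function (Spinning Up uses Generalized Advantage Estimation).
  3. Estimate the policy gradient g, the gradient of the surrogate advantage at \theta_k.
  4. Use the conjugate gradient algorithm to compute x \approx H^{-1} g, where H is the Hessian of the average KL-divergence.
  5. Update the policy parameters with the backtracking line search, \theta_{k+1} = \theta_k + \alpha^j \sqrt{2 \delta / (x^T H x)} \, x.
  6. Fit the value function by regression on the observed returns.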

Documentation

.. autofunction:: spinup.trpo


Saved Model Contents

The computation graph saved by the logger includes:

Key    Value
x      Tensorflow placeholder for state input.
pi     Samples an action from the agent, conditioned on states in x.
v      Gives value estimate for states in x.

This saved model can be accessed either by

  • running the trained policy with the test_policy.py tool,
  • or loading the whole saved graph into a program with restore_tf_graph.
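
As a minimal sketch of the second option, assuming the restore_tf_graph helper from spinup.utils.logx and an illustrative save path and observation size:

    import numpy as np
    import tensorflow as tf
    from spinup.utils.logx import restore_tf_graph

    # Illustrative path: point this at the simple_save directory written by the logger.
    fpath = 'path/to/output_dir/simple_save'

    sess = tf.Session()
    model = restore_tf_graph(sess, fpath)    # dict keyed as in the table above

    x_ph, pi, v = model['x'], model['pi'], model['v']

    # Query the policy for a single (illustrative) observation.
    obs = np.zeros(11, dtype=np.float32)
    action = sess.run(pi, feed_dict={x_ph: obs.reshape(1, -1)})[0]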

References

Relevant Papers

  • Trust Region Policy Optimization, Schulman et al. 2015
  • High-Dimensional Continuous Control Using Generalized Advantage Estimation, Schulman et al. 2016
  • Approximately Optimal Approximate Reinforcement Learning, Kakade and Langford 2002

Why These Papers?

Schulman 2015 is included because it is the original paper describing TRPO. Schulman 2016 is included because our implementation of TRPO makes use of Generalized Advantage Estimation for computing the policy gradient. Kakade and Langford 2002 is included because it contains theoretical results which motivate and deeply connect to the theoretical foundations of TRPO.

Other Public Implementations