Investigate occasional NaN in TRPO #26

Closed
michaelschaarschmidt opened this issue Jul 13, 2017 · 3 comments

@michaelschaarschmidt
Contributor

TRPO occasionally fails to produce a robust update, with the Lagrange multiplier becoming NaN; we need to check whether the gradient computation can produce NaNs.

@trickmeyer
Contributor

trickmeyer commented Jul 14, 2017

@michaelschaarschmidt w.r.t. your comment in gitter:

In general, TRPO has two potential instabilities: the gradient computation on the FVP (Fisher-vector product) and the conjugate gradient. However, the CG + line search should fail gracefully by not updating when it fails to find an improved solution.

I'm not seeing it fail gracefully when this is hit. It seems as though a NaN or other unstable update may be making its way through the graph update, as I see my agent behavior change significantly whenever this is encountered.

/home/tom/src/tensorforce/tensorforce/models/trpo_model.py:161: RuntimeWarning: invalid value encountered in sqrt
  lagrange_multiplier = np.sqrt(shs / self.max_kl_divergence)

Here are some results from my custom environment (doing nothing yields zero reward; episodes are capped at 100 steps). The agent essentially stops acting after encountering this:

Finished episode 2060 after 26 timesteps (reward: 2.13)
Finished episode 2061 after 5 timesteps (reward: 2.02)
Finished episode 2062 after 17 timesteps (reward: 2.09)
Finished episode 2063 after 65 timesteps (reward: 2.53)
Finished episode 2064 after 100 timesteps (reward: -4.03)
Finished episode 2065 after 21 timesteps (reward: 2.08)
Finished episode 2066 after 11 timesteps (reward: 2.03)
Finished episode 2067 after 28 timesteps (reward: 2.08)
Finished episode 2068 after 21 timesteps (reward: 3.2)
Finished episode 2069 after 5 timesteps (reward: 2.03)
Finished episode 2070 after 53 timesteps (reward: 2.11)
Finished episode 2071 after 15 timesteps (reward: 2.02)
Finished episode 2072 after 8 timesteps (reward: 2.02)
Finished episode 2073 after 26 timesteps (reward: 3.1)
Finished episode 2074 after 4 timesteps (reward: 2.03)
Finished episode 2075 after 100 timesteps (reward: -8.07)
Finished episode 2076 after 26 timesteps (reward: 2.25)
Finished episode 2077 after 6 timesteps (reward: 3.05)
Finished episode 2078 after 14 timesteps (reward: 3.07)
Finished episode 2079 after 54 timesteps (reward: 4.01)
Finished episode 2080 after 11 timesteps (reward: 3.04)
Finished episode 2081 after 100 timesteps (reward: -13.16)
Finished episode 2082 after 63 timesteps (reward: 2.37)
Finished episode 2083 after 18 timesteps (reward: 3.05)
Finished episode 2084 after 27 timesteps (reward: 2.02)
Finished episode 2085 after 3 timesteps (reward: 2.02)
/home/tom/src/tensorforce/tensorforce/models/trpo_model.py:161: RuntimeWarning: invalid value encountered in sqrt
lagrange_multiplier = np.sqrt(shs / self.max_kl_divergence)
Finished episode 2086 after 100 timesteps (reward: 4.0)
Finished episode 2087 after 100 timesteps (reward: 0.0)
Finished episode 2088 after 100 timesteps (reward: 0.0)
Finished episode 2089 after 100 timesteps (reward: 0.0)
Finished episode 2090 after 100 timesteps (reward: 0.0)
Finished episode 2091 after 100 timesteps (reward: 0.0)
Finished episode 2092 after 100 timesteps (reward: 0.0)
Finished episode 2093 after 100 timesteps (reward: 0.0)
Finished episode 2094 after 100 timesteps (reward: 0.0)
Finished episode 2095 after 100 timesteps (reward: 0.0)

@befelix
Contributor

befelix commented Jul 14, 2017

So the easiest hack that works for now is to check in the code whether shs is smaller than zero and, if it is, ignore the batch. It's not a permanent solution, but it makes the algorithm usable.
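
A minimal sketch of that guard as a standalone helper (the names `shs` and `max_kl_divergence` follow the snippet in the warning above; everything else is illustrative, not the actual tensorforce code):

```python
import numpy as np

def safe_lagrange_multiplier(shs, max_kl_divergence):
    """Return the TRPO Lagrange multiplier, or None to signal 'skip this batch'.

    shs is the quadratic form s^T H s from the conjugate-gradient step. It should
    be positive, but numerical issues can make it negative or non-finite, in which
    case np.sqrt would produce NaN and corrupt the policy update.
    """
    if not np.isfinite(shs) or shs <= 0.0:
        return None
    return np.sqrt(shs / max_kl_divergence)

# Inside the update step one would then do something like:
# lagrange_multiplier = safe_lagrange_multiplier(shs, self.max_kl_divergence)
# if lagrange_multiplier is None:
#     return  # keep the previous policy parameters for this batch
```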

Another thing I noticed for continuous action spaces is that the standard deviation of the Gaussian (exploration) noise is not parameterized. That seems like a bad default for this kind of on-policy method. It looks like an easy fix, since the required code in the Gaussian class is just commented out, but enabling it does not seem possible without low-level adjustments at the moment.
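
For reference, a generic sketch of a state-independent but learnable log standard deviation (plain TF 1.x style, not the actual tensorforce Gaussian class; `hidden` and `action_size` are placeholders):

```python
import tensorflow as tf

hidden = tf.placeholder(tf.float32, shape=(None, 64))  # policy network features
action_size = 2

# Mean comes from the network; log-std is a free, trainable variable so the
# exploration noise can shrink as the policy improves.
mean = tf.layers.dense(hidden, action_size, activation=None)
log_std = tf.get_variable('log_std', shape=(action_size,),
                          initializer=tf.zeros_initializer(), trainable=True)
std = tf.exp(log_std)
action = mean + std * tf.random_normal(tf.shape(mean))
```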

@michaelschaarschmidt
Contributor Author

So I have a hard time reliably reproducing this (I saw it once in 20 runs on Python 3.6, never on 2.7), which makes it difficult to debug. In any case, the update is now skipped when shs < 0.
