Testing should not be deterministic #12

Closed
marintoro opened this issue Jan 17, 2018 · 8 comments

@marintoro

There is a parameter --evaluation-episodes, but in the current implementation, since we are always acting greedily, all the episodes are going to be exactly the same. I think that to get a better testing evaluation, you should add a deterministic=False option when testing (i.e. instead of taking the action with the highest Q-value, you could sample over all the actions with the Q-values as probabilities).

I implemented that on my branch in the last commit, marintoro/Rainbow@d061caf (it's really straightforward)
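
For concreteness, here's a minimal sketch of what such a deterministic=False option could look like (not necessarily the exact code in the commit above; a softmax is used here because raw Q-values can be negative, so they can't be used as probabilities directly):

```python
import torch
import torch.nn.functional as F

def select_action(q_values: torch.Tensor, deterministic: bool = True) -> int:
    """Pick an action from a 1-D tensor of Q-values.

    deterministic=True  -> greedy argmax (the current test-time behaviour).
    deterministic=False -> sample an action with probabilities given by a
                           softmax over the Q-values.
    """
    if deterministic:
        return q_values.argmax().item()
    probs = F.softmax(q_values, dim=0)
    return torch.multinomial(probs, num_samples=1).item()
```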

Btw I launched a training run last night and everything worked properly. But I don't have access to a powerful computer yet, so the agent's performance was still pretty poor (it was only in the early stages of training). I just wanted to know whether you have already launched a big training run, on which game, and whether you compared it to a standard DRL algorithm (plain DQN, for example)?
There may still be some non-breaking errors in the implementation that would be sneaky to spot and debug (I mean, if the agent learns worse than plain DQN, for example, something must be wrong).

@Kaixhin
Owner

Kaixhin commented Jan 17, 2018

I set off a run on Space Invaders last night - it's one where Rainbow is clearly better than the alternatives, but it'll take a few days to get to the point where I can tell whether that's the case or not. Out of the previous runs I've made, making sure that transitions next to the buffer's current write index aren't sampled seemed like an important fix, but I've never run anything for that long. You can have a look at the training curves in the paper to see if any other game might be useful to look at.

Non-deterministic evaluation does sound good, but I'm wondering why the random no-ops in the environment wouldn't provide a "stochastic" environment - it could well be that they're just not providing enough stochasticity. Also, I'm not sure whether sampling via Q-values or simply taking a new draw of weights from the NoisyLinear layers is the better way to go?
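
As a rough illustration of the second option, re-drawing the noisy weights once per evaluation episode might look something like the sketch below - it assumes the network exposes a reset_noise() method on its NoisyLinear layers (a common pattern in NoisyNet implementations) and uses a purely schematic environment loop:

```python
def evaluate_episode(env, dqn):
    """Schematic evaluation loop: one fresh draw of noisy weights per episode."""
    dqn.reset_noise()                       # assumed: re-samples NoisyLinear noise
    state, done, total_reward = env.reset(), False, 0.0
    while not done:
        q_values = dqn(state.unsqueeze(0))  # assumed: network returns per-action Q-values
        action = q_values.argmax(1).item()  # still greedy w.r.t. the sampled weights
        state, reward, done = env.step(action)  # schematic step signature
        total_reward += reward
    return total_reward
```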

@marintoro
Author

OK. On my side I will launch a training run on Breakout as a little sanity check, since I think it's easier to see whether the agent has really learned something or is just playing randomly - in Space Invaders it's pretty easy to be convinced that a fully random agent is playing pretty well ^^

Concerning sampling via Q-values versus just taking new weights for the Noisy layers, I really don't know; we should maybe try both and compare (Q-value sampling may lead to way too much exploration, but on the other hand, in the late stages of training the agent may have learned to ignore all the incoming noise from the Noisy layers...).

@Kaixhin
Owner

Kaixhin commented Jan 17, 2018

I think it's difficult for a random agent to do really well at Space Invaders. In any case, I plot Q-values on a held-out validation memory, and that's somewhat informative as to learning. Let me know how sampling via Q-values goes - I've had a skim through the DM papers and they seem to average results over many testing episodes, but I'm not sure I see anything different about NoisyNet evaluation - without it you'd take a uniformly random action with a very low probability.
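
For reference, the held-out validation idea can be as simple as tracking the average maximum Q-value over a fixed batch of states that is collected once and reused at every evaluation (the names below are illustrative, not the repository's actual identifiers):

```python
import torch

@torch.no_grad()
def average_max_q(q_network, val_states):
    # val_states: a fixed tensor of held-out states, shape (N, *state_shape),
    # reused at every evaluation so the metric is comparable across time.
    q_values = q_network(val_states)                 # assumed shape: (N, num_actions)
    return q_values.max(dim=1).values.mean().item()
```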

@marintoro
Author

So is your training run on Space Invaders doing better than just a random agent now? ^^
The one I launched last night on Breakout didn't manage to learn anything (but I think I may have had an error in the reset function for Breakout).
I'm launching a run on Pong now to really sanity-check whether my agent can learn anything at all.

@Kaixhin
Owner

Kaixhin commented Jan 19, 2018

Looks reasonable so far. The Q-values increased rapidly, and have now stabilised (looking very similar to the values of the Double DQN).

[Plot: Q-values over training]

The reward itself is clearly increasing (noisily, but at a reasonable level - not one at which I'd say there's definitely a problem). It's pretty much at the level of a trained Double DQN at about 1/3 of the training steps - but of course according to the Rainbow paper the score only really takes off after the halfway mark (and even then many runs may not work out so well, so even if this fails after the full run it's unfortunately not conclusive).

[Plot: evaluation reward over training]

@marintoro
Author

Hmm, OK, that seems really nice to me and definitely working! (Did you add non-deterministic testing, e.g. by using new weights in the Noisy layers?)
I only did 5M steps on Breakout; maybe that wasn't enough to see any progress at all (or maybe I just have some bugs - I'll look into this further next week).

@Kaixhin
Owner

Kaixhin commented Jan 19, 2018

I ran this as soon as I got in the last few fixes, so testing is completely deterministic.

If DM followed their previous evaluation protocol, then we should actually use an ε-greedy policy with ε = 0.001 (the quote below is from the Double DQN paper on DQN evaluation, but later on they mention using a lower ε):

The learned policies are evaluated for 5 mins of emulator time (18,000 frames) with an ε-greedy policy where ε = 0.05. The scores are averaged over 100 episodes.

So if you're able to do quick tests for evaluation (perhaps on Pong), the first thing would be to see whether using 100 instead of 10 evaluation episodes introduces some variance. Otherwise, given that the network is trained to maximise reward even with the noisy layers, taking different draws of weights seems like a better (albeit non-backwards-compatible) way of evaluating it.
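
For comparison, the ε-greedy evaluation policy from the quote is straightforward - a hedged sketch with ε = 0.001 (the function and argument names are illustrative):

```python
import random

EVAL_EPSILON = 0.001  # the lower ε mentioned above; the quoted DQN protocol uses 0.05

def epsilon_greedy_action(q_values, num_actions, epsilon=EVAL_EPSILON):
    # With probability ε take a uniformly random action; otherwise act greedily.
    if random.random() < epsilon:
        return random.randrange(num_actions)
    return int(q_values.argmax())
```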

@Kaixhin
Owner

Kaixhin commented Jan 20, 2018

Closing this issue, as injecting even a small amount of noise via ε-greedy gives a sufficient distribution over test performance, and it is (AFAIK) DM's standard method of evaluating DQN variants.
