
Double-Dueling-DQN stops learning #63

Open
florath opened this issue Mar 29, 2018 · 0 comments
florath commented Mar 29, 2018

Running the Double-Dueling-DQN code results in a network that stops learning after about 2000 episodes, i.e. the game results do not improve. I have now run the GridWorld example four separate times and tried adapting the parameters; all runs show mostly the same picture: the network has a 'good' learning curve at the beginning and then stops learning.
For some results see:

  1. run001
  2. run002
  3. run003
  4. run004

I also tried Breakout-v0, with mostly the same result.

Does anybody have an idea? Which parameters can be adapted to get better results?
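For reference, a minimal sketch of the two hyperparameters that most often cause this kind of plateau in DQN training: the epsilon annealing schedule (exploration can die out too early) and the target-network update rate. The function names and default values below are illustrative assumptions, not taken from the repository's code.

```python
def epsilon_at(step, start_e=1.0, end_e=0.1, annealing_steps=10000):
    """Linearly anneal the exploration rate from start_e to end_e.

    If annealing_steps is too small, the agent stops exploring early
    and the learning curve can flatten, as described above.
    (All names/values here are illustrative assumptions.)
    """
    frac = min(step / annealing_steps, 1.0)
    return start_e + frac * (end_e - start_e)


def soft_update(target_weights, main_weights, tau=0.001):
    """Move target-network weights a small step toward the main network.

    A larger tau tracks the main network faster but can destabilize
    the bootstrapped Q-targets; a smaller tau is slower but steadier.
    """
    return [t + tau * (m - t) for t, m in zip(target_weights, main_weights)]


# Usage sketch: extending annealing_steps keeps exploration alive
# well past episode 2000, which is one common fix for early plateaus.
eps_early = epsilon_at(2000, annealing_steps=50000)   # still well above end_e
eps_late = epsilon_at(60000, annealing_steps=50000)   # clamped at end_e
```

Increasing `annealing_steps`, lowering the learning rate, or reducing `tau` are the usual first knobs to try; which one helps depends on the environment.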
