Add reinforcement learning #175

Merged
merged 15 commits into liquidcarrot:add-reinforcement-learning on Nov 1, 2019
Conversation

raimannma
Member

Fully implemented DQN

More test methods are needed!

raimannma and others added 15 commits October 28, 2019 19:55
add a new util class "Window" that works similarly to Java's ArrayDeque
add training mode for the DQN
greatly improve performance
allow multiple hidden layers
The sign for the epsilon action comparison was reversed. Epsilon should begin as a high number, which leads to a high exploration rate at first: by exploring heavily, the agent can discover new states instead of following its (initially poor) judgement about what the best action is. A decay function then decreases epsilon over time, so the agent trusts its judgement more as it gains experience.
To be as beginner friendly as possible, we rename the 'epsilon'-based property names to 'explore'. People who know what epsilon means should recognize that 'explore' is an equivalent term, but the reverse may not be true.
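The "Window" class mentioned in the first commit is not shown in this conversation, but based on the ArrayDeque comparison it is presumably a bounded, first-in-first-out buffer (useful in a DQN for tracking recent rewards or experiences). A minimal sketch under that assumption follows; the method names here are illustrative, not Carrot's actual API:

```javascript
// Sketch of a fixed-size "Window" utility, assuming it behaves like a
// bounded deque (similar to Java's ArrayDeque): items are pushed to the
// back, and the oldest item is evicted once the window is full.
class Window {
  constructor(size) {
    this.size = size;
    this.items = [];
  }

  // Add an item; drop the oldest one when capacity is exceeded
  add(item) {
    this.items.push(item);
    if (this.items.length > this.size) this.items.shift();
  }

  length() {
    return this.items.length;
  }

  // Average of the stored values, e.g. for smoothing recent rewards
  average() {
    if (this.items.length === 0) return 0;
    return this.items.reduce((a, b) => a + b, 0) / this.items.length;
  }
}

// Usage: a window of size 3 keeps only the most recent 3 values
const w = new Window(3);
[1, 2, 3, 4].forEach(v => w.add(v));
console.log(w.items);     // [2, 3, 4]
console.log(w.average()); // 3
```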
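The epsilon (explore) mechanism described in the sign-fix commit can be sketched as follows. This is an illustration of the general epsilon-greedy-with-decay technique, not Carrot's actual implementation; all function names and default values here are assumptions:

```javascript
// Epsilon-greedy action selection with the comparison oriented as the
// commit describes: explore when a random draw falls BELOW epsilon, so a
// high initial epsilon means heavy exploration.
function chooseAction(qValues, epsilon) {
  if (Math.random() < epsilon) {
    // Explore: pick a uniformly random action
    return Math.floor(Math.random() * qValues.length);
  }
  // Exploit: pick the action with the highest estimated Q-value
  return qValues.indexOf(Math.max(...qValues));
}

// Multiplicative decay toward a floor, so the agent gradually shifts
// from exploring to trusting its own judgement.
function decayEpsilon(epsilon, decay = 0.995, epsilonMin = 0.01) {
  return Math.max(epsilonMin, epsilon * decay);
}

let epsilon = 1.0; // start fully exploratory
for (let step = 0; step < 500; step++) {
  epsilon = decayEpsilon(epsilon); // shrinks toward 0.01 over time
}
```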
@christianechevarria christianechevarria merged commit 1b27357 into liquidcarrot:add-reinforcement-learning Nov 1, 2019