Motivation: I have always thought that the only way to truly test whether you understand a concept is to see if you can build it. As such, all of these algorithms were implemented by studying the relevant papers and coded up to test my understanding.
"What I cannot create, I do not understand" - Richard Feynman
- Vanilla DQN
- Noisy DQN
- Dueling DQN
- Double DQN
- Prioritised Experience Replay DQN
- Rainbow DQN
- Advantage Actor Critic (A2C) - single environment
- Advantage Actor Critic (A2C) - multi environment
- Deep Deterministic Policy Gradients
- Proximal Policy Optimisation (discrete and continuous)
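All of the DQN variants above share the same core temporal-difference update; the variants mainly change how the targets, the replay sampling, or the network heads are built. Below is a minimal sketch of that update, assuming a PyTorch setup; names like `q_net`, `target_net` and the batch layout are illustrative only and are not the classes used in this repo's notebooks.

```python
import torch
import torch.nn.functional as F

def dqn_update(q_net, target_net, optimizer, batch, gamma=0.99):
    """One TD update step in the style of vanilla DQN (illustrative sketch)."""
    # batch tensors: actions are int64 indices, dones are 0/1 floats
    states, actions, rewards, next_states, dones = batch

    # Q(s, a) for the actions actually taken in the batch
    q_values = q_net(states).gather(1, actions.unsqueeze(1)).squeeze(1)

    # Bootstrapped target: r + gamma * max_a' Q_target(s', a'), zeroed at terminal states
    with torch.no_grad():
        next_q = target_net(next_states).max(dim=1).values
        targets = rewards + gamma * next_q * (1.0 - dones)

    loss = F.mse_loss(q_values, targets)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```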
These were mainly referenced from an excellent lecture series by Colin Skow on YouTube [link]. A large part was also drawn from the Udacity Deep Reinforcement Learning course.
- Bellman Equation
- Dynamic Programming
- Q learning
- Tutorial on PPO: A Graphic Guide to Implementing PPO for Atari Games
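For the tabular topics above (Bellman equation, dynamic programming, Q-learning), everything builds on the Bellman backup applied to a Q-table. Here is a minimal sketch of tabular Q-learning, assuming a discrete classic-gym-style environment where `env.step` returns `(obs, reward, done, info)`; it is not code from this repo.

```python
import numpy as np

def q_learning(env, episodes=500, alpha=0.1, gamma=0.99, epsilon=0.1):
    """Tabular Q-learning on a discrete gym-style environment (illustrative sketch)."""
    Q = np.zeros((env.observation_space.n, env.action_space.n))
    for _ in range(episodes):
        state, done = env.reset(), False
        while not done:
            # epsilon-greedy action selection
            if np.random.rand() < epsilon:
                action = env.action_space.sample()
            else:
                action = int(np.argmax(Q[state]))
            next_state, reward, done, _ = env.step(action)
            # Bellman update: move Q(s, a) towards r + gamma * max_a' Q(s', a')
            td_target = reward + gamma * np.max(Q[next_state]) * (not done)
            Q[state, action] += alpha * (td_target - Q[state, action])
            state = next_state
    return Q
```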
- Converged to an average score of 17.56 after 1300 episodes.
- Code and results can be found under
DQN/7. Vanilla DQN Atari.ipynb
- Converged to ~-270 after ~100 episodes
- Code and results can be found under
Policy Gradient/4. DDPG.ipynb
- Solved in 409 episodes
- Code and results can be found under
Policy Gradient/5. PPO.ipynb
- Code and results can be found under
PPO/
- Curiosity Driven Exploration
- HER (Hindsight Experience Replay)
- Recurrent networks in PPO and DDPG
Whilst I tried to code everything directly from the papers, it wasn't always easy to work out what I was doing wrong when an algorithm simply wouldn't train or threw runtime errors. As such, I used the following repositories as references.