I designed this Challenge for you and me: Learn Deep Reinforcement Learning in Depth in 60 days!!

You have heard about the amazing results achieved by DeepMind with AlphaGo Zero and by OpenAI in Dota 2! Don't you want to know how they work? This is the right opportunity for you and me to finally learn Deep RL and use it on exciting new projects.

The ultimate aim is to use these general-purpose technologies and apply them to all sorts of important real world problems. - Demis Hassabis

This repository aims to guide you through Deep Reinforcement Learning algorithms, from the most basic ones up to the highly advanced AlphaGo Zero. You will find the main topics organized by week, together with the resources suggested to learn them. Every week I will also provide practical examples implemented in Python to help you better digest the theory. You are highly encouraged to modify and play with them!


During the whole challenge, I will continuously update this repository..

.. so stay tuned!

#60DaysRLChallenge

We now also have a Slack channel. To get an invitation, email me at andrea.lonza@gmail.com

This is my first project of this kind, so please, if you have any ideas, suggestions or improvements, contact me at andrea.lonza@gmail.com.


Prerequisites

  • Basic level of Python and PyTorch
  • Machine Learning
  • Basic knowledge of Deep Learning (MLP, CNN and RNN)

Projects (yet to be decided)

  • Q-learning
  • DQN
  • A2C
  • ES
  • AlphaGo Zero

Week 1 - Introduction


Suggested


Week 2 - RL Basics: MDP, Dynamic Programming and Model-Free Control

Those who cannot remember the past are condemned to repeat it - George Santayana

This week, we will learn about the basic blocks of reinforcement learning, starting from the definition of the problem all the way through the estimation and optimization of the functions that are used to express the quality of a policy or state.


Theoretical material

  • Model-Free Control - RL by David Silver
    • ε-greedy policy iteration
    • GLIE Monte Carlo Search
    • SARSA
    • Importance Sampling

Project of the Week

Q-learning applied to FrozenLake. As an exercise, you can solve the game using SARSA or implement Q-learning by yourself. In the former case, only a few changes are needed.
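If you want a feel for the overall structure before opening the notebook, here is a minimal tabular Q-learning sketch. It is not the repository's code: it assumes the classic gym API, and the hyperparameters are only illustrative.

```python
import numpy as np
import gym

# Minimal tabular Q-learning on FrozenLake (illustrative hyperparameters).
env = gym.make("FrozenLake-v1")
Q = np.zeros((env.observation_space.n, env.action_space.n))
alpha, gamma, eps = 0.1, 0.99, 0.1  # learning rate, discount factor, exploration rate

for episode in range(5000):
    state, done = env.reset(), False
    while not done:
        # epsilon-greedy action selection
        if np.random.rand() < eps:
            action = env.action_space.sample()
        else:
            action = int(np.argmax(Q[state]))
        next_state, reward, done, _ = env.step(action)
        # Q-learning update: bootstrap on the greedy action in the next state
        Q[state, action] += alpha * (reward + gamma * np.max(Q[next_state]) - Q[state, action])
        state = next_state
```

Switching the bootstrap term from the max over Q(s', ·) to Q(s', a'), with a' chosen by the same epsilon-greedy policy, is essentially the only change needed to turn this into SARSA.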


To know more


Week 3 - Value Function Approximation and DQN

This week we'll learn more advanced concepts and apply deep neural networks to Q-learning algorithms.


Theoretical material

Lectures

Papers

Must Read
Extensions of DQN

Project of the Week

DQN and some variants applied to Pong

This week the goal is to develop a DQN algorithm to play an Atari game. To make it more interesting, I developed several extensions of DQN: Double Q-learning, Multi-step learning, Dueling networks and Noisy Nets. Play with them, and if you feel confident, you can implement Prioritized replay or Distributional RL. To learn more about these improvements, read the papers!
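Before diving into the full agent, it may help to see the core update in isolation. Below is a hedged PyTorch sketch of the DQN loss with the Double Q-learning extension; the function and argument names are hypothetical and do not come from the Week3 code.

```python
import torch
import torch.nn.functional as F

def dqn_loss(online_net, target_net, batch, gamma=0.99, double=True):
    """Illustrative DQN / Double-DQN loss on a batch of transitions.
    `batch` is assumed to contain tensors: states, actions, rewards, next_states, dones."""
    states, actions, rewards, next_states, dones = batch
    # Q(s, a) of the actions actually taken
    q_sa = online_net(states).gather(1, actions.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        if double:
            # Double Q-learning: pick the argmax with the online net,
            # but evaluate it with the target net to reduce overestimation
            next_actions = online_net(next_states).argmax(1, keepdim=True)
            next_q = target_net(next_states).gather(1, next_actions).squeeze(1)
        else:
            # Vanilla DQN: max over the target net's Q-values
            next_q = target_net(next_states).max(1)[0]
        target = rewards + gamma * next_q * (1 - dones)
    return F.smooth_l1_loss(q_sa, target)
```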


Suggested


Week 4 - Policy gradient methods and A2C

Week 4 introduces Policy Gradient methods, a class of algorithms that optimize the policy directly. You'll also learn about Actor-Critic algorithms, which combine a policy gradient (the actor) with a value function (the critic).


Theoretical material

Lectures

  • Policy gradient Methods - RL by David Silver
    • Finite Difference Policy Gradient
    • Monte-Carlo Policy Gradient
    • Actor-Critic Policy Gradient
  • Policy gradient intro - CS294-112 by Sergey Levine (RECAP, optional)
    • Policy Gradient (REINFORCE and Vanilla PG)
    • Variance reduction
  • Actor-Critic - CS294-112 by Sergey Levine (More in depth)
    • Actor-Critic
    • Discount factor
    • Actor-Critic algorithm design (batch mode or online)
    • State-dependent baseline

Papers


Project of the Week

Vanilla PG and A2C. The exercise of this week is to implement a policy gradient method or a more sophisticated actor-critic. In the repository you can find implemented versions of PG and A2C. Note that my A2C implementation gives strange results. You can try to make it work, or implement an asynchronous version of A2C (A3C).
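As a reference for the actor-critic part, here is a hedged sketch of an A2C-style loss for discrete actions; names like `a2c_loss` are hypothetical and the actual Week4 code may be organized differently.

```python
import torch

def a2c_loss(logits, values, actions, returns, value_coef=0.5, entropy_coef=0.01):
    """Illustrative A2C loss for a batch of transitions.
    logits and values come from the actor-critic network; returns are n-step returns."""
    dist = torch.distributions.Categorical(logits=logits)
    log_probs = dist.log_prob(actions)
    advantages = returns - values.squeeze(-1)
    # Actor: policy gradient weighted by the advantage (the critic acts as a baseline)
    policy_loss = -(log_probs * advantages.detach()).mean()
    # Critic: regress the value function towards the returns
    value_loss = advantages.pow(2).mean()
    # Entropy bonus to encourage exploration
    entropy = dist.entropy().mean()
    return policy_loss + value_coef * value_loss - entropy_coef * entropy
```

Dropping the critic and using the full episode return in place of the advantage gives you vanilla REINFORCE.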


Suggested


Week 5 - Advanced Policy Gradients - TRPO & PPO

This week is about advanced policy gradient methods that improve the stability and the convergence of the "Vanilla" policy gradient methods. You'll learn and implement PPO, an RL algorithm developed by OpenAI and adopted in OpenAI Five.


Theoretical material

Lectures

  • Advanced policy gradients - CS294-112 by Sergey Levine
    • Problems with "Vanilla" Policy Gradient Methods
    • Policy Performance Bounds
    • Monotonic Improvement Theory
    • Algorithms: NPO, TRPO, PPO
  • Natural Policy Gradients, TRPO, PPO - John Schulman, Berkeley DRL Bootcamp (RECAP, optional)
    • Limitations of "Vanilla" Policy Gradient Methods
    • Natural Policy Gradient
    • Trust Region Policy Optimization, TRPO
    • Proximal Policy Optimization, PPO

Papers


Project of the Week

This week, you have to implement PPO or TRPO. I suggest PPO given its simplicity (compared to TRPO). In the project folder Week5 you can find an implementation of PPO that learns to play BipedalWalker. Furthermore, in the folder you can find other resources that will help you develop the project. Have fun!

To learn more about PPO, read the paper and take a look at the Arxiv Insights video.

NB: the hyperparameters of the PPO implementation I released can be tuned to improve convergence.
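The heart of PPO is the clipped surrogate objective. Here is a hedged, standalone sketch of it; the function and argument names are hypothetical, not taken from the Week5 code.

```python
import torch

def ppo_clip_loss(new_log_probs, old_log_probs, advantages, clip_eps=0.2):
    """Illustrative PPO clipped surrogate objective (returned as a loss to minimize)."""
    # Probability ratio between the updated policy and the policy that collected the data
    ratio = torch.exp(new_log_probs - old_log_probs)
    # Clipping removes the incentive to push the ratio outside [1 - eps, 1 + eps]
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * advantages
    return -torch.min(unclipped, clipped).mean()
```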


Suggested


Week 6 - Evolution Strategies and Genetic Algorithms

In the last year, Evolution Strategies (ES) and Genetic Algorithms (GA) have been shown to achieve results comparable to RL methods. They are derivative-free, black-box algorithms that require more data than RL to learn but are able to scale up across thousands of CPUs. This week we'll look at these black-box algorithms.


Material

Papers


Project of the Week

The project is to implement an ES or a GA. In the Week6 folder you can find a basic implementation of the paper Evolution Strategies as a Scalable Alternative to Reinforcement Learning applied to LunarLanderContinuous. You can modify it to play more difficult environments or add your own ideas.
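To see how little machinery an ES needs, here is a hedged sketch of one update step in the style of the paper above; `fitness_fn` is a hypothetical callback that runs an episode (e.g. on LunarLanderContinuous) with the given parameters and returns the total reward.

```python
import numpy as np

def evolution_strategy_step(theta, fitness_fn, pop_size=50, sigma=0.1, lr=0.02):
    """One OpenAI-style Evolution Strategy update on a flat parameter vector theta."""
    noise = np.random.randn(pop_size, theta.size)              # Gaussian perturbations
    returns = np.array([fitness_fn(theta + sigma * n) for n in noise])
    # Standardize the returns so the update is invariant to their scale
    advantages = (returns - returns.mean()) / (returns.std() + 1e-8)
    # Gradient estimate: noise directions weighted by how well they performed
    grad = noise.T @ advantages / (pop_size * sigma)
    return theta + lr * grad
```

Because each fitness evaluation is independent, this loop is trivially parallelizable across CPUs, which is exactly the property the paper exploits.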


Week 7 - Model-Based Reinforcement Learning - I2A

Week 8 - AlphaGo Zero + Bonus

Last 4 days - Review + sharing

Best RL papers

Best resources

📺 Deep Reinforcement Learning - UC Berkeley class by Levine; check out their site here.

📺 Reinforcement Learning course - by David Silver, DeepMind. Great introductory lectures by Silver, a lead researcher on AlphaGo. They follow the book Reinforcement Learning: An Introduction by Sutton & Barto.

📓 Reinforcement Learning: An Introduction - by Sutton & Barto. The "Bible" of reinforcement learning. Here you can find the PDF draft of the second edition.

Additional resources

📚 Awesome Reinforcement Learning. A curated list of resources dedicated to reinforcement learning.

📚 GroundAI on RL. Papers on reinforcement learning.
