AI Flappy Bird Game Solved using Deep Q-Learning and Double Deep Q-Learning
- The Flappy Bird game is used as a reference to create the environment.
- Unnecessary graphics such as wing movements are removed to make rendering and training faster.
- The background is replaced with solid black, simplifying the input frames and helping the model converge faster.
A core difference between Deep Q-Learning and vanilla Q-Learning is how the Q-function is represented. Deep Q-Learning replaces the Q-table with a neural network: rather than looking up a Q-value for each state-action pair, the network takes a state as input and outputs a Q-value for every possible action.
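The mapping can be sketched as a tiny feed-forward network. This is an illustrative example, not the project's actual model: the state features, layer sizes, and random (untrained) weights are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

STATE_DIM = 4   # assumed features: bird height, velocity, pipe distance, gap height
N_ACTIONS = 2   # flap / do nothing

# Tiny two-layer Q-network; weights are random for illustration, not trained.
W1 = rng.standard_normal((STATE_DIM, 16)) * 0.1
b1 = np.zeros(16)
W2 = rng.standard_normal((16, N_ACTIONS)) * 0.1
b2 = np.zeros(N_ACTIONS)

def q_values(state):
    """Map one input state to a Q-value per action (the DQN idea)."""
    h = np.maximum(0.0, state @ W1 + b1)  # ReLU hidden layer
    return h @ W2 + b2                    # shape: (N_ACTIONS,)

state = np.array([0.5, -0.1, 0.8, 0.3])
q = q_values(state)
action = int(np.argmax(q))  # greedy action: pick the highest Q-value
```

The key point is the output shape: one forward pass yields Q-values for all actions at once, so action selection is a single `argmax` instead of a table lookup per state-action pair.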
A Double Q-Learning implementation with deep neural networks is called a Double Deep Q-Network (Double DQN). Inspired by Double Q-Learning, Double DQN uses two separate networks: the online Deep Q-Network (DQN), which selects the next action, and the Target Network, which evaluates that action when computing the learning target. Decoupling selection from evaluation reduces the overestimation of Q-values that plain DQN suffers from.
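The Double DQN target can be sketched in a few lines. This is a minimal illustration of the update rule only; the `q_online`/`q_target` callables, the discount factor, and the dummy values below are assumptions, not the project's code.

```python
import numpy as np

GAMMA = 0.99  # assumed discount factor

def double_dqn_target(reward, next_state, done, q_online, q_target):
    """Double DQN learning target:
    the online network SELECTS the best next action,
    the target network EVALUATES it."""
    if done:
        return reward
    a_star = int(np.argmax(q_online(next_state)))          # selection: online net
    return reward + GAMMA * q_target(next_state)[a_star]   # evaluation: target net

# Dummy networks standing in for trained models (illustration only).
q_online = lambda s: np.array([1.0, 2.0])    # online net prefers action 1
q_target = lambda s: np.array([0.5, 0.25])   # target net's value estimates

y = double_dqn_target(1.0, np.zeros(4), False, q_online, q_target)
# y = 1.0 + 0.99 * 0.25 = 1.2475
```

In contrast, plain DQN would take `max(q_target(next_state))` directly, letting the same network both choose and score the action, which is what causes the overestimation bias.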