Deep Q-Networks

Implementation of Deep Q-Networks (DQN) in PyTorch. In addition to the base implementation with a target network and experience replay, Double DQN and Dueling DQN variants are also implemented (a brief sketch of the variants follows the feature list below).

Features

  • Base DQN (target network, frame skipping, experience replay)
  • Double DQN (DDQN)
  • Dueling DQN
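
The Double DQN and Dueling DQN variants follow the standard formulations. Below is a minimal sketch (not taken from this repository; the layer sizes, names, and the assumption of 4 stacked 84x84 Atari frames are illustrative) of the dueling value/advantage split and the Double DQN target computation:

    import torch
    import torch.nn as nn

    class DuelingDQN(nn.Module):
        """Dueling architecture: shared conv features, separate value and advantage streams."""
        def __init__(self, num_actions):
            super().__init__()
            # Standard Atari DQN conv trunk (assumes 4 stacked 84x84 grayscale frames)
            self.features = nn.Sequential(
                nn.Conv2d(4, 32, kernel_size=8, stride=4), nn.ReLU(),
                nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
                nn.Conv2d(64, 64, kernel_size=3, stride=1), nn.ReLU(),
                nn.Flatten(),
            )
            self.value = nn.Sequential(nn.Linear(64 * 7 * 7, 512), nn.ReLU(), nn.Linear(512, 1))
            self.advantage = nn.Sequential(nn.Linear(64 * 7 * 7, 512), nn.ReLU(), nn.Linear(512, num_actions))

        def forward(self, x):
            h = self.features(x)
            v, a = self.value(h), self.advantage(h)
            # Combine streams; subtracting the mean advantage keeps Q identifiable
            return v + a - a.mean(dim=1, keepdim=True)

    def double_dqn_target(online_net, target_net, rewards, next_states, dones, gamma=0.99):
        """Double DQN target: the online net selects the next action, the target net evaluates it."""
        with torch.no_grad():
            next_actions = online_net(next_states).argmax(dim=1, keepdim=True)
            next_q = target_net(next_states).gather(1, next_actions).squeeze(1)
            return rewards + gamma * (1.0 - dones) * next_q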

Getting Started

Install the following prerequisites on your system:

  • pytorch
  • torchvision
  • opencv
  • gym
  • gym[atari]
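
If you use pip, one reasonable way to install them (the package names below are the usual PyPI names; this repository does not pin versions) is:

    pip install torch torchvision opencv-python gym "gym[atari]"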

To train a DQN, run main.py:

python main.py

All of the DQN training and optimizer parameters are defined at the top of main.py, so feel free to modify them to suit your configuration.
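
As a purely illustrative example (the names and values below are hypothetical, not copied from main.py), these are the kinds of parameters you would expect to find and tune:

    # Hypothetical DQN hyperparameters -- illustrative only, not this repository's actual values
    BATCH_SIZE = 32               # transitions sampled per optimizer step
    GAMMA = 0.99                  # discount factor
    REPLAY_BUFFER_SIZE = 100_000  # number of stored transitions
    TARGET_UPDATE_FREQ = 10_000   # environment steps between target-network syncs
    LEARNING_RATE = 2.5e-4        # optimizer step size
    EPSILON_START, EPSILON_END = 1.0, 0.1  # epsilon-greedy exploration schedule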

Some parameters can also be configured from the command line; more will be added.

Todos

  • Implement Prioritized Experience Replay
  • More command line configurations, e.g., enable/disable dueling DQN/DDQN, set number of timesteps, etc.
  • Train for a few days and post results

License

This project is licensed under the MIT License - see the LICENSE file for details.

Acknowledgments