A Tensorflow based implementation of "Asynchronous Methods for Deep Reinforcement Learning": https://arxiv.org/abs/1602.01783

This Repo is moving!

Henceforth, new updates will be made at https://github.com/steveKapturowski/tensorflow-rl. The reason for the new repo and new name is that the scope of the code and the variety of algorithms implemented have grown significantly over time, such that 'async-deep-rl' no longer seems an accurate description of the project.

Tensorflow-RL

Tensorflow-based implementations of A3C, PGQ, TRPO, and CEM, originally based on https://github.com/traai/async-deep-rl. I did some heavy refactoring and added several additional options, including the a3c-lstm model, a fully-connected architecture to allow training on non-image-based gym environments, and support for the AdaMax optimizer.
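
As a point of reference, here's a minimal numpy sketch of the AdaMax update rule (the infinity-norm variant of Adam from Kingma & Ba); the function and variable names below are illustrative and are not taken from this repo's optimizer code:

import numpy as np

def adamax_step(theta, grad, m, u, t, alpha=0.002, beta1=0.9, beta2=0.999):
    # m: biased first-moment estimate; u: exponentially weighted infinity norm
    m = beta1 * m + (1.0 - beta1) * grad
    u = np.maximum(beta2 * u, np.abs(grad))
    # bias-correct the first moment and take a step scaled by the infinity norm
    theta = theta - (alpha / (1.0 - beta1 ** t)) * m / (u + 1e-8)
    return theta, m, u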

There's also an implementation of the A3C+ model from "Unifying Count-Based Exploration and Intrinsic Motivation", but I'm still in the process of verifying that it can at least roughly reproduce the paper's results on Montezuma's Revenge.
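
For context, the exploration bonus in that paper comes from a pseudo-count derived from the density model's probability of a frame before and after it is observed. A rough sketch of that calculation, with illustrative names rather than the repo's actual CTS code:

def pseudo_count_bonus(rho, rho_prime, beta=0.05):
    # rho:       density model probability of the frame before updating on it
    # rho_prime: probability of the same frame after the update ("recoding probability")
    # Pseudo-count N_hat = rho * (1 - rho_prime) / (rho_prime - rho), as in the paper.
    n_hat = rho * (1.0 - rho_prime) / max(rho_prime - rho, 1e-10)
    # Intrinsic reward added to the environment reward; beta scales the bonus.
    return beta / (n_hat + 0.01) ** 0.5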

The code also includes some experimental ideas I'm toying with, and I'm planning to add implementations of several more algorithms in the near future.

I've tested the implementations based on the A3C paper pretty extensively, and some of my agent evaluations can be found at https://gym.openai.com/users/steveKapturowski. They should work, but I can't guarantee I won't accidentally break something, as I'm planning to do a lot more refactoring.

I tried to match my PGQ implementation as closely as possible to what the authors describe in the paper, but I've noticed the average episode reward can exhibit pathological oscillatory behavior or suddenly collapse during training. If you spot a flaw in my implementation, I'd be extremely grateful for your feedback. I've also applied PGQ to the A3C-LSTM architecture, and experiments on simple environments indicate that this helps improve stability.
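
For readers unfamiliar with PGQ, the key identity is that an entropy-regularized actor-critic's Q-values can be recovered from the policy and the value estimate, and a Q-learning (Bellman residual) term on those recovered Q-values is then mixed into the usual actor-critic loss with a small weight. A rough, illustrative sketch of that auxiliary term follows; it is not the exact loss used in this repo:

import numpy as np

def pgq_q_estimate(log_pi, v, alpha=0.1):
    # Q(s, .) recovered from the policy and value: alpha * (log pi(.|s) + H(pi)) + V(s)
    entropy = -np.sum(np.exp(log_pi) * log_pi)
    return alpha * (log_pi + entropy) + v

def pgq_bellman_residual(log_pi, v, action, reward, log_pi_next, v_next,
                         gamma=0.99, alpha=0.1, terminal=False):
    # One-step Q-learning error on the recovered Q; PGQ mixes this term into the
    # actor-critic update with a small weight (often called eta) rather than using it alone.
    q = pgq_q_estimate(log_pi, v, alpha)
    q_next = 0.0 if terminal else np.max(pgq_q_estimate(log_pi_next, v_next, alpha))
    target = reward + gamma * q_next
    return 0.5 * (target - q[action]) ** 2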

Running the code

First, you'll need to install the Cython extensions used for the hog updates and the CTS density model:

./setup.py install build_ext --inplace

To train an A3C agent on Pong, run:

python main.py Pong-v0 --alg_type a3c -n 8

To evaluate a trained agent, simply add the --test flag:

python main.py Pong-v0 --alg_type a3c -n 1 --test

Requirements

  • python 2.7
  • tensorflow 1.0
  • scikit-image
  • Cython
  • gym
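
For example, assuming pip is available in your Python 2.7 environment, the Python dependencies can be installed with something like:

pip install tensorflow==1.0.0 scikit-image Cython gym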
