
DeeR
====

DeeR is a Python library for Deep Reinforcement Learning. It is built with modularity in mind so that it can easily be adapted to any need. It provides many techniques out of the box (prioritized experience replay, double Q-learning, DDPG, etc.). Many example environments are also provided (some of them using OpenAI gym).
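As a quick refresher on one of the techniques listed above, the sketch below shows the idea behind double Q-learning: the online network selects the greedy next action, while a separate target network evaluates it, which reduces the overestimation bias of standard Q-learning. This is a generic tabular illustration, not DeeR's internal code; the function and variable names are ours.

```python
import numpy as np

def double_q_target(q_online, q_target, reward, next_state, gamma=0.99):
    """Double Q-learning target for a transition ending in next_state.

    q_online, q_target: arrays of shape (n_states, n_actions).
    The online table picks the greedy action; the target table scores it.
    """
    best_action = int(np.argmax(q_online[next_state]))
    return reward + gamma * q_target[next_state, best_action]

# Tiny example: 2 states, 2 actions.
q_online = np.array([[1.0, 2.0],
                     [0.5, 3.0]])
q_target = np.array([[1.1, 1.9],
                     [0.4, 2.5]])

# Online table picks action 1 in state 1 (3.0 > 0.5);
# target table evaluates it: 1.0 + 0.99 * 2.5 = 3.475
target = double_q_target(q_online, q_target, reward=1.0, next_state=1)
```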

Dependencies
------------

This framework is tested to work under Python 2.7 and Python 3.5. It should also work with Python 3.3 and 3.4.

The required dependencies are NumPy >= 1.10 and joblib >= 0.9. You also need Theano >= 0.8 or TensorFlow >= 0.9, along with the Keras library.

For running the examples, Matplotlib >= 1.1.1 is required. For running the Atari games environment, you need to install ALE >= 0.4.
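A minimal install sketch for the dependencies above, assuming the library is published on PyPI under the name ``deer`` (as the PyPI badge suggests) and that you choose the TensorFlow backend rather than Theano:

```shell
# Core library (assumes the PyPI package name is "deer")
pip install deer

# One of the two supported backends, plus Keras
pip install "tensorflow>=0.9" keras

# Optional: plotting support for the examples
pip install "matplotlib>=1.1.1"
```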

Full Documentation
------------------

The documentation is available at http://deer.readthedocs.io/

Here are a few examples:

.. image:: http://vincent.francois-l.be/img_GeneralDeepQRL/seaquest.gif

.. image:: http://vincent.francois-l.be/img_GeneralDeepQRL/output7.gif