Reinforcement learning algorithms and experiments in Python
This repository contains Python code for experimenting with reinforcement learning. The code is organized into several components that can be mixed and matched: different RL algorithms (Q-Learning, advantage-based methods, etc.) can be tested on a specific world or problem, and an algorithm can be configured to use one of several models (for instance, Q-Learning can store its Q-values in a simple dictionary, or use one of several function approximation methods). The main building blocks are:
- AbstractWorld: Environment and behavior of an agent. The world defines the number of possible actions, and produces observations and rewards when actions are carried out.
- AbstractLearning: Observes states and rewards, and chooses actions to perform.
- AbstractModel: Stores and retrieves values. For instance, a model is used to associate Q-values with (state, action) pairs. A model can be discrete or based on function approximation.
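The mix-and-match idea above can be sketched with a toy example: a small world, a dictionary-backed model, and a tabular Q-Learning agent. The class and method names below (`num_actions`, `step`, `get`, `set`, ...) are illustrative assumptions for this sketch, not the repository's actual API.

```python
import random

class SimpleChainWorld:
    """Toy AbstractWorld-like environment: a chain of states, with a
    reward when the agent reaches the rightmost state."""
    def __init__(self, length=5):
        self.length = length
        self.state = 0

    def num_actions(self):
        return 2  # 0 = move left, 1 = move right

    def reset(self):
        self.state = 0
        return self.state

    def step(self, action):
        delta = 1 if action == 1 else -1
        self.state = max(0, min(self.length - 1, self.state + delta))
        done = self.state == self.length - 1
        reward = 1.0 if done else 0.0
        return self.state, reward, done

class DictModel:
    """Discrete AbstractModel-like store: Q-values in a plain dictionary."""
    def __init__(self):
        self.q = {}

    def get(self, state, action):
        return self.q.get((state, action), 0.0)

    def set(self, state, action, value):
        self.q[(state, action)] = value

class QLearning:
    """AbstractLearning-like agent: epsilon-greedy tabular Q-Learning
    that delegates all value storage to a model."""
    def __init__(self, model, num_actions, alpha=0.5, gamma=0.9, epsilon=0.1):
        self.model = model
        self.num_actions = num_actions
        self.alpha = alpha
        self.gamma = gamma
        self.epsilon = epsilon

    def choose(self, state):
        if random.random() < self.epsilon:
            return random.randrange(self.num_actions)
        values = [self.model.get(state, a) for a in range(self.num_actions)]
        best = max(values)
        # Break ties randomly so the untrained agent explores both directions.
        return random.choice([a for a, v in enumerate(values) if v == best])

    def learn(self, s, a, r, s2):
        best_next = max(self.model.get(s2, a2)
                        for a2 in range(self.num_actions))
        q = self.model.get(s, a)
        self.model.set(s, a, q + self.alpha * (r + self.gamma * best_next - q))

# Mix and match: run the Q-Learning agent with a dict model on the toy world.
random.seed(0)
world = SimpleChainWorld()
learner = QLearning(DictModel(), world.num_actions())
for _ in range(300):
    s = world.reset()
    for _ in range(50):
        a = learner.choose(s)
        s2, r, done = world.step(a)
        learner.learn(s, a, r, s2)
        s = s2
        if done:
            break
```

Because the agent only talks to the model through `get`/`set`, `DictModel` could be swapped for a function-approximation model without touching the learning code.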
This project uses several machine-learning Python libraries. Most of them are optional: the program can run (with limited functionality) without them. The dependencies are listed below, along with installation instructions.
- NumPy : Available on PyPI (`pip install numpy`)
- Matplotlib : Available on PyPI (`pip install matplotlib`)
- Theano (optional) : Available on PyPI (`pip install theano`)
- Keras (optional) : Available on PyPI (`pip install keras`)
- FANN2 (optional) : Available on PyPI
- rlglue-py3 (optional) : https://github.com/steckdenis/rlglue-py3 . Python bindings for RL-Glue have existed for some time but were never ported to Python 3.
- rlglue-py (optional) : Python 2 version of RL-Glue; can be used if you run this code with Python 2.
- rospy (optional) : Allows ROSWorld to be used. If your rospy installation is based on Python 2, this project will also have to be run with Python 2.
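For convenience, the PyPI dependencies above could be collected in a hypothetical `requirements.txt` (the package names are assumed to match their PyPI listings; the `fann2` name in particular is an assumption):

```
numpy
matplotlib
theano   # optional
keras    # optional
fann2    # optional, PyPI name assumed
```

Optional lines can be removed or commented out before running `pip install -r requirements.txt`.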