
Commit

quantumiracle committed May 2, 2020
2 parents 5099665 + 3d31674 commit 25df8ab
Showing 6 changed files with 88 additions and 37 deletions.
43 changes: 25 additions & 18 deletions README.md
RLzoo is a collection of the most practical reinforcement learning algorithms, frameworks and applications. It is implemented with TensorFlow 2.0 and the neural network layer API of TensorLayer 2, to provide a hands-on, fast-developing approach for reinforcement learning practice and benchmarks. It supports basic toy tests like [OpenAI Gym](https://gym.openai.com/) and [DeepMind Control Suite](https://github.com/deepmind/dm_control) with very simple configurations. Moreover, RLzoo supports the robot learning benchmark environment [RLBench](https://github.com/stepjam/RLBench) based on the [Vrep](http://www.coppeliarobotics.com/)/[Pyrep](https://github.com/stepjam/PyRep) simulator. Other large-scale distributed training frameworks for more realistic scenarios with [Unity 3D](https://github.com/Unity-Technologies/ml-agents),
[Mujoco](http://www.mujoco.org/), [Bullet Physics](https://github.com/bulletphysics/bullet3), etc., will be supported in the future. A [Springer textbook](https://deepreinforcementlearningbook.org) is also provided; you can get the free PDF if your institute has a Springer license.

Different from RLzoo, which targets simple usage with **high-level APIs**, we also have an [RL tutorial](https://github.com/tensorlayer/tensorlayer/tree/master/examples/reinforcement_learning) that aims to make reinforcement learning simple, transparent and straightforward with **low-level APIs**. This not only benefits new learners of reinforcement learning, but also lets senior researchers test their new ideas quickly.

<!-- <em>Gym: Atari</em> <em>Gym: Box2D </em> <em>Gym: Classic Control </em> <em>Gym: MuJoCo </em>-->

<img src="https://github.com/tensorlayer/RLzoo/blob/master/gif/atari.gif" height=250 width=210 > <img src="https://github.com/tensorlayer/RLzoo/blob/master/gif/box2d.gif" height=250 width=210 > <img src="https://github.com/tensorlayer/RLzoo/blob/master/gif/classic.gif" height=250 width=210 > <img src="https://github.com/tensorlayer/RLzoo/blob/master/gif/mujoco.gif" height=250 width=210 >

We aim to make it easy to configure all components within RL, including replacing the networks, optimizers, etc. We also provide automatically adaptive policies and value functions in the common functions: for the observation space, vector states and raw-pixel (image) states are supported automatically according to the shape of the space; for the action space, discrete and continuous actions are likewise supported automatically according to the shape of the space. Whether the policy is deterministic or stochastic needs to be chosen according to each algorithm. Some environments with raw-pixel observations (e.g. Atari, RLBench) may be hard to train, so be patient and play around with the hyperparameters!
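As a rough illustration of the shape-based adaptation described above, the dispatch could be sketched as follows (the `DiscreteSpace`/`BoxSpace` classes and helper functions are simplified stand-ins for illustration, not RLzoo's actual implementation):

```python
# Simplified sketch: pick network heads from the shapes of the spaces,
# analogous to RLzoo's automatic adaptation. These classes are hypothetical
# stand-ins, not RLzoo's (or Gym's) real Space implementations.

class DiscreteSpace:
    def __init__(self, n):
        self.n = n          # number of discrete actions
        self.shape = ()     # discrete spaces carry no vector shape here

class BoxSpace:
    def __init__(self, shape):
        self.shape = shape  # e.g. (3,) for a 3-dim continuous action

def choose_policy_head(action_space):
    """Categorical head for discrete actions, Gaussian head for continuous."""
    if isinstance(action_space, DiscreteSpace):
        return ('categorical', action_space.n)
    return ('gaussian', action_space.shape[0])

def choose_encoder(observation_space):
    """MLP encoder for vector states, CNN encoder for (H, W, C) image states."""
    return 'cnn' if len(observation_space.shape) == 3 else 'mlp'

print(choose_policy_head(DiscreteSpace(4)))   # categorical head, 4 actions
print(choose_encoder(BoxSpace((84, 84, 3))))  # image observation selects a CNN
```

RLzoo performs this kind of dispatch internally; only the deterministic/stochastic property of the policy is left to each algorithm.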

**Table of contents:**
- [Credits](#credits)
- [Citing](#citing)

Please note that this repository uses RL algorithms with a **high-level API**. If you want to get familiar with each algorithm more quickly, please look at our **[RL tutorials](https://github.com/tensorlayer/tensorlayer/tree/master/examples/reinforcement_learning)**, where each algorithm is implemented individually in a more straightforward manner.

## Status: Release
We are currently open to any suggestions or pull requests from the community to make RLzoo a better repository. Given the scope of this project, we expect some issues over the coming months after the initial release. We will keep fixing potential problems and commit when significant changes are made. The current default hyperparameters for each algorithm and environment may not be optimal, so feel free to tune them to achieve the best performance. We will release a version with optimal hyperparameters and benchmark results for all algorithms in the future.
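Until tuned defaults are released, one generic way to play with the hyperparameters is a small grid search. The sketch below is illustrative only (`grid_search` and the toy objective are hypothetical helpers, not part of RLzoo):

```python
import itertools

def grid_search(train_and_eval, grid):
    """Evaluate every combination in `grid`; return (best_score, best_params).

    `train_and_eval` stands in for a full training run that returns a score,
    e.g. the mean episode reward over some evaluation episodes.
    """
    best = None
    keys = sorted(grid)
    for values in itertools.product(*(grid[k] for k in keys)):
        params = dict(zip(keys, values))
        score = train_and_eval(**params)
        if best is None or score > best[0]:
            best = (score, params)
    return best

# Toy objective standing in for a real training run; it peaks at
# lr=1e-3, gamma=0.99, so the search should recover those values.
best_score, best_params = grid_search(
    lambda lr, gamma: -abs(lr - 1e-3) - abs(gamma - 0.99),
    {'lr': [1e-4, 1e-3, 1e-2], 'gamma': [0.9, 0.99]},
)
print(best_score, best_params)
```

For expensive RL runs, random search or Bayesian optimization usually scales better than an exhaustive grid, but the loop structure is the same.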

## Contents
### Algorithms

| Algorithms | Papers |
| --------------- | -------|
|Twin Delayed DDPG (TD3)|[Addressing function approximation error in actor-critic methods. Fujimoto et al. 2018.](https://arxiv.org/pdf/1802.09477.pdf)|
|Soft Actor-Critic (SAC)|[Soft actor-critic algorithms and applications. Haarnoja et al. 2018.](https://arxiv.org/abs/1812.05905)|

### Environments

* [**OpenAI Gym**](https://gym.openai.com/):

The supported configurations for RL algorithms with corresponding environments in RLzoo are listed in the table below.
| TRPO | Discrete/Continuous | Stochastic | On-policy | All |


## Prerequisites

* python >=3.5 (python 3.6 is needed if using dm_control)
* tensorflow >= 2.0.0 or tensorflow-gpu >= 2.0.0a0
* [Mujoco 2.0](http://www.mujoco.org/), [dm_control](https://github.com/deepmind/dm_control), [dm2gym](https://github.com/zuoxingdong/dm2gym) (if using DeepMind Control Suite environments)
* Vrep, PyRep, RLBench (if using RLBench environments, follows [here](http://www.coppeliarobotics.com/downloads.html), [here](https://github.com/stepjam/PyRep) and [here](https://github.com/stepjam/RLBench))

## Installation

To install RLzoo package with key requirements:

```bash
pip install rlzoo
```

## Usage

For usage, please check our [online documentation](https://rlzoo.readthedocs.io).

### 0. Quick Start
Choose any environment and any RL algorithm supported in RLzoo, and enjoy the game by running the following example in the root folder of the installed package:
```python
alg.learn(env=env, mode='train', render=False, **learn_params)
alg.learn(env=env, mode='test', render=True, **learn_params)
```

#### To Run

```bash
# in the root folder of the rlzoo package
python run_rlzoo.py
```

RLzoo with **explicit configurations** means that the configurations for learning, including the parameter values for the algorithm and the learning process, the network structures used in the algorithms, the optimizers, etc., are displayed explicitly in the main running script. The demonstration scripts are under the folder of each algorithm; for example, `./rlzoo/algorithms/sac/run_sac.py` can be called with `python algorithms/sac/run_sac.py` from the `./rlzoo` folder to run the same learning process as with the implicit configurations above.

#### A Quick Example

```python
import gym
# render: if true, visualize the environment
model.learn(env, test_episodes=100, max_steps=200, mode='test', render=True)
```

#### To Run

In the package folder, we provide examples with explicit configurations for each algorithm.

```bash
python algorithms/<ALGORITHM_NAME>/run_<ALGORITHM_NAME>.py
# for example:
python algorithms/ac/run_ac.py
```

## Troubleshooting

* If you meet the error *'AttributeError: module 'tensorflow' has no attribute 'contrib''* when running the code after installing tensorflow-probability, try:
`pip install --upgrade tf-nightly-2.0-preview tfp-nightly`
* When trying to use RLBench environments, *'No module named rlbench'* can be caused by RLBench not being installed locally or by a mistake in the Python path. Add `export PYTHONPATH=/path/to/your/RLBench` (pointing at your local RLBench clone) each time you run a learning script with an RLBench environment, or add it to your `~/.bashrc` file once and for all.
* If you meet the error that the Qt platform is not loaded correctly when using DeepMind Control Suite environments, it's probably caused by your Ubuntu system not being version 14.04 or 16.04. Check [here](https://github.com/deepmind/dm_control).
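For the *'No module named rlbench'* case above, it can help to check what the interpreter actually sees before editing `PYTHONPATH`. A small diagnostic sketch (the `diagnose` helper is hypothetical, not part of RLzoo):

```python
# Check whether a module is importable and, if not, print the current
# PYTHONPATH as a hint. Works for any module name, e.g. 'rlbench'.
import importlib.util
import os

def diagnose(module_name):
    spec = importlib.util.find_spec(module_name)
    if spec is None:
        return 'missing; PYTHONPATH=%s' % os.environ.get('PYTHONPATH', '(unset)')
    return 'found at %s' % spec.origin

print(diagnose('json'))     # stdlib module, should always be found
print(diagnose('rlbench'))  # reports 'missing; ...' unless RLBench is on the path
```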

## Credits
Our core contributors include:

[Zihan Ding](https://github.com/quantumiracle?tab=repositories),
[Tianyang Yu](https://github.com/Tokarev-TT-33),
[Yanhua Huang](https://github.com/Officium),
[Hongming Zhang](https://github.com/initial-h),
[Hao Dong](https://github.com/zsdonghao)

## Citing:
## Citing

```
@misc{RLzoo,
}
```

## Other Resources
<br/>
<a href="https://deepreinforcementlearningbook.org" target="\_blank">
<div align="center">
<img src="http://deep-reinforcement-learning-book.github.io/assets/images/cover_v1.png" width="20%"/>
</div>
<!-- <div align="center"><caption>Slack Invitation Link</caption></div> -->
</a>
<br/>

1 change: 0 additions & 1 deletion docs/guide/quickstart.rst
Open ``./run_rlzoo.py``:
Run the example:

.. code-block:: bash

   python run_rlzoo.py
19 changes: 3 additions & 16 deletions docs/index.rst
Reinforcement Learning Zoo for Simple Usage
============================================

.. image:: img/logo.png
   :width: 50 %
   :align: center
common/common

.. toctree::
   :maxdepth: 1
   :caption: Other Resources

   other/drl_book
   other/drl_tutorial

Contributing
==================
* :ref:`search`



.. image:: img/logo.png
   :width: 70 %
   :align: center
42 changes: 42 additions & 0 deletions docs/other/drl_book.rst
DRL Book
==========

.. image:: http://deep-reinforcement-learning-book.github.io/assets/images/cover_v1.png
   :width: 30 %
   :align: center
   :target: https://deepreinforcementlearningbook.org

- You can get the `free PDF <https://deepreinforcementlearningbook.org>`__ if your institute has a Springer license.

Deep reinforcement learning (DRL) relies on the intersection of reinforcement learning (RL) and deep learning (DL). It has been able to solve a wide range of complex decision-making tasks that were previously out of reach for a machine, and famously contributed to the success of AlphaGo. Furthermore, it opens up numerous new applications in domains such as healthcare, robotics, smart grids, and finance.

Divided into three main parts, this book provides a comprehensive and self-contained introduction to DRL. The first part introduces the foundations of DL, RL and widely used DRL methods and discusses their implementation. The second part covers selected DRL research topics, which are useful for those wanting to specialize in DRL research. To help readers gain a deep understanding of DRL and quickly apply the techniques in practice, the third part presents a range of applications, such as the intelligent transportation system and learning to run, with detailed explanations.

The book is intended for computer science students, both undergraduate and postgraduate, who would like to learn DRL from scratch, practice its implementation, and explore the research topics. It also appeals to engineers and practitioners who do not have a strong machine learning background but want to quickly understand how DRL works and use the techniques in their applications.

Editors
--------
- Hao Dong - Peking University
- Zihan Ding - Princeton University
- Shanghang Zhang - University of California, Berkeley

Authors
--------
- Hao Dong - Peking University
- Zihan Ding - Princeton University
- Shanghang Zhang - University of California, Berkeley
- Hang Yuan - Oxford University
- Hongming Zhang - Peking University
- Jingqing Zhang - Imperial College London
- Yanhua Huang - Xiaohongshu Technology Co.
- Tianyang Yu - Nanchang University
- Huaqing Zhang - Google
- Ruitong Huang - Borealis AI


.. image:: https://deep-generative-models.github.io/files/web/water-bottom-min.png
   :width: 100 %
   :align: center
   :target: https://github.com/tensorlayer/tensorlayer/edit/master/examples/reinforcement_learning


18 changes: 18 additions & 0 deletions docs/other/drl_tutorial.rst
DRL Tutorial
=================================


.. image:: https://tensorlayer.readthedocs.io/en/latest/_images/tl_transparent_logo.png
   :width: 30 %
   :align: center
   :target: https://github.com/tensorlayer/tensorlayer/edit/master/examples/reinforcement_learning


Different from RLzoo, which targets simple usage with **high-level APIs**, the `RL tutorial <https://github.com/tensorlayer/tensorlayer/edit/master/examples/reinforcement_learning>`__ aims to make reinforcement learning simple, transparent and straightforward with **low-level APIs**. This not only benefits new learners of reinforcement learning, but also lets senior researchers test their new ideas quickly.

.. image:: https://deep-generative-models.github.io/files/web/water-bottom-min.png
   :width: 100 %
   :align: center
   :target: https://github.com/tensorlayer/tensorlayer/edit/master/examples/reinforcement_learning


2 changes: 0 additions & 2 deletions docs/other/drlbook.rst

This file was deleted.
