Added Slime Volleyball Gym Environment to Projects (#887)
* Added Slime Volleyball Gym Environment to Projects

* Update changelog.rst

* Update changelog.rst

Co-authored-by: Antonin RAFFIN <antonin.raffin@ensta.org>
hardmaru and araffin committed Jun 11, 2020
1 parent 9f17306 commit 0c956eb
Showing 2 changed files with 12 additions and 1 deletion.
3 changes: 2 additions & 1 deletion docs/misc/changelog.rst
@@ -54,6 +54,7 @@ Documentation:
- Fixed ``train_mountaincar`` description
- Added imitation baselines project
- Updated install instructions
- Added Slime Volleyball project (@hardmaru)


Release 2.10.0 (2020-03-11)
@@ -710,4 +711,4 @@ Thanks to @bjmuld @iambenzo @iandanforth @r7vme @brendenpetersen @huvar @abhiskk
@Miffyli @dwiel @miguelrass @qxcv @jaberkow @eavelardev @ruifeng96150 @pedrohbtp @srivatsankrishnan @evilsocket
@MarvineGothic @jdossgollin @SyllogismRXS @rusu24edward @jbulow @Antymon @seheevic @justinkterry @edbeeching
@flodorner @KuKuXia @NeoExtended @solliet @mmcenta @richardwu @tirafesi @caburu @johannes-dornheim @kvenkman @aakash94
@enderdead
@enderdead @hardmaru
10 changes: 10 additions & 0 deletions docs/misc/projects.rst
@@ -7,6 +7,16 @@ This is a list of projects using stable-baselines.
Please tell us if you want your project to appear on this page ;)


Slime Volleyball Gym Environment
--------------------------------
A simple environment for benchmarking single- and multi-agent reinforcement learning algorithms on a clone of the Slime Volleyball game. The only dependencies are gym and numpy. Both state-observation and pixel-observation versions of the environment are available. The motivation for this environment is to make it easy for trained agents to play against each other, and to facilitate training agents directly in a multi-agent setting, adding an extra dimension for evaluating an agent's performance.

The repository uses stable-baselines to train RL agents on both the state- and pixel-observation versions of the task. A tutorial on modifying stable-baselines for self-play with PPO is also provided.
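For illustration, a minimal training sketch (not taken from the repository; the env ID ``SlimeVolley-v0`` and the use of ``PPO2`` with an ``MlpPolicy`` are assumptions, so check the repo's README for the exact names and recommended hyperparameters):

.. code-block:: python

   # Hypothetical sketch: train a PPO agent on the state-observation
   # SlimeVolley environment using stable-baselines (v2.x, TensorFlow).
   # The env ID "SlimeVolley-v0" is an assumption; see the repo for exact IDs.
   import gym
   import slimevolleygym  # noqa: F401 -- importing registers the envs with gym

   from stable_baselines import PPO2

   env = gym.make("SlimeVolley-v0")           # state observations, built-in opponent
   model = PPO2("MlpPolicy", env, verbose=1)  # default PPO hyperparameters
   model.learn(total_timesteps=1000000)       # training budget is illustrative
   model.save("ppo_slimevolley")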

| Author: David Ha (@hardmaru)
| Github repo: https://github.com/hardmaru/slimevolleygym

Learning to drive in a day
--------------------------
Implementation of a reinforcement learning approach to make a Donkey Car learn to drive.
