
Commit b83a434
Fixed typo (#3)
* Fixed typo

* Update changelog.rst

Co-authored-by: Rouslan Placella <rouslan@placella.com>
araffin and roccivic committed Mar 19, 2021
1 parent 9056f6a commit b83a434
Showing 2 changed files with 3 additions and 2 deletions.
3 changes: 2 additions & 1 deletion docs/misc/changelog.rst
@@ -95,6 +95,7 @@ Documentation:
- Added Slime Volleyball project (@hardmaru)
- Added a table of the variables accessible from the ``on_step`` function of the callbacks for each algorithm (@PartiallyTyped)
- Fix typo in README.md (@ColinLeongUDRI)
- Fix typo in gail.rst (@roccivic)

Release 2.10.0 (2020-03-11)
---------------------------
@@ -751,4 +752,4 @@ Thanks to @bjmuld @iambenzo @iandanforth @r7vme @brendenpetersen @huvar @abhiskk
@MarvineGothic @jdossgollin @SyllogismRXS @rusu24edward @jbulow @Antymon @seheevic @justinkterry @edbeeching
@flodorner @KuKuXia @NeoExtended @PartiallyTyped @mmcenta @richardwu @tirafesi @caburu @johannes-dornheim @kvenkman @aakash94
@enderdead @hardmaru @jbarsce @ColinLeongUDRI @shwang @YangRui2015 @sophiagu @OGordon100 @SVJayanthi @sunshineclt
@anj1
@roccivic @anj1
2 changes: 1 addition & 1 deletion docs/modules/gail.rst
@@ -11,7 +11,7 @@ to recover a cost function and then learn a policy.

Learning a cost function from expert demonstrations is called Inverse Reinforcement Learning (IRL).
The connection between GAIL and Generative Adversarial Networks (GANs) is that it uses a discriminator that tries
to seperate expert trajectory from trajectories of the learned policy, which has the role of the generator here.
to separate expert trajectory from trajectories of the learned policy, which has the role of the generator here.

.. note::

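The corrected sentence in gail.rst describes GAIL's connection to GANs: a discriminator is trained to tell expert trajectories apart from trajectories produced by the learned policy, which plays the role of the generator. Below is a minimal illustrative sketch of such a discriminator update, assuming PyTorch and placeholder dimensions; it is not the stable-baselines implementation.

.. code-block:: python

    import torch
    import torch.nn as nn

    obs_dim, act_dim = 4, 2  # assumed dimensions, for illustration only

    # Discriminator: scores a concatenated (state, action) pair as expert vs. policy.
    discriminator = nn.Sequential(
        nn.Linear(obs_dim + act_dim, 64),
        nn.Tanh(),
        nn.Linear(64, 1),  # raw logit
    )
    optimizer = torch.optim.Adam(discriminator.parameters(), lr=3e-4)
    bce = nn.BCEWithLogitsLoss()

    def discriminator_step(expert_batch, policy_batch):
        """One update: push expert samples toward label 1, policy samples toward 0."""
        loss = bce(discriminator(expert_batch), torch.ones(len(expert_batch), 1)) + \
               bce(discriminator(policy_batch), torch.zeros(len(policy_batch), 1))
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()

    # Random placeholder batches standing in for real expert/policy trajectories.
    expert = torch.randn(32, obs_dim + act_dim)
    policy = torch.randn(32, obs_dim + act_dim)
    discriminator_step(expert, policy)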
