
[REVIEW]: Generic reinforcement learning codebase in TensorFlow #1524

Closed
36 tasks done
whedon opened this issue Jun 24, 2019 · 112 comments
Labels: accepted, published, recommend-accept, review

Comments


@whedon whedon commented Jun 24, 2019

Submitting author: @alexanderimanicowenrivers (Alexander I. Cowen-Rivers)
Repository: https://github.com/for-ai/rl
Version: 1.0.0
Editor: @mbobra
Reviewers: @desilinguist, @paragkulkarni11
Archive: 10.5281/zenodo.3408453

Status


Status badge code:

HTML: <a href="http://joss.theoj.org/papers/a65da0f74f34be097b1c1189ae6abdc6"><img src="http://joss.theoj.org/papers/a65da0f74f34be097b1c1189ae6abdc6/status.svg"></a>
Markdown: [![status](http://joss.theoj.org/papers/a65da0f74f34be097b1c1189ae6abdc6/status.svg)](http://joss.theoj.org/papers/a65da0f74f34be097b1c1189ae6abdc6)

Reviewers and authors:

Please avoid lengthy details of difficulties in the review thread. Instead, please create a new issue in the target repository and link to those issues (especially acceptance-blockers) by leaving comments in the review thread below. (For completists: if the target issue tracker is also on GitHub, linking the review thread in the issue or vice versa will create corresponding breadcrumb trails in the link target.)

Reviewer instructions & questions

@desilinguist & @paragkulkarni11, please carry out your review in this issue by updating the checklist below. If you cannot edit the checklist please:

  1. Make sure you're logged in to your GitHub account
  2. Be sure to accept the invite at this URL: https://github.com/openjournals/joss-reviews/invitations

The reviewer guidelines are available here: https://joss.readthedocs.io/en/latest/reviewer_guidelines.html. Any questions/concerns please let @mbobra know.

Please try to complete your review in the next two weeks.

Review checklist for @desilinguist

Conflict of interest

Code of Conduct

General checks

  • Repository: Is the source code for this software available at the repository url?
  • License: Does the repository contain a plain-text LICENSE file with the contents of an OSI approved software license?
  • Version: 1.0.0
  • Authorship: Has the submitting author (@alexanderimanicowenrivers) made major contributions to the software? Does the full list of paper authors seem appropriate and complete?

Functionality

  • Installation: Does installation proceed as outlined in the documentation?
  • Functionality: Have the functional claims of the software been confirmed?
  • Performance: If there are any performance claims of the software, have they been confirmed? (If there are no claims, please check off this item.)

Documentation

  • A statement of need: Do the authors clearly state what problems the software is designed to solve and who the target audience is?
  • Installation instructions: Is there a clearly-stated list of dependencies? Ideally these should be handled with an automated package management solution.
  • Example usage: Do the authors include examples of how to use the software (ideally to solve real-world analysis problems).
  • Functionality documentation: Is the core functionality of the software documented to a satisfactory level (e.g., API method documentation)?
  • Automated tests: Are there automated tests or manual steps described so that the function of the software can be verified?
  • Community guidelines: Are there clear guidelines for third parties wishing to 1) Contribute to the software 2) Report issues or problems with the software 3) Seek support

Software paper

  • Authors: Does the paper.md file include a list of authors with their affiliations?
  • A statement of need: Do the authors clearly state what problems the software is designed to solve and who the target audience is?
  • References: Do all archival references that should have a DOI list one (e.g., papers, datasets, software)?

Review checklist for @paragkulkarni11

Conflict of interest

Code of Conduct

General checks

  • Repository: Is the source code for this software available at the repository url?
  • License: Does the repository contain a plain-text LICENSE file with the contents of an OSI approved software license?
  • Version: 1.0.0
  • Authorship: Has the submitting author (@alexanderimanicowenrivers) made major contributions to the software? Does the full list of paper authors seem appropriate and complete?

Functionality

  • Installation: Does installation proceed as outlined in the documentation?
  • Functionality: Have the functional claims of the software been confirmed?
  • Performance: If there are any performance claims of the software, have they been confirmed? (If there are no claims, please check off this item.)

Documentation

  • A statement of need: Do the authors clearly state what problems the software is designed to solve and who the target audience is?
  • Installation instructions: Is there a clearly-stated list of dependencies? Ideally these should be handled with an automated package management solution.
  • Example usage: Do the authors include examples of how to use the software (ideally to solve real-world analysis problems).
  • Functionality documentation: Is the core functionality of the software documented to a satisfactory level (e.g., API method documentation)?
  • Automated tests: Are there automated tests or manual steps described so that the function of the software can be verified?
  • Community guidelines: Are there clear guidelines for third parties wishing to 1) Contribute to the software 2) Report issues or problems with the software 3) Seek support

Software paper

  • Authors: Does the paper.md file include a list of authors with their affiliations?
  • A statement of need: Do the authors clearly state what problems the software is designed to solve and who the target audience is?
  • References: Do all archival references that should have a DOI list one (e.g., papers, datasets, software)?

@whedon whedon commented Jun 24, 2019

Hello human, I'm @whedon, a robot that can help you with some common editorial tasks. @desilinguist, @paragkulkarni11 it looks like you're currently assigned to review this paper 🎉.

Important

If you haven't already, you should seriously consider unsubscribing from GitHub notifications for this repository (https://github.com/openjournals/joss-reviews). As a reviewer, you're probably currently watching it, which means that, with GitHub's default behaviour, you will receive notifications (emails) for all reviews 😿

To fix this do the following two things:

  1. Set yourself as 'Not watching' https://github.com/openjournals/joss-reviews

  2. You may also like to change your default settings for watching repositories in your GitHub profile here: https://github.com/settings/notifications

For a list of things I can do to help you, just type:

@whedon commands


@whedon whedon commented Jun 24, 2019

Attempting PDF compilation. Reticulating splines etc...


@whedon whedon commented Jun 24, 2019


@mbobra mbobra commented Jun 24, 2019

@desilinguist @paragkulkarni11 Thank you for agreeing to review this submission. Whedon generated a checklist for you above. Please let me know if you have any questions or comments!


@mbobra mbobra commented Jun 24, 2019

@alexanderimanicowenrivers Thank you for helping me find reviewers. I really appreciate it. Do you mind addressing the comments by @cervere (in #1502) in the manuscript (doesn't have to be long)?


@alexanderimanicowenrivers alexanderimanicowenrivers commented Jun 24, 2019

Thanks for agreeing to review! I have already addressed these comments, @mbobra, in my reply here: #1502 (comment)


@mbobra mbobra commented Jul 2, 2019

👋 @desilinguist, @paragkulkarni11 How is it going? Would you like more time to review? Do you have any questions? Please let me know!

@alexanderimanicowenrivers

Let me know if there is anything you would like clarified! @desilinguist @paragkulkarni11


@desilinguist desilinguist commented Jul 2, 2019

Yes, I’d like until next Friday (7/12) please.


@mbobra mbobra commented Jul 2, 2019

@desilinguist Absolutely. Thanks for your time.

@alexanderimanicowenrivers

Thank you :)

@alexanderimanicowenrivers

Looking forward to receiving the reviews :) @desilinguist and @paragkulkarni11


@paragkulkarni11 paragkulkarni11 commented Jul 12, 2019

Hi,
I am sorry for the late review. In my opinion, the documentation could be more elaborate, adhering to the guidelines given above. In particular, I would expect you to be more specific when you say that past work was more general in nature; this would clearly specify and support your claim. Also, can we see a real-life example of this codebase in use? Kindly include one in your documentation. If possible, please also add the most frequently used words of your paper. I am interested to see the references of the papers used for this research. Please let me know if any point is not clear from the above discussion. Thanks.


@desilinguist desilinguist commented Jul 14, 2019

I have completed my review, and I think that while the goal of the project is admirable – providing a generic codebase for reinforcement learning in TensorFlow that also provides nice logging and visualization features – the setup process and the overall documentation both leave a lot to be desired. I have filed two issues with suggestions on how to address both of these.

The software paper does have a bit of motivation, but it isn't really clear what the target demographic is: is it RL researchers who are already experts and this is a way for them to run experiments, or are novice users also included? I got the impression from my interaction with the authors that it was the latter, but I was quite lost.

There also don't seem to be any real tests – the Travis CI script just runs a bunch of python train.py commands with a bunch of parameters to see whether anything fails? Am I misreading that?


@mbobra mbobra commented Jul 15, 2019

👋 @alexanderimanicowenrivers Can you please address the comments by @paragkulkarni11 and @desilinguist? Please let me know if you want any help with this.


@alexanderimanicowenrivers alexanderimanicowenrivers commented Jul 20, 2019

Thanks for the reviews @desilinguist and @paragkulkarni11

@desilinguist - We have resolved one issue -- an improved initialisation script. We are indeed targeting the RL researcher rather than the novice. We are finishing up the task of providing clearer documentation with working examples (as is also shown in our newer paper.md file). The idea is for this codebase to be used specifically for RL research rather than for industrial systems (although it could be used for that too)!

@paragkulkarni11 - Thanks for the feedback; as mentioned above, we have addressed the issues of working examples and clearer documentation. We have been using the codebase for research; however, we haven't published anything yet.

The reason our tests are just run scripts is the high stochasticity of the experiments. However, we are working on deterministic settings which we can rigorously test. We still need to add a few more tests on the non-stochastic components, as well as finish up the docs.
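(Editorial note: the deterministic-settings approach mentioned above usually comes down to pinning random seeds. The sketch below is a generic illustration, not the project's actual test code; `run_episode` is a hypothetical stand-in for a stochastic rollout.)

```python
import random

def run_episode(seed):
    """Toy stand-in for a stochastic RL rollout: returns a pseudo-random 'return'."""
    rng = random.Random(seed)  # private RNG, so global state cannot perturb the test
    return sum(rng.gauss(0.0, 1.0) for _ in range(10))

# With the seed pinned, two runs are bit-identical, so the rollout can be
# asserted against itself (or a stored reference value) in CI.
assert run_episode(42) == run_episode(42)
```

In a real suite, the recorded value for a fixed seed would be stored once and asserted against on every run, turning a stochastic experiment into a regression test.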

Thanks!

Alex and the For.ai team :)


@mbobra mbobra commented Jul 24, 2019

@desilinguist @paragkulkarni11 Do you recommend this paper for publication after @alexanderimanicowenrivers' improvements or are there still some outstanding issues? I see that the checklists are not complete and I would like some guidance on how you would like to proceed from here.


@desilinguist desilinguist commented Jul 24, 2019

Thanks @alexanderimanicowenrivers for the modifications! Unfortunately, I am traveling to a conference outside the country today and won't be back until August 5th. I will be happy to re-review once I am back.


@alexanderimanicowenrivers alexanderimanicowenrivers commented Aug 1, 2019

Dear reviewer,

We thank you for your comments and suggestions and believe we have sufficiently addressed all points.

Regarding your comment on the installation script, we have extended setup.sh so that it now supports Homebrew on macOS. In addition, we have added logging lines that let the user know exactly what is being installed and surface any errors if they occur. Furthermore, we have added to README.md information on how the user can manually install all dependencies, should they choose to do so.
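(Editorial note: a macOS-aware install step with logging, as described above, typically looks something like the following. This is an illustrative sketch only; the function names are hypothetical and the real `brew install` line is left commented out so the sketch has no side effects.)

```shell
#!/usr/bin/env sh
# Hypothetical sketch of a macOS-aware dependency step with logging.
# Function names and the package list are illustrative, not the actual setup.sh.

log() { printf '[setup] %s\n' "$1"; }

install_deps() {
  if command -v brew >/dev/null 2>&1; then
    log "Homebrew detected; installing qt open-mpi pkg-config ffmpeg"
    # brew install qt open-mpi pkg-config ffmpeg || { log "brew install failed"; return 1; }
  else
    log "Homebrew not found; see README.md for manual installation steps"
  fi
}

install_deps
```

Logging each step before running it, and falling back to a pointer at the manual instructions, is what lets users diagnose exactly where an install went wrong.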

Regarding your comment on additional documentation, we have expanded README.md and paper.md. Furthermore, we have created Read the Docs documentation, located at https://rl-codebase.readthedocs.io/en/latest/. There, we detail the different modules provided, as well as a tutorial and basic usage. In the tutorial, we have outlined the use of a Conda environment to run experiments, as you requested.

Regarding your comment on tests, we have provided testing for memory and models under the tests/ directory.

Thanks for the great feedback.
Sincerely,
the FOR.ai team

@alexanderimanicowenrivers

I hope you had a nice holiday, @desilinguist. I look forward to hearing back from you and @paragkulkarni11.


@desilinguist desilinguist commented Aug 5, 2019

@alexanderimanicowenrivers thanks so much for considering my suggestions! I think there has been great progress. However, I still see a couple of issues:

  1. I don't see the changes you described in the README in setup.sh. For example, I don't see the mac_package_manaager variable anywhere in setup.sh in the master branch?

  2. I don't see the external documentation linked anywhere from the repository README? It'd be nice to add a readthedocs badge from shields.io right at the top of the README next to the passing builds.

  3. Thanks for adding more explicit tests. However, I see lots of deprecation warnings in the test output. Perhaps you can address those in the codebase or if you don't think they apply, you can turn off the deprecation warnings?
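(Editorial note: point 3 above, silencing known deprecation warnings rather than all warnings, is commonly done with Python's warnings module. This is a generic illustration, not the project's code; `legacy_call` is a hypothetical stand-in for a library call that emits a DeprecationWarning.)

```python
import warnings

def legacy_call():
    # Stand-in for library code that emits a DeprecationWarning.
    warnings.warn("old API, use the new one instead", DeprecationWarning)
    return "ok"

# Silence only the known, reviewed category for this call, rather than
# turning off all warnings globally.
with warnings.catch_warnings():
    warnings.simplefilter("ignore", DeprecationWarning)
    result = legacy_call()

assert result == "ok"
```

Scoping the filter with catch_warnings keeps the suppression local to the test, so genuinely new warnings elsewhere in the suite still surface.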

@alexanderimanicowenrivers

Dear @desilinguist,

Apologies, below are the final amendments. We are just working out how to add the badge from shields.io as we speak.

  1. See for-ai/rl#18
  2. See for-ai/rl#17
  3. See for-ai/rl#19

Let me know what you think.

Bests,
Alex


@desilinguist desilinguist commented Aug 6, 2019

@alexanderimanicowenrivers I think it would be better to review the changes all together after they are merged into the master branch.

In any case, I looked at the first PR for setup.sh, and it looks like it hasn't really been tested with MacPorts. I get the following error when running the command port install qt open-mpi pkg-config ffmpeg:

Error: Port qt not found

@alexanderimanicowenrivers

The major changes have now been merged, and the MacPorts bug has now been fixed in this PR: for-ai/rl#27

Thanks @desilinguist

Alex


@desilinguist desilinguist commented Aug 12, 2019

Thanks @alexanderimanicowenrivers! It looks much, much better now. FYI, the echo command in the PR doesn't match the actual command that's run so you should probably fix that :)

@mbobra please consider this approved from my side!


@mbobra mbobra commented Sep 30, 2019

@openjournals/joss-eics This paper is ready for acceptance! Nice work @hsezhiyan @alexanderimanicowenrivers @bryanlimy 🎉


@kyleniemeyer kyleniemeyer commented Oct 1, 2019

@whedon generate pdf


@whedon whedon commented Oct 1, 2019

Attempting PDF compilation. Reticulating splines etc...


@whedon whedon commented Oct 1, 2019


@kyleniemeyer kyleniemeyer commented Oct 1, 2019

Hi @alexanderimanicowenrivers, some fixes needed in the paper before we publish:

  • in the abstract, you have some undefined acronyms
  • in related work, you need some spaces between words and in-text citations (actually, I see this in a few places throughout the paper)
  • The in-text citation associated with "stable baselines" is not showing up correctly
  • on page 3, the citation following "Proximal Policy Optimization" is not showing up
  • the code/docs snippet related to train.py on page 3 has text that is cut off, so you should probably move those to new lines
  • Similarly, the code snippet right above "Conclusions" has some text cut off
  • The Abadi 2016 reference has some mistakes in the title ({ } that are showing up in the text)
  • I don't think the author field is correct for the Keras-rl reference
  • TF-agents reference is missing a year, and needs some other information—I can't tell what kind of reference this is. If software, it needs at minimum a URL
  • The Pytorch reference is also missing some information (e.g., URL) and doesn't match the other software citations
  • Silver 2014 reference: I assume this is a conference ref—if so, it is missing information.
  • The Sutton 2000 reference should be:
@incollection{Sutton2000,
  title     = {Policy Gradient Methods for Reinforcement Learning with Function Approximation},
  author    = {Sutton, Richard S. and McAllester, David A. and Singh, Satinder P. and Mansour, Yishay},
  booktitle = {{Advances in Neural Information Processing Systems 12}},
  editor    = {S. A. Solla and T. K. Leen and K. M\"{u}ller},
  pages     = {1057--1063},
  year      = {2000},
  publisher = {MIT Press},
}


@hsezhiyan hsezhiyan commented Oct 4, 2019

Hi Kyle,

Thanks for your update! I apologize for the delayed response.

I've addressed your comments in PR#32

I've fixed all issues except the Silver 2014 reference. I checked other papers on arXiv, and they cite it the exact same way. Let me know if you have other concerns.


@kyleniemeyer kyleniemeyer commented Oct 4, 2019

@whedon generate pdf


@whedon whedon commented Oct 4, 2019

Attempting PDF compilation. Reticulating splines etc...


@whedon whedon commented Oct 4, 2019


@kyleniemeyer kyleniemeyer commented Oct 4, 2019

Hi @hsezhiyan, thanks for making those edits. I just submitted a PR that fixes two final things in the references: for-ai/rl#33

  • The Sutton 2000 reference was still missing some information; it looks like you added the BibTeX citation I gave above, but your paper was referencing the original (incorrect) entry.
  • For the Silver 2014 reference, it looks like the other papers on arXiv are not citing it correctly (and, as a journal, we need to be a bit more rigorous with references); I found the paper at http://proceedings.mlr.press/v32/silver14.html, where they give the correct BibTeX citation info.


@hsezhiyan hsezhiyan commented Oct 4, 2019

@kyleniemeyer thanks for the review and the pull request! I have merged your PR.

Are we good to go?


@kyleniemeyer kyleniemeyer commented Oct 4, 2019

@whedon generate pdf


@whedon whedon commented Oct 4, 2019

Attempting PDF compilation. Reticulating splines etc...


@whedon whedon commented Oct 4, 2019


@kyleniemeyer kyleniemeyer commented Oct 4, 2019

@whedon set 10.5281/zenodo.3408453 as archive


@whedon whedon commented Oct 4, 2019

OK. 10.5281/zenodo.3408453 is the archive.


@kyleniemeyer kyleniemeyer commented Oct 4, 2019

@whedon accept


@whedon whedon commented Oct 4, 2019

Attempting dry run of processing paper acceptance...


@whedon whedon commented Oct 4, 2019


OK DOIs

- 10.5281/zenodo.1134899 is OK

MISSING DOIs

- None

INVALID DOIs

- None


@whedon whedon commented Oct 4, 2019

Check final proof 👉 openjournals/joss-papers#1006

If the paper PDF and Crossref deposit XML look good in openjournals/joss-papers#1006, then you can now move forward with accepting the submission by compiling again with the flag deposit=true e.g.

@whedon accept deposit=true


@kyleniemeyer kyleniemeyer commented Oct 4, 2019

@whedon accept deposit=true


@whedon whedon commented Oct 4, 2019

Doing it live! Attempting automated processing of paper acceptance...


@whedon whedon commented Oct 4, 2019

🐦🐦🐦 👉 Tweet for this paper 👈 🐦🐦🐦


@whedon whedon commented Oct 4, 2019

🚨🚨🚨 THIS IS NOT A DRILL, YOU HAVE JUST ACCEPTED A PAPER INTO JOSS! 🚨🚨🚨

Here's what you must now do:

  1. Check final PDF and Crossref metadata that was deposited 👉 openjournals/joss-papers#1007
  2. Wait a couple of minutes to verify that the paper DOI resolves https://doi.org/10.21105/joss.01524
  3. If everything looks good, then close this review issue.
  4. Party like you just published a paper! 🎉🌈🦄💃👻🤘

Any issues? Notify your editorial technical team...


@kyleniemeyer kyleniemeyer commented Oct 4, 2019

Congrats @hsezhiyan on your article's publication in JOSS!

Many thanks to @desilinguist & @paragkulkarni11 for reviewing, and @mbobra for editing, this submission.


@whedon whedon commented Oct 4, 2019

🎉🎉🎉 Congratulations on your paper acceptance! 🎉🎉🎉

If you would like to include a link to your paper from your README use the following code snippets:

Markdown:
[![DOI](https://joss.theoj.org/papers/10.21105/joss.01524/status.svg)](https://doi.org/10.21105/joss.01524)

HTML:
<a style="border-width:0" href="https://doi.org/10.21105/joss.01524">
  <img src="https://joss.theoj.org/papers/10.21105/joss.01524/status.svg" alt="DOI badge" >
</a>

reStructuredText:
.. image:: https://joss.theoj.org/papers/10.21105/joss.01524/status.svg
   :target: https://doi.org/10.21105/joss.01524

This is how it will look in your documentation:

DOI

We need your help!

Journal of Open Source Software is a community-run journal and relies upon volunteer effort. If you'd like to support us, please consider doing one (or both) of the following:


@hsezhiyan hsezhiyan commented Oct 5, 2019

Thank you very much @kyleniemeyer for your close collaboration at the last steps!

Thank you @mbobra for your guidance! And thanks @desilinguist and @paragkulkarni11 for your reviews!

@whedon whedon added published recommend-accept labels Mar 2, 2020