
[REVIEW]: graphenv: a Python library for reinforcement learning on graph search spaces #4621

Closed
editorialbot opened this issue Jul 29, 2022 · 62 comments
Labels: accepted, Jupyter Notebook, published, Python, recommend-accept, review, Shell


@editorialbot
Collaborator

editorialbot commented Jul 29, 2022

Submitting author: @pstjohn (Peter C. St. John)
Repository: https://github.com/NREL/graph-env/
Branch with paper.md (empty if default branch):
Version: v0.1.3
Editor: @osorensen
Reviewers: @iammix, @vwxyzjn, @Viech, @idoby
Archive: 10.5281/zenodo.7030161

Status

[status badge]

Status badge code:

HTML: <a href="https://joss.theoj.org/papers/1d1427f9c72a4a7fb9ac2a53506e0733"><img src="https://joss.theoj.org/papers/1d1427f9c72a4a7fb9ac2a53506e0733/status.svg"></a>
Markdown: [![status](https://joss.theoj.org/papers/1d1427f9c72a4a7fb9ac2a53506e0733/status.svg)](https://joss.theoj.org/papers/1d1427f9c72a4a7fb9ac2a53506e0733)

Reviewers and authors:

Please avoid lengthy details of difficulties in the review thread. Instead, please create a new issue in the target repository and link to those issues (especially acceptance-blockers) by leaving comments in the review thread below. (For completists: if the target issue tracker is also on GitHub, linking the review thread in the issue or vice versa will create corresponding breadcrumb trails in the link target.)

Reviewer instructions & questions

@iammix & @vwxyzjn & @Viech & @idoby, your review will be checklist based. Each of you will have a separate checklist that you should update when carrying out your review.
First of all, you need to run this command in a separate comment to create the checklist:

@editorialbot generate my checklist

The reviewer guidelines are available here: https://joss.readthedocs.io/en/latest/reviewer_guidelines.html. Any questions/concerns please let @osorensen know.

Please start on your review when you are able, and be sure to complete your review within the next six weeks at the very latest.

Checklists

📝 Checklist for @iammix

📝 Checklist for @vwxyzjn

📝 Checklist for @idoby

📝 Checklist for @Viech

@editorialbot
Collaborator Author

Hello humans, I'm @editorialbot, a robot that can help you with some common editorial tasks.

For a list of things I can do to help you, just type:

@editorialbot commands

For example, to regenerate the paper pdf after making changes in the paper's md or bib files, type:

@editorialbot generate pdf

@editorialbot
Collaborator Author

Software report:

github.com/AlDanial/cloc v 1.88  T=0.07 s (818.7 files/s, 128621.4 lines/s)
-------------------------------------------------------------------------------
Language                     files          blank        comment           code
-------------------------------------------------------------------------------
Python                          28            967           1577           2962
Markdown                         6            108              0            319
Jupyter Notebook                 7              0           2307            279
YAML                             6             36            115            216
TeX                              1             12              0            134
reStructuredText                 7             57            131             62
JSON                             1              0              0             58
DOS Batch                        1              8              1             26
Bourne Shell                     1              1              7             16
make                             2              5              7             15
-------------------------------------------------------------------------------
SUM:                            60           1194           4145           4087
-------------------------------------------------------------------------------


gitinspector failed to run statistical information for the repository

@editorialbot
Collaborator Author

Wordcount for paper.md is 1026

@editorialbot
Collaborator Author

Reference check summary (note 'MISSING' DOIs are suggestions that need verification):

OK DOIs

- 10.1016/j.patter.2021.100361 is OK
- 10.1039/d1sc02770k is OK
- 10.1038/s41597-020-00588-x is OK
- 10.1038/s41467-020-16201-z is OK
- 10.1007/978-3-030-50426-7_33 is OK
- 10.1038/s41598-019-47148-x is OK

MISSING DOIs

- None

INVALID DOIs

- 10.1038/s42256-022-00506-3 is INVALID

@editorialbot
Collaborator Author

👉📄 Download article proof 📄 View article proof on GitHub 📄 👈

@idoby

idoby commented Jul 29, 2022

Review checklist for @idoby

Conflict of interest

  • I confirm that I have read the JOSS conflict of interest (COI) policy and that: I have no COIs with reviewing this work or that any perceived COIs have been waived by JOSS for the purpose of this review.

Code of Conduct

General checks

  • Repository: Is the source code for this software available at https://github.com/NREL/graph-env/?
  • License: Does the repository contain a plain-text LICENSE file with the contents of an OSI approved software license?
  • Contribution and authorship: Has the submitting author (@pstjohn) made major contributions to the software? Does the full list of paper authors seem appropriate and complete?
  • Substantial scholarly effort: Does this submission meet the scope eligibility described in the JOSS guidelines?

Functionality

  • Installation: Does installation proceed as outlined in the documentation?
  • Functionality: Have the functional claims of the software been confirmed?
  • Performance: If there are any performance claims of the software, have they been confirmed? (If there are no claims, please check off this item.)

Documentation

  • A statement of need: Do the authors clearly state what problems the software is designed to solve and who the target audience is?
  • Installation instructions: Is there a clearly-stated list of dependencies? Ideally these should be handled with an automated package management solution.
  • Example usage: Do the authors include examples of how to use the software (ideally to solve real-world analysis problems).
  • Functionality documentation: Is the core functionality of the software documented to a satisfactory level (e.g., API method documentation)?
  • Automated tests: Are there automated tests or manual steps described so that the functionality of the software can be verified?
  • Community guidelines: Are there clear guidelines for third parties wishing to 1) Contribute to the software 2) Report issues or problems with the software 3) Seek support

Software paper

  • Summary: Has a clear description of the high-level functionality and purpose of the software for a diverse, non-specialist audience been provided?
  • A statement of need: Does the paper have a section titled 'Statement of need' that clearly states what problems the software is designed to solve, who the target audience is, and its relation to other work?
  • State of the field: Do the authors describe how this software compares to other commonly-used packages?
  • Quality of writing: Is the paper well written (i.e., it does not require editing for structure, language, or writing quality)?
  • References: Is the list of references complete, and is everything cited appropriately that should be cited (e.g., papers, datasets, software)? Do references in the text use the proper citation syntax?

@iammix

iammix commented Jul 29, 2022

Review checklist for @iammix

Conflict of interest

  • I confirm that I have read the JOSS conflict of interest (COI) policy and that: I have no COIs with reviewing this work or that any perceived COIs have been waived by JOSS for the purpose of this review.

Code of Conduct

General checks

  • Repository: Is the source code for this software available at https://github.com/NREL/graph-env/?
  • License: Does the repository contain a plain-text LICENSE file with the contents of an OSI approved software license?
  • Contribution and authorship: Has the submitting author (@pstjohn) made major contributions to the software? Does the full list of paper authors seem appropriate and complete?
  • Substantial scholarly effort: Does this submission meet the scope eligibility described in the JOSS guidelines?

Functionality

  • Installation: Does installation proceed as outlined in the documentation?
  • Functionality: Have the functional claims of the software been confirmed?
  • Performance: If there are any performance claims of the software, have they been confirmed? (If there are no claims, please check off this item.)

Documentation

  • A statement of need: Do the authors clearly state what problems the software is designed to solve and who the target audience is?
  • Installation instructions: Is there a clearly-stated list of dependencies? Ideally these should be handled with an automated package management solution.
  • Example usage: Do the authors include examples of how to use the software (ideally to solve real-world analysis problems).
  • Functionality documentation: Is the core functionality of the software documented to a satisfactory level (e.g., API method documentation)?
  • Automated tests: Are there automated tests or manual steps described so that the functionality of the software can be verified?
  • Community guidelines: Are there clear guidelines for third parties wishing to 1) Contribute to the software 2) Report issues or problems with the software 3) Seek support

Software paper

  • Summary: Has a clear description of the high-level functionality and purpose of the software for a diverse, non-specialist audience been provided?
  • A statement of need: Does the paper have a section titled 'Statement of need' that clearly states what problems the software is designed to solve, who the target audience is, and its relation to other work?
  • State of the field: Do the authors describe how this software compares to other commonly-used packages?
  • Quality of writing: Is the paper well written (i.e., it does not require editing for structure, language, or writing quality)?
  • References: Is the list of references complete, and is everything cited appropriately that should be cited (e.g., papers, datasets, software)? Do references in the text use the proper citation syntax?

@idoby

idoby commented Jul 29, 2022

This seems like a very interesting package! I will have the time to dive in more in-depth soon, but in the meanwhile, can the authors fix the citation on line 17 of the paper to be consistent with the other citations, and resolve the invalid citation issue?

Also, please clarify the contributions of the two extra authors who are not listed as collaborators on the GitHub repo.

Thanks!

@pstjohn

pstjohn commented Jul 30, 2022

@editorialbot generate pdf

@editorialbot
Collaborator Author

👉📄 Download article proof 📄 View article proof on GitHub 📄 👈

@editorialbot
Collaborator Author

Hello @idoby, here are the things you can ask me to do:


# List all available commands
@editorialbot commands

# Get a list of all editors' GitHub handles
@editorialbot list editors

# Check the references of the paper for missing DOIs
@editorialbot check references

# Perform checks on the repository
@editorialbot check repository

# Adds a checklist for the reviewer using this command
@editorialbot generate my checklist

# Set a value for branch
@editorialbot set joss-paper as branch

# Generates the pdf paper
@editorialbot generate pdf

# Get a link to the complete list of reviewers
@editorialbot list reviewers

@editorialbot
Collaborator Author

Reference check summary (note 'MISSING' DOIs are suggestions that need verification):

OK DOIs

- 10.1016/j.patter.2021.100361 is OK
- 10.1039/d1sc02770k is OK
- 10.1038/s41597-020-00588-x is OK
- 10.1038/s41467-020-16201-z is OK
- 10.1007/978-3-030-50426-7_33 is OK
- 10.1038/s41598-019-47148-x is OK

MISSING DOIs

- None

INVALID DOIs

- 10.1038/s42256-022-00506-3 is INVALID

@pstjohn

pstjohn commented Jul 30, 2022

Apologies if these contribution edits should have been made in the manuscript?

That invalid DOI, 10.1038/s42256-022-00506-3, should be live starting 04 August 2022 at 11:00 (US Eastern Time)

@idoby

idoby commented Jul 30, 2022

> Apologies if these contribution edits should have been made in the manuscript?

I'm pretty new at reviewing for JOSS, but the documentation does not state this information has to be specified in the manuscript. So I think just your comment should be fine.

> That invalid DOI, 10.1038/s42256-022-00506-3, should be live starting 04 August 2022 at 11:00 (US Eastern Time)

OK, LGTM then. Appreciate your work on the library and thank you for contributing it. :)

@idoby

idoby commented Jul 30, 2022

@pstjohn Looking at it now, it seems like the merge commit for Dmitry's PR prevented him from being listed as a contributor by GitHub. I'm not sure why, since it usually works fine.

@osorensen
Member

> Apologies if these contribution edits should have been made in the manuscript?

> I'm pretty new at reviewing for JOSS, but the documentation does not state this information has to be specified in the manuscript. So I think just your comment should be fine.

> That invalid DOI, 10.1038/s42256-022-00506-3, should be live starting 04 August 2022 at 11:00 (US Eastern Time)

> OK, LGTM then. Appreciate your work on the library and thank you for contributing it. :)

Wanted to chime in as editor that I agree with you. Comments in this thread are sufficient.

@vwxyzjn

vwxyzjn commented Jul 30, 2022

Review checklist for @vwxyzjn

Conflict of interest

  • I confirm that I have read the JOSS conflict of interest (COI) policy and that: I have no COIs with reviewing this work or that any perceived COIs have been waived by JOSS for the purpose of this review.

Code of Conduct

General checks

  • Repository: Is the source code for this software available at https://github.com/NREL/graph-env/?
  • License: Does the repository contain a plain-text LICENSE file with the contents of an OSI approved software license?
  • Contribution and authorship: Has the submitting author (@pstjohn) made major contributions to the software? Does the full list of paper authors seem appropriate and complete?
  • Substantial scholarly effort: Does this submission meet the scope eligibility described in the JOSS guidelines?

Functionality

  • Installation: Does installation proceed as outlined in the documentation?
  • Functionality: Have the functional claims of the software been confirmed?
  • Performance: If there are any performance claims of the software, have they been confirmed? (If there are no claims, please check off this item.)

Documentation

  • A statement of need: Do the authors clearly state what problems the software is designed to solve and who the target audience is?
  • Installation instructions: Is there a clearly-stated list of dependencies? Ideally these should be handled with an automated package management solution.
    • Yes, but there are no locked versions. This means the software could break in the future when dependencies introduce breaking changes.
  • Example usage: Do the authors include examples of how to use the software (ideally to solve real-world analysis problems).
  • Functionality documentation: Is the core functionality of the software documented to a satisfactory level (e.g., API method documentation)?
  • Automated tests: Are there automated tests or manual steps described so that the functionality of the software can be verified?
  • Community guidelines: Are there clear guidelines for third parties wishing to 1) Contribute to the software 2) Report issues or problems with the software 3) Seek support

Software paper

  • Summary: Has a clear description of the high-level functionality and purpose of the software for a diverse, non-specialist audience been provided?
  • A statement of need: Does the paper have a section titled 'Statement of need' that clearly states what problems the software is designed to solve, who the target audience is, and its relation to other work?
  • State of the field: Do the authors describe how this software compares to other commonly-used packages?
  • Quality of writing: Is the paper well written (i.e., it does not require editing for structure, language, or writing quality)?
  • References: Is the list of references complete, and is everything cited appropriately that should be cited (e.g., papers, datasets, software)? Do references in the text use the proper citation syntax?

@pstjohn

pstjohn commented Aug 4, 2022

@editorialbot check references

@editorialbot
Collaborator Author

Reference check summary (note 'MISSING' DOIs are suggestions that need verification):

OK DOIs

- 10.1016/j.patter.2021.100361 is OK
- 10.1039/d1sc02770k is OK
- 10.1038/s41597-020-00588-x is OK
- 10.1038/s41467-020-16201-z is OK
- 10.1038/s42256-022-00506-3 is OK
- 10.1007/978-3-030-50426-7_33 is OK
- 10.1038/s41598-019-47148-x is OK

MISSING DOIs

- None

INVALID DOIs

- None

@vwxyzjn

vwxyzjn commented Aug 8, 2022

The Paper and repo look good! Opened a couple of issues:

NREL/graph-env#33
NREL/graph-env#34

Also, please consider filling out the github repo's description and keywords. See the screenshot below as an example.
[Screenshot: example repository description and keywords]

@Viech

Viech commented Aug 10, 2022

Review checklist for @Viech

Conflict of interest

  • I confirm that I have read the JOSS conflict of interest (COI) policy and that: I have no COIs with reviewing this work or that any perceived COIs have been waived by JOSS for the purpose of this review.

Code of Conduct

General checks

  • Repository: Is the source code for this software available at https://github.com/NREL/graph-env/?
  • License: Does the repository contain a plain-text LICENSE file with the contents of an OSI approved software license?
  • Contribution and authorship: Has the submitting author (@pstjohn) made major contributions to the software? Does the full list of paper authors seem appropriate and complete?
  • Substantial scholarly effort: Does this submission meet the scope eligibility described in the JOSS guidelines?

Functionality

  • Installation: Does installation proceed as outlined in the documentation?
  • Functionality: Have the functional claims of the software been confirmed?
  • Performance: If there are any performance claims of the software, have they been confirmed? (If there are no claims, please check off this item.)

Documentation

  • A statement of need: Do the authors clearly state what problems the software is designed to solve and who the target audience is?
  • Installation instructions: Is there a clearly-stated list of dependencies? Ideally these should be handled with an automated package management solution.
  • Example usage: Do the authors include examples of how to use the software (ideally to solve real-world analysis problems).
  • Functionality documentation: Is the core functionality of the software documented to a satisfactory level (e.g., API method documentation)?
  • Automated tests: Are there automated tests or manual steps described so that the functionality of the software can be verified?
  • Community guidelines: Are there clear guidelines for third parties wishing to 1) Contribute to the software 2) Report issues or problems with the software 3) Seek support

Software paper

  • Summary: Has a clear description of the high-level functionality and purpose of the software for a diverse, non-specialist audience been provided?
  • A statement of need: Does the paper have a section titled 'Statement of need' that clearly states what problems the software is designed to solve, who the target audience is, and its relation to other work?
  • State of the field: Do the authors describe how this software compares to other commonly-used packages?
  • Quality of writing: Is the paper well written (i.e., it does not require editing for structure, language, or writing quality)?
  • References: Is the list of references complete, and is everything cited appropriately that should be cited (e.g., papers, datasets, software)? Do references in the text use the proper citation syntax?

@Viech

Viech commented Aug 10, 2022

I have completed my review with some remarks below and will tick my remaining boxes when these points have been addressed or discussed. Overall, I find that this is a small but nicely maintained and documented package whose scope could be described a bit better to a non-specialist audience.

General checks

Substantial scholarly effort

Functionally this is a very small library: excluding the examples subfolder, there are significantly fewer than 1000 lines of code in the graphenv folder that appears to contain the library as such, so I should flag this to @osorensen. This code gives the impression of a joint wrapper around two existing libraries. On the other hand, a substantial effort is apparent in the presentation and integration of this code in the form of tests, examples, and deployment scripts. I can also see that the functionality offered is useful and not otherwise accessible to users unfamiliar with the backend libraries used.

Functionality

Installation

  • I have opened #36.

Documentation

A statement of need

  • JOSS asks for the target audience to be mentioned in the documentation.

Installation instructions

  • The dependencies are handled by pip but it would be good to have them listed also in the documentation (for working with the source and for external package managers).
  • It appears that one can choose between tensorflow and torch, with one of them being required. This is not documented in the "Installation" section.

Manuscript

I find that the manuscript is well-written and concise but the distinction between graph search problems and other problem representations could be made more clear as this seems to be the selling point for the software.

Author list

  • If possible with the template, you should use inter-word spacing between "St." and "John".
  • If possible, the digit 2 and the pilcrow in the author affiliations would look better on the respective next line, before their associated text. (Also, I would expect some contact info for the corresponding author; was this missed or is it not intended?)

Summary

  • The concept of "strong relational inductive biases" is not made clear to a general audience.
  • It would be nice to get an intuition on why certain problems are graph search representable but do not have an algebraic form (beyond mentioning molecular optimization as an example) as this seems to be the main motivation to use your package over classical optimization techniques.
  • "or integer program" -> "or integer programs"?
  • I don't understand why you mention "with well defined linear objective function and linear constraints": If linearity is important here, you could just write "integer linear programs". Otherwise, mentioning linearity does not support your argument that graph search problems are more expressive than classical representations.
  • You use both a "traveling salesman problem" and "Traveling Salesman Problem" capitalization.
  • You write "RLLib" while the associated paper writes it as "RLlib" in the title.

Statement of need

  • JOSS asks for the target audience to be mentioned here.
  • In the RLLib paragraph, it is not clear how the concepts of parametrically defined and invalid actions, flattening, masking, and successor state relate to what we learned in the summary. If these are central concepts, you should consider introducing them briefly in the summary. Otherwise this paragraph might be too technical for a statement of need.
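As an aside for readers unfamiliar with the terminology in that paragraph, the general idea of invalid-action masking can be sketched as follows. This is a hypothetical, generic illustration (plain NumPy, not graphenv's or RLlib's actual API): logits for invalid successor actions are forced to negative infinity so the policy assigns them zero probability.

```python
import numpy as np

def masked_softmax(logits, valid_mask):
    """Softmax over action logits, with invalid actions forced to probability 0.

    Hypothetical sketch: a state exposes a fixed-size slot of successor
    actions plus a boolean mask marking which slots are valid.
    """
    logits = np.asarray(logits, dtype=float)
    valid = np.asarray(valid_mask, dtype=bool)
    masked = np.where(valid, logits, -np.inf)   # invalid actions -> -inf
    shifted = masked - masked[valid].max()      # subtract max for stability
    exp = np.exp(shifted)                       # exp(-inf) == 0
    return exp / exp.sum()

# Three action slots; the second leads to an invalid successor state.
probs = masked_softmax([1.0, 2.0, 0.5], [True, False, True])
# probs[1] is exactly 0, and the remaining probabilities sum to 1.
```

The same masking trick is what makes parametrically defined action spaces workable: however many successors a state happens to have, the policy only ever samples among the valid ones.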

@osorensen
Member

Thanks for your review @Viech!

Regarding scholarly scope, the guidelines list some factors to consider, which I will go through point by point below while trying to "think out loud".

  • Age of software (is this a well-established software project) / length of commit history.

Looking at the repository, I find that the software seems rather young, with the first commit made on February 11 2022.

  • Number of commits.

This is rather large, 241 in total.

  • Number of authors.

There have been four contributors, which suggests that this is a joint effort.

  • Total lines of code (LOC). Submissions under 1000 LOC will usually be flagged, those under 300 LOC will be desk rejected.

This one has already been commented on in @Viech's review.

  • Whether the software has already been cited in academic papers.

Could you please comment on this @pstjohn? Is the software used in academic papers, or are you aware of papers in preparation that will use the package?

  • Whether the software is sufficiently useful that it is likely to be cited by your peer group.

This is an important point. Making a joint wrapper around two existing libraries can in itself be very useful for users not familiar with those libraries.

In sum, I understand your concern about scholarly scope @Viech, and see that this is a difficult case. I would also be interested to hear what @iammix thinks of this, as the "Substantial scholarly effort" button has not been ticked off in your review checklist.

Also @Viech, could you think of any suggested extensions of the package that would make it more clearly within scope for JOSS?

You're of course also welcome to comment on this @pstjohn.

@idoby

idoby commented Aug 10, 2022

@osorensen Just adding a comment here since I did check the respective checkbox. Following the discussion here I went back and checked the commit history in greater depth and found that a large number of commits are just one-liners or seem to be purely about cosmetics/experimenting with settings.

So I would suggest the authors

  • document these successful (or less successful) configurations and experiments to convince the reader that their library is indeed robust and stands for more than just integration between various libraries (which is great but personally not something I would read a journal to find out about), and
  • go back and clean up some of the history so that important development milestones stand out more clearly.

Personally, I believe git history to be more than just a storage space for old versions of the code, it also serves to document the design and development process for a project, document decisions made, etc.

(also, apologies, this is my first review and I didn't realize reviews could extend beyond the checklist)

@editorialbot
Collaborator Author

Reference check summary (note 'MISSING' DOIs are suggestions that need verification):

OK DOIs

- 10.1016/j.patter.2021.100361 is OK
- 10.1039/d1sc02770k is OK
- 10.1038/s41597-020-00588-x is OK
- 10.1038/s41467-020-16201-z is OK
- 10.1038/s42256-022-00506-3 is OK
- 10.1007/978-3-030-50426-7_33 is OK
- 10.1038/s41598-019-47148-x is OK

MISSING DOIs

- None

INVALID DOIs

- None

@editorialbot
Collaborator Author

👉📄 Download article proof 📄 View article proof on GitHub 📄 👈

@osorensen
Member

@pstjohn, as you see I've opened three small issues in the source repository. When you have addressed these, and the four points about release and archiving mentioned in my comment yesterday, I'm ready to go forward with acceptance.

@pstjohn

pstjohn commented Aug 28, 2022

Took a couple of tries to get the .zenodo.json metadata in the right format, but I think I've made the requested paper edits and completed the associated tasks:

  • Make a tagged release of your software, and list the version tag of the archived version here.
    v0.1.3
  • Archive the reviewed software in Zenodo or a similar service (e.g., figshare, an institutional repository)
  • Check the archival deposit (e.g., in Zenodo) has the correct metadata. This includes the title (should match the paper title) and author list (make sure the list is correct and people who only made a small fix are not on it). You may also add the authors' ORCID.
  • Please list the DOI of the archived version here.
    10.5281/zenodo.7030161

@pstjohn

pstjohn commented Aug 28, 2022

@editorialbot check references

@editorialbot
Collaborator Author

Reference check summary (note 'MISSING' DOIs are suggestions that need verification):

OK DOIs

- 10.48550/arXiv.1806.01261 is OK
- 10.11578/dc.20201221.3 is OK
- 10.48550/arXiv.1606.01540 is OK
- 10.48550/arXiv.1712.09381 is OK
- 10.1016/j.patter.2021.100361 is OK
- 10.1039/d1sc02770k is OK
- 10.1038/s41597-020-00588-x is OK
- 10.1038/s41467-020-16201-z is OK
- 10.1038/s42256-022-00506-3 is OK
- 10.1007/978-3-030-50426-7_33 is OK
- 10.48550/arXiv.2011.06069 is OK
- 10.1038/s41598-019-47148-x is OK

MISSING DOIs

- None

INVALID DOIs

- None

@pstjohn

pstjohn commented Aug 28, 2022

@editorialbot generate pdf

@editorialbot
Collaborator Author

👉📄 Download article proof 📄 View article proof on GitHub 📄 👈

@osorensen
Member

@editorialbot set 10.5281/zenodo.7030161 as archive

@editorialbot
Collaborator Author

Done! Archive is now 10.5281/zenodo.7030161

@osorensen
Member

@editorialbot set v0.1.3 as version

@editorialbot
Collaborator Author

Done! version is now v0.1.3

@osorensen
Member

@editorialbot recommend-accept

@editorialbot
Collaborator Author

Attempting dry run of processing paper acceptance...

@editorialbot
Collaborator Author

Reference check summary (note 'MISSING' DOIs are suggestions that need verification):

OK DOIs

- 10.48550/arXiv.1806.01261 is OK
- 10.11578/dc.20201221.3 is OK
- 10.48550/arXiv.1606.01540 is OK
- 10.48550/arXiv.1712.09381 is OK
- 10.1016/j.patter.2021.100361 is OK
- 10.1039/d1sc02770k is OK
- 10.1038/s41597-020-00588-x is OK
- 10.1038/s41467-020-16201-z is OK
- 10.1038/s42256-022-00506-3 is OK
- 10.1007/978-3-030-50426-7_33 is OK
- 10.48550/arXiv.2011.06069 is OK
- 10.1038/s41598-019-47148-x is OK

MISSING DOIs

- None

INVALID DOIs

- None

@editorialbot
Collaborator Author

👋 @openjournals/joss-eics, this paper is ready to be accepted and published.

Check final proof 👉📄 Download article

If the paper PDF and the deposit XML files look good in openjournals/joss-papers#3486, then you can now move forward with accepting the submission by compiling again with the command @editorialbot accept

editorialbot added the recommend-accept label on Aug 29, 2022
@osorensen
Member

Thanks for your very good and useful reviews @iammix, @vwxyzjn, @Viech, @idoby.

Thanks for responding so quickly @pstjohn, and congratulations on a very nice piece of software and a well-written paper.

The editor in chief will take over from here.

@arfon
Member

arfon commented Sep 5, 2022

@editorialbot accept

@editorialbot
Collaborator Author

Doing it live! Attempting automated processing of paper acceptance...

@editorialbot
Collaborator Author

🐦🐦🐦 👉 Tweet for this paper 👈 🐦🐦🐦

@editorialbot
Collaborator Author

🚨🚨🚨 THIS IS NOT A DRILL, YOU HAVE JUST ACCEPTED A PAPER INTO JOSS! 🚨🚨🚨

Here's what you must now do:

  1. Check final PDF and Crossref metadata that was deposited 👉 Creating pull request for 10.21105.joss.04621 joss-papers#3505
  2. Wait a couple of minutes, then verify that the paper DOI resolves https://doi.org/10.21105/joss.04621
  3. If everything looks good, then close this review issue.
  4. Party like you just published a paper! 🎉🌈🦄💃👻🤘

Any issues? Notify your editorial technical team...

editorialbot added the accepted and published labels on Sep 5, 2022
@arfon
Member

arfon commented Sep 5, 2022

@iammix, @vwxyzjn, @Viech, @idoby – many thanks for your reviews here and to @osorensen for editing this submission! JOSS relies upon the volunteer effort of people like you and we simply wouldn't be able to do this without you ✨

@pstjohn – your paper is now accepted and published in JOSS ⚡🚀💥

arfon closed this as completed on Sep 5, 2022
@editorialbot
Collaborator Author

🎉🎉🎉 Congratulations on your paper acceptance! 🎉🎉🎉

If you would like to include a link to your paper from your README use the following code snippets:

Markdown:
[![DOI](https://joss.theoj.org/papers/10.21105/joss.04621/status.svg)](https://doi.org/10.21105/joss.04621)

HTML:
<a style="border-width:0" href="https://doi.org/10.21105/joss.04621">
  <img src="https://joss.theoj.org/papers/10.21105/joss.04621/status.svg" alt="DOI badge" >
</a>

reStructuredText:
.. image:: https://joss.theoj.org/papers/10.21105/joss.04621/status.svg
   :target: https://doi.org/10.21105/joss.04621

This is how it will look in your documentation:

[DOI badge]

We need your help!

The Journal of Open Source Software is a community-run journal and relies upon volunteer effort. If you'd like to support us please consider doing either one (or both) of the following:
