
JOSS Review: paper feedback #260

Closed
draabe opened this issue May 29, 2024 · 3 comments · Fixed by #261
Comments

draabe commented May 29, 2024

Hi,

here are my revision points for the software paper.

Summary

I think that, overall, the paper looks really great. It is well-written, concise, and easy to follow, yet instructive even for a non-domain-informed audience. There are a few points (see below) that could use some slight improvements/additions, but I feel the paper is close to checking the JOSS boxes.

Major points

  • Summary: I think the summary should be extended slightly to discuss the high-level functionality of your package in a bit more detail. The purpose becomes clear, but what exactly can I do as a user, i.e., what are the main functions? What environments can I create, what stochastic simulations can I run, and do you provide policies for reinforcement learning, or do I have to supply my own? Ideally, I'd know whether your package will help and fit within my research project or product after reading your Summary section. You could also consider summarizing the functionality in a dedicated Functionality section.
  • State of the field: Although I acknowledge that you mention other common approaches to SAR planning (particularly path planning), I still feel the state of the field could be described in more detail. What are the limitations of previous approaches, and how do approaches based on reinforcement learning address these limitations or improve the search strategies found? How is a SAR path planning problem typically defined, and how (i.e., with which metrics) is a solution evaluated?
  • State of the field (second point): Could you also briefly discuss other solutions (even if not publicly available or proprietary) as far as possible, and compare your package to them (e.g., in terms of functionality and availability)?
  • One last note on the structure of the paper: Anything after the second paragraph of the Statement of Need section (line 28 and below) does not, in my opinion, contribute further to the statement of need (which is fully and convincingly stated in the first two paragraphs). It could therefore be converted into a separate section (something like Package Description) for clarity. But I believe this is more a matter of personal preference, so I'll leave the decision to you.

Minor

10: the second sentence of the summary is quite long and confusing; consider splitting it into multiple sentences for simplicity.
11: equip's -> equips
25: researches propose -> research proposes
28: researches -> research

Figures

  • There seems to be an issue with the compilation of Figure 1

References

  • the link for the reference "International aeronautical and maritime search and rescue manual - volume II - mission co-ordination: Vol. II (9th ed.). (2022). (9th ed.)" does not compile nicely, but I'm also not sure how to fix it. Maybe the manual has an ISBN instead?
  • Terry, Black, ... (2021) is missing a DOI or URL
renatex333 commented Jun 3, 2024

Hi!

Thank you very much for your detailed and constructive feedback. I have addressed all the points you raised and have made the necessary revisions. Below is a summary of the changes made:

Major Points:

  • Summary: I have extended the summary to provide more details on the high-level functionality of the package, including the environments users can create and the support for training reinforcement learning policies.
    Additionally, regarding your question about whether the package provides reinforcement learning policies, I have added a brief mention in the Summary section for completeness, even though this is detailed in the official documentation under the "Algorithms" section.

  • State of the Field: I have expanded the discussion on the limitations of previous approaches to SAR planning and how reinforcement learning addresses these limitations. Additionally, I have provided more information about the metrics used to evaluate SAR path planning solutions. Finally, I have briefly discussed other solutions in the literature and compared our package to them in terms of functionality and availability.

  • Structure: I have moved the content after the second paragraph of the Statement of Need section to a new section titled "Functionality" for better clarity.

  • Implemented RL Algorithms: In the final paragraph, I included a reference to our repository, which contains numerous algorithms implemented by our team using the DSSE package. For detailed descriptions of the implemented algorithms, please refer to the official documentation.

Figures:

  • Addressed the compilation issue with Figure 1.

References:

  • Fixed the link for the "International Aeronautical and Maritime Search and Rescue Manual" reference and added an ISBN.

  • Added the missing URL for the Terry, Black, ... (2021) reference.

You can find all the modifications in this Pull Request. Please review the changes and let me know if there are any further adjustments needed.

Thank you again for your valuable feedback.

draabe commented Jun 10, 2024

Thank you, this reads like a promising set of improvements to the paper. Could you kindly update the branch from which the paper is compiled so the changes can be seen in the PDF?

renatex333 (Collaborator) commented:
Hi, @draabe!

The paper is compiled from both branches "main" and "paper-writing". I believe the last draft of the paper is available at the Draft PDF workflow run with id 55.
