
JOSS: repository review #262

Closed
draabe opened this issue Jun 10, 2024 · 2 comments
draabe commented Jun 10, 2024

Hi @renatex333,

I've checked the repository for its functionality, and here are my points with respect to the revision. In general, installation and core functionality work fine on my machine, and I like the overall structure of the package. The documentation also covers the most important points. However, I'm having some difficulties running the algorithms with the package being split into two repositories. Here are a few things I noticed:

Major

  • I do understand the reasoning behind splitting your package into two repositories, but this makes things a bit harder to use/review. For example, I was trying to run your algorithms on the environment, but wasn't sure how to accomplish this from the documentation. The file paths from the "How to run" section seem outdated, so it's taking some time to find the appropriate files. Furthermore, a short demo on how to combine the DSSE search or coverage environment with the algorithms would be very helpful.
  • As a consequence of this, it's a bit opaque which functionality is part of which repository, as the JOSS paper "technically" only covers the DSSE repo. Is everything from the DSSE-algorithms repo wrapped in DSSE? As above, this could be clarified with a short usage example.
  • One more point concerning documentation: I was wondering whether your package exposes any objects other than the env object for high- or low-level usage? This is difficult to see, as there seems to be no dedicated API reference in your documentation. If everything is limited to the env object, I feel that the documentation on the Search Environment and Coverage Environment is sufficiently explanatory for usage, but maybe I'm missing some functions?

Minor

  • Installation works fine, but unless I'm missing something, the GDAL dependency requires Microsoft Visual C++ 14.0 or greater for building. This could be mentioned in your documentation, as for some users this means it's not as easy as running pip install DSSE.
  • I really appreciate your minimalistic interface design and your choice of providing thorough Quick Start code and then building the explanation of functionality around it. The only thing I did not find sufficient information on is the interface of the random_policy function: what does it need to do in order to use it beyond the sample you provide, e.g. when I want to modify it?

Could you kindly help address these points? Let me know if anything is unclear.


renatex333 commented Jun 11, 2024

Hi!

Thank you for your valuable feedback. Here are my responses to the points you've raised:

Major Points:

  1. Outdated "How to Run" Section:
    Apologies for the confusion regarding the outdated "How to run" section. The algorithms repository had some modifications to its structure to align with the documentation, but these changes had not been merged at the time of your review. Now, everything is updated and in sync. The scripts prefixed with "train" and "test" provide real demos on how to combine our project with the well-established Reinforcement Learning library, RLlib.

  2. Clarifying Functionality Between Repositories:
    Alongside developing the DSSE environment, we conducted research that resulted in the algorithms repository. While these algorithms are specifically designed to use our environment, the project submitted for review focuses solely on the environment itself. The DSSE-algorithms repository serves as a practical demonstration of the scientific and research applications for the software developed.

  3. Documentation and API Reference:
    There are no additional hidden functionalities beyond the env objects. The package is built on top of the PettingZoo interface, extending reinforcement learning applications to the domain of autonomous drone SAR missions.
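    To make the env-object surface concrete: since the package follows PettingZoo's parallel API, all interaction is a reset/step loop over per-agent dictionaries. The sketch below uses a small stub environment in place of the real DSSE env; the class name, agent names, action count, and observation contents are placeholders for illustration, not the verified DSSE API:

    ```python
    import random

    class StubParallelEnv:
        """Minimal stand-in for a PettingZoo parallel env (placeholder,
        not the real DSSE environment)."""

        def __init__(self, n_agents=2, max_steps=5):
            self.agents = [f"drone{i}" for i in range(n_agents)]
            self.max_steps = max_steps
            self._step = 0

        def action_space_sample(self, agent):
            # A 9-way discrete action space is an assumption for this sketch.
            return random.randrange(9)

        def reset(self):
            self._step = 0
            observations = {a: None for a in self.agents}
            infos = {a: {} for a in self.agents}
            return observations, infos

        def step(self, actions):
            # PettingZoo parallel envs return five per-agent dictionaries.
            self._step += 1
            done = self._step >= self.max_steps
            observations = {a: None for a in self.agents}
            rewards = {a: 0.0 for a in self.agents}
            terminations = {a: done for a in self.agents}
            truncations = {a: False for a in self.agents}
            infos = {a: {} for a in self.agents}
            return observations, rewards, terminations, truncations, infos

    env = StubParallelEnv()
    observations, infos = env.reset()
    total_steps = 0
    while True:
        # A policy is anything that maps observations to one action per agent.
        actions = {a: env.action_space_sample(a) for a in env.agents}
        observations, rewards, terminations, truncations, infos = env.step(actions)
        total_steps += 1
        if all(terminations.values()):
            break
    print(total_steps)  # -> 5
    ```

    The real environments return richer observations, but the loop structure is the same under the PettingZoo parallel API.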

Minor Points:

  1. GDAL Dependency:
    Thank you for highlighting the GDAL dependency issue. It indeed requires Microsoft Visual C++ 14.0 or greater for building. We will add this information to the documentation to ensure users are aware of this requirement.

  2. Random Policy Function:
    The "random_policy" function is designed to abstract the concept of a model or function that chooses actions within the environment's action space. After training a model with RLlib, you can use it to select an action with the following code:

    from ray.rllib.algorithms.ppo import PPO

    # Restore the trained PPO model from an RLlib checkpoint directory
    model = PPO.from_checkpoint(checkpoint_path)

    # Ask the restored policy for one action given a single observation
    action = model.compute_single_action(observation)

    Here, checkpoint_path refers to the directory where RLlib saves the state of the model during training. This is essential for preserving progress in long training runs that can take many hours.
    I will make sure to include additional guidance on how to customize the "random_policy" function to meet specific needs.
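    As a sketch of that customization (the exact signature DSSE expects is not shown in this thread, so the two-argument form below, observations plus the agent list, and the 9-action discrete space are assumptions for illustration):

    ```python
    import random

    def random_policy(observations, agents):
        """Baseline policy: pick a random action for every agent.
        The 9-action discrete space is an assumption, not the verified spec."""
        return {agent: random.randrange(9) for agent in agents}

    def constant_policy(observations, agents):
        """A custom policy only needs the same shape of interface:
        observations in, one action per agent out. Here every drone
        always takes action 0 (hypothetically a fixed movement)."""
        return {agent: 0 for agent in agents}

    agents = ["drone0", "drone1"]
    obs = {a: None for a in agents}
    actions = constant_policy(obs, agents)
    print(actions)  # -> {'drone0': 0, 'drone1': 0}
    ```

    Any callable with this contract, including a trained model wrapper like the PPO snippet above, can then drive the environment loop.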

Thank you again for your insightful feedback. If there are any further questions or clarifications needed, please let me know. I look forward to your continued review and any additional suggestions you might have.


draabe commented Jun 30, 2024

Great, thanks!

@draabe draabe closed this as completed Jun 30, 2024