
[REVIEW]: DICaugment: A Python Package for 3D Medical Imaging Augmentation #6120

Closed
editorialbot opened this issue Dec 4, 2023 · 49 comments
Labels: accepted, published (Papers published in JOSS), Python, recommend-accept (Papers recommended for acceptance in JOSS), review, TeX, Track: 5 (DSAIS) Data Science, Artificial Intelligence, and Machine Learning

Comments

@editorialbot
Collaborator

editorialbot commented Dec 4, 2023

Submitting author: @jjmcintosh (Jacob McIntosh)
Repository: https://github.com/DIDSR/DICaugment
Branch with paper.md (empty if default branch):
Version: v1.0.7
Editor: @osorensen
Reviewers: @MaierOli2010, @kuadrat
Archive: 10.5281/zenodo.10738855

Status


Status badge code:

HTML: <a href="https://joss.theoj.org/papers/213e7346d6494985084a47f3e6d5ae29"><img src="https://joss.theoj.org/papers/213e7346d6494985084a47f3e6d5ae29/status.svg"></a>
Markdown: [![status](https://joss.theoj.org/papers/213e7346d6494985084a47f3e6d5ae29/status.svg)](https://joss.theoj.org/papers/213e7346d6494985084a47f3e6d5ae29)

Reviewers and authors:

Please avoid lengthy details of difficulties in the review thread. Instead, please create a new issue in the target repository and link to those issues (especially acceptance-blockers) by leaving comments in the review thread below. (For completists: if the target issue tracker is also on GitHub, linking the review thread in the issue or vice versa will create corresponding breadcrumb trails in the link target.)

Reviewer instructions & questions

@MaierOli2010 & @kuadrat, your review will be checklist-based. Each of you will have a separate checklist that you should update when carrying out your review.
First of all you need to run this command in a separate comment to create the checklist:

@editorialbot generate my checklist

The reviewer guidelines are available here: https://joss.readthedocs.io/en/latest/reviewer_guidelines.html. Any questions/concerns please let @osorensen know.

Please start on your review when you are able, and be sure to complete your review within the next six weeks at the very latest.

Checklists

📝 Checklist for @MaierOli2010

📝 Checklist for @kuadrat

@editorialbot added the Python, review, TeX, Track: 5 (DSAIS) Data Science, Artificial Intelligence, and Machine Learning, and waitlisted (Submissions in the JOSS backlog due to reduced service mode) labels on Dec 4, 2023
@editorialbot
Collaborator Author

Hello humans, I'm @editorialbot, a robot that can help you with some common editorial tasks.

For a list of things I can do to help you, just type:

@editorialbot commands

For example, to regenerate the paper pdf after making changes in the paper's md or bib files, type:

@editorialbot generate pdf

@editorialbot
Collaborator Author

Software report:

github.com/AlDanial/cloc v 1.88  T=0.17 s (522.8 files/s, 135040.0 lines/s)
-------------------------------------------------------------------------------
Language                     files          blank        comment           code
-------------------------------------------------------------------------------
Python                          57           4009           7687           9789
XML                              1              0              0            417
reStructuredText                16            308            143            390
Markdown                         2             73              0            167
TeX                              1              8              0            123
YAML                             2              7              0             45
DOS Batch                        1              8              1             26
CSS                              2              0              0             14
make                             1              4              7              9
JSON                             6              0              0              6
HTML                             1              0              0              4
-------------------------------------------------------------------------------
SUM:                            90           4417           7838          10990
-------------------------------------------------------------------------------


gitinspector failed to run statistical information for the repository

@editorialbot
Collaborator Author

Wordcount for paper.md is 1297

@editorialbot
Collaborator Author

Reference check summary (note 'MISSING' DOIs are suggestions that need verification):

OK DOIs

- 10.3390/info11020125 is OK
- 10.1016/j.compbiomed.2021.105089 is OK

MISSING DOIs

- 10.1118/1.4752209 may be a valid DOI for title: Quantitative comparison of noise texture across CT scanners from different manufacturers

INVALID DOIs

- None

@editorialbot
Collaborator Author

👉📄 Download article proof 📄 View article proof on GitHub 📄 👈

@arfon removed the waitlisted (Submissions in the JOSS backlog due to reduced service mode) label on Dec 5, 2023
@MaierOli2010

MaierOli2010 commented Dec 5, 2023

Review checklist for @MaierOli2010

Conflict of interest

  • I confirm that I have read the JOSS conflict of interest (COI) policy and that: I have no COIs with reviewing this work or that any perceived COIs have been waived by JOSS for the purpose of this review.

Code of Conduct

  • I confirm that I read and will adhere to the JOSS code of conduct.

General checks

  • Repository: Is the source code for this software available at https://github.com/DIDSR/DICaugment?
  • License: Does the repository contain a plain-text LICENSE or COPYING file with the contents of an OSI approved software license?
  • Contribution and authorship: Has the submitting author (@jjmcintosh) made major contributions to the software? Does the full list of paper authors seem appropriate and complete?
  • Substantial scholarly effort: Does this submission meet the scope eligibility described in the JOSS guidelines?
  • Data sharing: If the paper contains original data, data are accessible to the reviewers. If the paper contains no original data, please check this item.
  • Reproducibility: If the paper contains original results, results are entirely reproducible by reviewers. If the paper contains no original results, please check this item.
  • Human and animal research: If the paper contains original data from research on human subjects or animals, does it comply with JOSS's human participants research policy and/or animal research policy? If the paper contains no such data, please check this item.

Functionality

  • Installation: Does installation proceed as outlined in the documentation?
  • Functionality: Have the functional claims of the software been confirmed?
  • Performance: If there are any performance claims of the software, have they been confirmed? (If there are no claims, please check off this item.)

Documentation

  • A statement of need: Do the authors clearly state what problems the software is designed to solve and who the target audience is?
  • Installation instructions: Is there a clearly-stated list of dependencies? Ideally these should be handled with an automated package management solution.
  • Example usage: Do the authors include examples of how to use the software (ideally to solve real-world analysis problems)?
  • Functionality documentation: Is the core functionality of the software documented to a satisfactory level (e.g., API method documentation)?
  • Automated tests: Are there automated tests or manual steps described so that the functionality of the software can be verified?
  • Community guidelines: Are there clear guidelines for third parties wishing to 1) Contribute to the software 2) Report issues or problems with the software 3) Seek support

Software paper

  • Summary: Has a clear description of the high-level functionality and purpose of the software for a diverse, non-specialist audience been provided?
  • A statement of need: Does the paper have a section titled 'Statement of need' that clearly states what problems the software is designed to solve, who the target audience is, and its relation to other work?
  • State of the field: Do the authors describe how this software compares to other commonly-used packages?
  • Quality of writing: Is the paper well written (i.e., it does not require editing for structure, language, or writing quality)?
  • References: Is the list of references complete, and is everything cited appropriately that should be cited (e.g., papers, datasets, software)? Do references in the text use the proper citation syntax?

@MaierOli2010

Hi all,

thank you for this nice work.
I've just started to look through it and have seen the following issues:

Automated tests: Are there automated tests or manual steps described so that the functionality of the software can be verified?

I did not find any automated testing or instructions on how to perform testing of the software. There is, however, a tests subfolder, so I assume some testing is done in one way or another. Please either state how to perform the testing manually or, ideally, use an automated service to run the tests (such as Jenkins or Travis CI).

Community guidelines: Are there clear guidelines for third parties wishing to 1) Contribute to the software 2) Report issues or problems with the software 3) Seek support

There are no guidelines on how to report problems/bugs or how to contribute. Please include an appropriate document such as from this template: https://gist.github.com/briandk/3d2e8b3ec8daf5a27a62 and adapt it to your needs.

While not mandatory, a code of conduct is also recommended.

Functionality documentation: Is the core functionality of the software documented to a satisfactory level (e.g., API method documentation)?

I've also noticed that not all parameters of the API are documented in each function. For example, always_apply is not documented in all functions at https://dicaugment.readthedocs.io/en/latest/dicaugment.augmentations.html. Please make sure that all parameters of the public API are documented accordingly.
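As an illustration of the level of documentation meant here, a parameter docstring might look like the following sketch (class name, parameters, and defaults are illustrative only, not DICaugment's actual API):

```python
# Illustrative sketch only: a docstring that covers every public parameter,
# including always_apply. Names and defaults are hypothetical.
class Rotate:
    """Rotate a 3D volume about the z-axis.

    Args:
        limit: maximum absolute rotation angle in degrees.
        interpolation: integer interpolation flag (OpenCV-style).
        always_apply: if True, apply the transform regardless of ``p``.
        p: probability of applying the transform.
    """

    def __init__(self, limit=20, interpolation=1, always_apply=False, p=0.5):
        self.limit = limit
        self.interpolation = interpolation
        self.always_apply = always_apply
        self.p = p
```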

State of the field: Do the authors describe how this software compares to other commonly-used packages?

I do miss a broader overview of existing software for data augmentation. Albumentations is not the only package out there for image augmentation. Some examples that I quickly found are TorchIO, MONAI, and DLTK (which also offers some augmentation techniques); MedicalTorch also claims to have some elastic transformations available to augment data. Finally, there is a tutorial from SimpleITK on applying data augmentation to 2D and 3D images, available at https://simpleitk.org/SPIE2019_COURSE/03_data_augmentation.html. Please clearly state how you differentiate from these other tools; in particular, the claim of 3D augmentation seems to be covered at least by SimpleITK.

To this end, I also think that the list of references is not complete as these packages, and the corresponding publications if applicable, are certainly missing.

Contribution and authorship: Has the submitting author (@jjmcintosh) made major contributions to the software? Does the full list of paper authors seem appropriate and complete?

While I can clearly see the contributions of the main author, and of the last author as mentioned in the readme, the others are not to be found in any commit and are also not mentioned in your own readme. Can you clarify the contributions? I do not doubt them, but they are hard to follow when there are no commits and even your own readme does not mention these authors as contributors.

While also not mandatory, a live example of the software using e.g. Google Colab could greatly enhance the user experience. The provided guides seem sufficient to understand the functionality.

Testing the basic example in the readme gives an error:

import dicaugment as dca
# Define the augmentation pipeline
transform = dca.Compose([
    dca.Rotate(p=0.5, limit=20, interpolation=1),
    dca.RandomCrop(p=0.5, size=(64, 64, 64))
])

TypeError: RandomCrop.__init__() got an unexpected keyword argument 'size'

This stems from the fact that the transform expects height, width, and depth in its definition.
Please make sure that all examples work as given in the readme or on RTD.
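The mismatch is easy to reproduce with a minimal stand-in (a hypothetical stub that mirrors the described height/width/depth signature; this is not the actual DICaugment code):

```python
# Hypothetical stub mirroring the described RandomCrop signature, which
# takes height, width, and depth separately rather than a `size` tuple.
class RandomCrop:
    def __init__(self, p=0.5, height=64, width=64, depth=64):
        self.p = p
        self.height, self.width, self.depth = height, width, depth

# The README example fails because `size` is not an accepted keyword:
try:
    RandomCrop(p=0.5, size=(64, 64, 64))
except TypeError as exc:
    error_message = str(exc)  # "... unexpected keyword argument 'size'"

# Passing the dimensions individually, as the signature expects, works:
crop = RandomCrop(p=0.5, height=64, width=64, depth=64)
```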

Overall I think the toolbox does offer unique functionality in the context of CT imaging using the NPS for noise augmentation.

Cheers,
Oliver

@osorensen
Member

Thanks for your review @MaierOli2010! @jjmcintosh, you're welcome to start addressing these issues already now, and report on progress in this thread.

@kuadrat

kuadrat commented Jan 3, 2024

Review checklist for @kuadrat

Conflict of interest

  • I confirm that I have read the JOSS conflict of interest (COI) policy and that: I have no COIs with reviewing this work or that any perceived COIs have been waived by JOSS for the purpose of this review.

Code of Conduct

  • I confirm that I read and will adhere to the JOSS code of conduct.

General checks

  • Repository: Is the source code for this software available at https://github.com/DIDSR/DICaugment?
  • License: Does the repository contain a plain-text LICENSE or COPYING file with the contents of an OSI approved software license?
  • Contribution and authorship: Has the submitting author (@jjmcintosh) made major contributions to the software? Does the full list of paper authors seem appropriate and complete?
  • Substantial scholarly effort: Does this submission meet the scope eligibility described in the JOSS guidelines?
  • Data sharing: If the paper contains original data, data are accessible to the reviewers. If the paper contains no original data, please check this item.
  • Reproducibility: If the paper contains original results, results are entirely reproducible by reviewers. If the paper contains no original results, please check this item.
  • Human and animal research: If the paper contains original data from research on human subjects or animals, does it comply with JOSS's human participants research policy and/or animal research policy? If the paper contains no such data, please check this item.

Functionality

  • Installation: Does installation proceed as outlined in the documentation?
  • Functionality: Have the functional claims of the software been confirmed?
  • Performance: If there are any performance claims of the software, have they been confirmed? (If there are no claims, please check off this item.)

Documentation

  • A statement of need: Do the authors clearly state what problems the software is designed to solve and who the target audience is?
  • Installation instructions: Is there a clearly-stated list of dependencies? Ideally these should be handled with an automated package management solution.
  • Example usage: Do the authors include examples of how to use the software (ideally to solve real-world analysis problems)?
  • Functionality documentation: Is the core functionality of the software documented to a satisfactory level (e.g., API method documentation)?
  • Automated tests: Are there automated tests or manual steps described so that the functionality of the software can be verified?
  • Community guidelines: Are there clear guidelines for third parties wishing to 1) Contribute to the software 2) Report issues or problems with the software 3) Seek support

Software paper

  • Summary: Has a clear description of the high-level functionality and purpose of the software for a diverse, non-specialist audience been provided?
  • A statement of need: Does the paper have a section titled 'Statement of need' that clearly states what problems the software is designed to solve, who the target audience is, and its relation to other work?
  • State of the field: Do the authors describe how this software compares to other commonly-used packages?
  • Quality of writing: Is the paper well written (i.e., it does not require editing for structure, language, or writing quality)?
  • References: Is the list of references complete, and is everything cited appropriately that should be cited (e.g., papers, datasets, software)? Do references in the text use the proper citation syntax?

@kuadrat

kuadrat commented Jan 4, 2024

Congratulations to this great work, @jjmcintosh!

I believe that this project and paper can certainly be published in JOSS. However, there are a few points that first need to be addressed in order for me to be able to check all the boxes.

Critical points

  • Contribution and authorship @jjmcintosh's contributions are evident from the commit logs. Mehdi Farhangi is listed as an author in the README, though it is not evident from the repository or the paper how these authors contributed. The same goes for the other authors listed in the paper. Please specify this, ideally in the README or a CONTRIBUTORS file, and in the paper.
  • Automated tests Running the tests after cloning the repository gave 206 failed, 4611 passed. Ensure that all tests pass in the version that is to be published. Also, the source code of the tests contains large numbers of commented out blocks of code, which indicates that some work on that front might still be required.
  • Documentation The tutorial part of the docs is clearly written, easy to follow and enhanced by well-made, relevant images. However, the API documentation contains a lot of undocumented functions and class methods. Also, important methods such as the Composition class's __call__ method should be documented, and it should be clear from the API doc alone - without the tutorial - that this is the intended way to make use of the class (if my understanding is correct here).
    I can see that most Transform classes have methods with identical names, such as apply, which always serve the same purpose. Maybe this Stack Overflow answer provides some possibilities for re-using documentation.
  • State of the field I agree with the point brought up by @MaierOli2010 regarding the need to specify how DICaugment distinguishes itself from other 3D image augmentation software. I also found an additional package that seems to be quite similar: volumentations.
  • Typos
    • this title is misspelled
    • there's probably a newline missing in the docstring to BBoxSafeRandomCrop
    • paper line 65: the units "mm" need not be italic
    • paper, caption of Fig. 1, second to last line: Siemens is misspelled
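The docstring-reuse idea mentioned under the Documentation point could be sketched with a small decorator (class and method names here are illustrative, not DICaugment's actual hierarchy):

```python
# Sketch of docstring re-use via a tiny decorator; names are illustrative.
def copy_doc(source):
    """Copy `source`'s docstring onto the decorated function."""
    def decorator(func):
        func.__doc__ = source.__doc__
        return func
    return decorator

class BaseTransform:
    def apply(self, volume, **params):
        """Apply this transform to a single 3D volume and return the result."""
        return volume

class Blur(BaseTransform):
    @copy_doc(BaseTransform.apply)
    def apply(self, volume, **params):
        return volume  # the real blurring logic would go here
```

With this, help(Blur.apply) shows the shared description without it being written out again in every subclass.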

Additional points

The following points are not considered to be blocking publication in JOSS. They should be seen as suggestions to further improve the software package and facilitate future development and maintenance.

  • Code coverage In order to assess whether the designed tests actually do a good job at testing the functionality of the code, I recommend the use of a test-coverage estimator, such as codecov
  • Tests I appreciate the fact that a large number of test cases for many different parameter combinations is considered. However, this results in a rather calculation-intensive test suite. In development, it can be helpful to run tests often to immediately notice any problems that have been introduced. It might therefore be worth setting up a separate, less thorough set of tests that can be run regularly during development, e.g. on each commit or push. The complete list of tests could then be run on a less regular basis, e.g. when merging to main or before publishing a new version.
  • Provide example file In order to follow along with some of the tutorials (e.g. mask augmentation), the user is expected to have certain files (like a CT scan file with a DICOM header). While this can reasonably be expected from the target audience, it would still be helpful to provide users with an example file for which you ensure that the tutorial example works. Otherwise, in case of trouble, users will have to figure out if the problem lies in their file or their installation.
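The fast/thorough split suggested under Tests could look roughly like this dependency-free sketch, using an environment variable as the gate (with pytest, a custom marker such as @pytest.mark.slow selected via `-m "not slow"` would be the more idiomatic route; all names here are illustrative):

```python
import os
import unittest

# Gate the expensive cases behind an environment variable;
# the quick checks always run.
RUN_SLOW = os.environ.get("RUN_SLOW_TESTS") == "1"

class QuickChecks(unittest.TestCase):
    """Cheap smoke tests, suitable for every commit or push."""

    def test_smoke(self):
        self.assertEqual(2 + 2, 4)

@unittest.skipUnless(RUN_SLOW, "set RUN_SLOW_TESTS=1 to run the full grid")
class ExhaustiveChecks(unittest.TestCase):
    """The calculation-intensive parameter grid, for merges and releases."""

    def test_parameter_grid(self):
        for value in range(1000):
            self.assertGreaterEqual(value, 0)
```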

@jjmcintosh

@kuadrat Thanks! I will get these points addressed soon.

Automated tests Running the tests after cloning the repository gave 206 failed, 4611 passed. Ensure that all tests pass in the version that is to be published. Also, the source code of the tests contains large numbers of commented out blocks of code, which indicates that some work on that front might still be required.

I have a follow-up question here. What system are you using and what is the version of Python? I have recently added a GitHub Action for automated testing and there aren't any failed tests there so I'd like to get to the bottom of where there might be a discrepancy in the outcomes of the tests.

@kuadrat

kuadrat commented Jan 8, 2024


@jjmcintosh Sure, I've opened DIDSR/DICaugment#5 to keep the discussion on that front contained. I'm happy to try out different python versions, etc. if that can help with getting to the core of the issue.

@jjmcintosh

@MaierOli2010 @kuadrat I have responded to your comments below and have made the necessary updates. Please let me know if anything is not satisfactory!

I did not find any automated testing nor instructions on how to perform testing of the software. There is, however, a tests subfolder so I assume some testing is done in one way or another. Please either state how to perform the testing manually or, ideally, use some automated software to perform testing (like Jenkins or Travis).

I have added automated testing to this repo using GitHub Actions. You should be able to manually trigger the testing build in the actions tab in the repo by selecting the automated testing workflow and clicking on Run workflow

There are no guidelines on how to report problems/bugs or how to contribute. Please include an appropriate document such as from this template: https://gist.github.com/briandk/3d2e8b3ec8daf5a27a62 and adapt it to your needs.

A CONTRIBUTING file has been added using that template along with issue templates.

I've also noticed that not all parameters of the API are documented in each function. E.g. always_apply is not documented in all functions at https://dicaugment.readthedocs.io/en/latest/dicaugment.augmentations.html. Please make sure that all parameters of the public API are documented accordingly.

I have updated the documentation to address this.

I do miss a broader overview of existing software for data augmentation. Albumentations is not the only package out there to do image augmentation. Some examples that I quickly found are TorchIO, MONAI, DLTK (also offers some augmentation techniques), MedicalTorch also claims to have some elastic transformations available to augment data. Finally, there is a tutorial from SimpleITK to apply data augmentation for 2D and 3D images available at https://simpleitk.org/SPIE2019_COURSE/03_data_augmentation.html. Please clearly state how you differentiate to these other tools, especially the claim of 3D augmentation seems to be covered at least by SimpleITK.
To this end, I also think that the list of references is not complete as these packages, and the corresponding publications if applicable, are certainly missing.

The paper has been updated with an overview of existing software for data augmentation and how DICaugment differs, along with an updated set of references.

While I can clearly see the contribution of the main author and also the last author as mentioned in the readme, the others are not to be found in any commit and are also not mentioned in your own readme. Can you maybe clarify the contributions? I do not doubt the contributions but it is hard to follow if there are no commits and even your own readme does not mention them as contributors.

Author contributions have been added in both the README and the paper.

Testing the basic example in the readme gives an error:

Fixed. Thanks!


Contribution and authorship @jjmcintosh's contributions are evident from the commit logs. Mehdi Farhangi is listed as an author in the README, though it is not evident from the repository nor paper, in which way both authors contributed. The same goes for the other authors listed in the paper. Please specify this, ideally in the README or a CONTRIBUTORS file and in the paper.

Author contributions have been added in both the README and the paper.

Automated tests Running the tests after cloning the repository gave 206 failed, 4611 passed. Ensure that all tests pass in the version that is to be published. Also, the source code of the tests contains large numbers of commented out blocks of code, which indicates that some work on that front might still be required.

Thank you for opening a GitHub issue. I have addressed this issue. Also, the commented-out code blocks have been removed as they are irrelevant to this project.

Documentation The tutorial part of the docs is clearly written, easy to follow and enhanced by well-made, relevant images. However, the API documentation contains a lot of undocumented functions and class methods. Also, important methods such as the Composition class's __call__ method should be documented, and it should be clear from the API doc alone - without the tutorial - that this is the intended way to make use of the class (if my understanding is correct here).
I can see that most Transform classes have methods with identical names, such as apply, which always serve the same purpose. Maybe this Stack Overflow answer provides some possibilities for re-using documentation.

I have updated the missing docstrings. Each method in the documentation should now have some text attached to it. I steered away from documenting the arguments of the common methods (e.g., apply) so as to discourage users from invoking those methods directly and to encourage use of the Compose pipeline instead. Please let me know what you think of this decision.

State of the field I agree with the point brought up by @MaierOli2010 regarding the need to specify how DICaugment distinguishes itself from other 3D image augmentation software. I also found an additional package that seems to be quite similar: volumentations.

This has been updated in the paper!

Typos
this title is misspelled
there's probably a newline missing in the docstring to BBoxSafeRandomCrop
paper line 65: the units "mm" need not be italic
paper, caption of Fig. 1, second to last line: Siemens is misspelled

Fixed!

Code coverage In order to assess whether the designed tests actually do a good job at testing the functionality of the code, I recommend the use of a test-coverage estimator, such as codecov

The GitHub Actions testing pipeline now includes coverage!


To-Do For Me

All of the non-required suggestions. I wanted to get the prerequisites out of the way before working on any optionals.

@MaierOli2010

@editorialbot generate pdf

@editorialbot
Collaborator Author

👉📄 Download article proof 📄 View article proof on GitHub 📄 👈

@kuadrat

kuadrat commented Feb 5, 2024

Thanks @jjmcintosh for carefully addressing the points made.
For me, all the publication-critical issues are thus resolved and I would give the "go" for publication.

I would still warmly encourage you to consider the additional points I made above, as well as the below suggestions.

Non-critical

  • Code coverage: Great to see that code coverage is now automatically evaluated in a GitHub workflow! Now that you have that, it would make sense to share the gist of the coverage report in some form in your repo. A common way to do this is by automatically sending the coverage report to codecov and/or generating a badge for your README that indicates the coverage results (see e.g. here; note also the second answer). This helps users and other developers spot at a glance that you have a well-working testing routine in place.

  • Supported python version: DICaugment supports Python versions 3.8.0 and higher, which is explicitly stated in the installation instructions, reflected in your setup.py file and visible on pypi.org. Unfortunately, this doesn't prevent users from "pip installing" DICaugment with unsupported Python versions. To be even safer and avoid frustration on the user side, you could add something like:

    import sys
    
    if sys.version_info < (3, 8):
        raise Exception("DICaugment requires Python 3.8 or above.")
    

    to the __init__.py file and state the python version requirements in the README as well (potentially in the form of badges). I know this may feel quite redundant at this point, but I believe that it is not too much work and it would be well invested. In my experience, if a user encounters difficulties when downloading and installing software, chances are high that they are discouraged and driven away from ever using the package.

  • Documentation: It still seems to me that Transformation.apply() (and similar methods) is not the intended way for the package to be used. If this is correct, maybe the dilemma of unnecessarily documenting the same function over and over could be avoided by making them private methods?
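The private-method suggestion might be realized along these lines (illustrative names only, not the actual DICaugment classes):

```python
# Illustrative sketch: one documented public entry point on the base class,
# with the per-transform hook made private.
class BasicTransform:
    def __call__(self, volume):
        """Apply this transform to a 3D volume and return the result."""
        return self._apply(volume)

    def _apply(self, volume):
        # The leading underscore signals that users should call the transform
        # (or a Compose pipeline) rather than this hook directly.
        raise NotImplementedError

class Identity(BasicTransform):
    def _apply(self, volume):
        return volume
```

API generators such as Sphinx autodoc skip underscore-prefixed members by default, so the repeated apply entries would also disappear from the rendered reference automatically.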

@MaierOli2010

Thank you @jjmcintosh also from my side. No critical issues remain from my point of view and I also consider it a "go" for publication.

@osorensen
Member

Thanks to all of you for the fast work, and excuse my delay here.

@jjmcintosh, I'll now read through the manuscript and let you know if I have any suggested changes.

@osorensen
Member

@editorialbot generate pdf

@osorensen
Member

@editorialbot check references

@editorialbot
Collaborator Author

Reference check summary (note 'MISSING' DOIs are suggestions that need verification):

OK DOIs

- 10.3390/info11020125 is OK
- 10.1016/j.compbiomed.2021.105089 is OK
- 10.1007/s10278-017-0037-8 is OK
- 10.1016/j.cmpb.2021.106236 is OK

MISSING DOIs

- 10.1118/1.4752209 may be a valid DOI for title: Quantitative comparison of noise texture across CT scanners from different manufacturers

INVALID DOIs

- None

@jjmcintosh

@osorensen See PR for adding that DOI. Just waiting on approval.

@jjmcintosh

Thanks @jjmcintosh for carefully addressing the points made. For me, all the publication-critical issues are thus resolved and I would give the "go" for publication.

I would still warmly encourage you to consider the additional points I made above, as well as the below suggestions.

Non-critical

  • Code coverage: Great to see that code coverage is now automatically evaluated in a github workflow! Now that you have that, it would make sense to share the gist of the coverage report in some form in your repo. A common way to do this is by automatically sending the coverage report to codecov and/or to generate a badge for your README that indicates the coverage results (see e.g. here, note also the second answer). This helps users and other developers spot at a glance that you have a well-working testing routine in place.

  • Supported python version: DICaugment supports python versions 3.8.0 and higher, which is explicitly stated in the installation instructions, reflected in your setup.py file and visible on www.pipy.org. Unfortunately, this doesn't prevent users from "pip installing" DICaugment with unsupported python versions. To be even safer and avoid frustration on the user side, you could add something like:

    import sys
    
    if sys.version_info < (3, 8):
        raise Exception("DICaugment requires Python 3.8 or above.")
    

    to the __init__.py file and state the python version requirements in the README as well (potentially in the form of badges). I know this may feel quite redundant at this point, but it is not much work and would be well invested. In my experience, if a user encounters difficulties when downloading and installing software, chances are high that they will be discouraged and driven away from ever using the package.

  • Documentation: It still seems to me that Transformation.apply() (and similar functions) is not the intended way for the package to be used. If this is correct, maybe the dilemma of unnecessarily documenting the same function over again could be avoided by making them private methods?
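As an aside on the version-guard suggestion above: wrapping the check in a small helper makes it unit-testable. A minimal sketch (the `MIN_PYTHON` constant and `check_python_version` name are illustrative, not part of DICaugment's actual API):

```python
import sys

# Minimum interpreter version DICaugment declares (per its setup.py).
MIN_PYTHON = (3, 8)


def check_python_version(version_info=None):
    """Raise RuntimeError if the interpreter is older than MIN_PYTHON.

    Accepts an explicit version tuple so the check can be tested
    without actually running an old interpreter.
    """
    vi = sys.version_info if version_info is None else version_info
    if tuple(vi[:2]) < MIN_PYTHON:
        raise RuntimeError(
            "DICaugment requires Python %d.%d or above." % MIN_PYTHON
        )


# The pip-level equivalent is setup(..., python_requires=">=3.8")
# in setup.py, which makes modern pip refuse the install outright.
```

Calling `check_python_version()` at the top of `__init__.py` then produces a clear error message even in environments where an old pip ignores `python_requires`.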

@jjmcintosh
Copy link

@kuadrat Working on your suggestions this weekend! The PyPi issue should be resolved now as I have deleted version 1.0.0. You should be able to confirm this by installing in a fresh environment and using the --no-cache-dir flag

@kuadrat
Copy link

kuadrat commented Feb 19, 2024

@kuadrat Working on your suggestions this weekend! The PyPi issue should be resolved now as I have deleted version 1.0.0. You should be able to confirm this by installing in a fresh environment and using the --no-cache-dir flag

I see, so an older version with different dependencies was causing this. I confirm that pip install for python 3.7.5 now throws an error, stating that python>=3.8 is required.

@osorensen
Copy link
Member

@editorialbot check references

@editorialbot
Copy link
Collaborator Author

Reference check summary (note 'MISSING' DOIs are suggestions that need verification):

OK DOIs

- 10.3390/info11020125 is OK
- 10.1016/j.compbiomed.2021.105089 is OK
- 10.1118/1.4752209 is OK
- 10.1007/s10278-017-0037-8 is OK
- 10.1016/j.cmpb.2021.106236 is OK

MISSING DOIs

- None

INVALID DOIs

- None

@osorensen
Copy link
Member

osorensen commented Feb 20, 2024

Post-Review Checklist for Editor and Authors

Additional Author Tasks After Review is Complete

  • Double check authors and affiliations (including ORCIDs)
  • Make a release of the software with the latest changes from the review and post the version number here. This is the version that will be used in the JOSS paper.
  • Archive the release on Zenodo/figshare/etc and post the DOI here.
  • Make sure that the title and author list (including ORCIDs) in the archive match those in the JOSS paper.
  • Make sure that the license listed for the archive is the same as the software license.

Editor Tasks Prior to Acceptance

  • Read the text of the paper and offer comments/corrections (as either a list or a pull request)
  • Check that the archive title, author list, version tag, and the license are correct
  • Set archive DOI with @editorialbot set <DOI here> as archive
  • Set version with @editorialbot set <version here> as version
  • Double check rendering of paper with @editorialbot generate pdf
  • Specifically check the references with @editorialbot check references and ask author(s) to update as needed
  • Recommend acceptance with @editorialbot recommend-accept

@osorensen
Copy link
Member

osorensen commented Feb 20, 2024

@jjmcintosh, could you now please complete the following tasks and report here when done?

  • Double check authors and affiliations (including ORCIDs)
  • Make a release of the software with the latest changes from the review and post the version number here. This is the version that will be used in the JOSS paper.
  • Archive the release on Zenodo/figshare/etc and post the DOI here.
  • Make sure that the title and author list (including ORCIDs) in the archive match those in the JOSS paper.
  • Make sure that the license listed for the archive is the same as the software license.

@jjmcintosh
Copy link

Hi all, still waiting on approvals from the GitHub Organization owners for the third-party OAuth apps (codecov and Zenodo). After those are approved, I will make a new release with those changes and post the version number and DOI here.

  • Double check authors and affiliations (including ORCIDs)
  • Make a release of the software with the latest changes from the review and post the version number here. This is the version that will be used in the JOSS paper.
  • Archive the release on Zenodo/figshare/etc and post the DOI here.
  • Make sure that the title and author list (including ORCIDs) in the archive match those in the JOSS paper.
  • Make sure that the license listed for the archive is the same as the software license.

@jjmcintosh
Copy link

jjmcintosh commented Mar 1, 2024

  • Double check authors and affiliations (including ORCIDs)
  • Make a release of the software with the latest changes from the review and post the version number here. This is the version that will be used in the JOSS paper.
  • Archive the release on Zenodo/figshare/etc and post the DOI here.
  • Make sure that the title and author list (including ORCIDs) in the archive match those in the JOSS paper.
  • Make sure that the license listed for the archive is the same as the software license.

Version Number: v1.0.7
DOI: 10.5281/zenodo.10738855

@osorensen
Copy link
Member

Thanks @jjmcintosh. Can you please change the title of the Zenodo archive so it exactly matches that of the paper?

@jjmcintosh
Copy link

@osorensen Updated! I misinterpreted the instructions and did not realize that the Zenodo archive had metadata attached to it that needed to be updated.

@osorensen
Copy link
Member

@editorialbot set 10.5281/zenodo.10738855 as archive

@editorialbot
Copy link
Collaborator Author

Done! archive is now 10.5281/zenodo.10738855

@osorensen
Copy link
Member

@editorialbot set v1.0.7 as version

@editorialbot
Copy link
Collaborator Author

Done! version is now v1.0.7

@osorensen
Copy link
Member

@editorialbot recommend-accept

@editorialbot
Copy link
Collaborator Author

Attempting dry run of processing paper acceptance...

@editorialbot
Copy link
Collaborator Author

Reference check summary (note 'MISSING' DOIs are suggestions that need verification):

OK DOIs

- 10.3390/info11020125 is OK
- 10.1016/j.compbiomed.2021.105089 is OK
- 10.1118/1.4752209 is OK
- 10.1007/s10278-017-0037-8 is OK
- 10.1016/j.cmpb.2021.106236 is OK

MISSING DOIs

- None

INVALID DOIs

- None

@editorialbot
Copy link
Collaborator Author

👋 @openjournals/dsais-eics, this paper is ready to be accepted and published.

Check final proof 👉📄 Download article

If the paper PDF and the deposit XML files look good in openjournals/joss-papers#5084, then you can now move forward with accepting the submission by compiling again with the command @editorialbot accept

@editorialbot editorialbot added the recommend-accept Papers recommended for acceptance in JOSS. label Mar 2, 2024
@arfon
Copy link
Member

arfon commented Mar 3, 2024

@editorialbot accept

@editorialbot
Copy link
Collaborator Author

Doing it live! Attempting automated processing of paper acceptance...

@editorialbot
Copy link
Collaborator Author

Ensure proper citation by uploading a plain text CITATION.cff file to the default branch of your repository.

If using GitHub, a Cite this repository menu will appear in the About section, containing both APA and BibTeX formats. When exported to Zotero using a browser plugin, Zotero will automatically create an entry using the information contained in the .cff file.

You can copy the contents for your CITATION.cff file here:

CITATION.cff

cff-version: "1.2.0"
authors:
- family-names: McIntosh
  given-names: J.
  orcid: "https://orcid.org/0009-0008-2573-180X"
- family-names: Cao
  given-names: Qian
- family-names: Sahiner
  given-names: Berkman
- family-names: Petrick
  given-names: Nicholas
- family-names: Farhangi
  given-names: M. Mehdi
contact:
- family-names: McIntosh
  given-names: J.
  orcid: "https://orcid.org/0009-0008-2573-180X"
doi: 10.5281/zenodo.10738855
message: If you use this software, please cite our article in the
  Journal of Open Source Software.
preferred-citation:
  authors:
  - family-names: McIntosh
    given-names: J.
    orcid: "https://orcid.org/0009-0008-2573-180X"
  - family-names: Cao
    given-names: Qian
  - family-names: Sahiner
    given-names: Berkman
  - family-names: Petrick
    given-names: Nicholas
  - family-names: Farhangi
    given-names: M. Mehdi
  date-published: 2024-03-03
  doi: 10.21105/joss.06120
  issn: 2475-9066
  issue: 95
  journal: Journal of Open Source Software
  publisher:
    name: Open Journals
  start: 6120
  title: "DICaugment: A Python Package for 3D Medical Imaging
    Augmentation"
  type: article
  url: "https://joss.theoj.org/papers/10.21105/joss.06120"
  volume: 9
title: "DICaugment: A Python Package for 3D Medical Imaging
  Augmentation"

If the repository is not hosted on GitHub, a .cff file can still be uploaded to set your preferred citation. Users will be able to manually copy and paste the citation.

Find more information on .cff files here and here.

@editorialbot
Copy link
Collaborator Author

🐘🐘🐘 👉 Toot for this paper 👈 🐘🐘🐘

@editorialbot
Copy link
Collaborator Author

🚨🚨🚨 THIS IS NOT A DRILL, YOU HAVE JUST ACCEPTED A PAPER INTO JOSS! 🚨🚨🚨

Here's what you must now do:

  1. Check final PDF and Crossref metadata that was deposited 👉 Creating pull request for 10.21105.joss.06120 joss-papers#5085
  2. Wait five minutes, then verify that the paper DOI resolves https://doi.org/10.21105/joss.06120
  3. If everything looks good, then close this review issue.
  4. Party like you just published a paper! 🎉🌈🦄💃👻🤘

Any issues? Notify your editorial technical team...

@editorialbot editorialbot added accepted published Papers published in JOSS labels Mar 3, 2024
@arfon
Copy link
Member

arfon commented Mar 9, 2024

@MaierOli2010, @kuadrat – many thanks for your reviews here and to @osorensen for editing this submission! JOSS relies upon the volunteer effort of people like you and we simply wouldn't be able to do this without you ✨

@jjmcintosh – your paper is now accepted and published in JOSS ⚡🚀💥

@arfon arfon closed this as completed Mar 9, 2024
@editorialbot
Copy link
Collaborator Author

🎉🎉🎉 Congratulations on your paper acceptance! 🎉🎉🎉

If you would like to include a link to your paper from your README use the following code snippets:

Markdown:
[![DOI](https://joss.theoj.org/papers/10.21105/joss.06120/status.svg)](https://doi.org/10.21105/joss.06120)

HTML:
<a style="border-width:0" href="https://doi.org/10.21105/joss.06120">
  <img src="https://joss.theoj.org/papers/10.21105/joss.06120/status.svg" alt="DOI badge" >
</a>

reStructuredText:
.. image:: https://joss.theoj.org/papers/10.21105/joss.06120/status.svg
   :target: https://doi.org/10.21105/joss.06120

This is how it will look in your documentation:

[DOI badge]

We need your help!

The Journal of Open Source Software is a community-run journal and relies upon volunteer effort. If you'd like to support us, please consider doing either one (or both) of the following:
