
[REVIEW]: CRATE: A Python package to perform fast material simulations #5594

Closed
editorialbot opened this issue Jun 26, 2023 · 83 comments
Labels: accepted, published, Python, recommend-accept, review, Track: 2 (BCM) Biomedical Engineering, Biosciences, Chemistry, and Materials

Comments

@editorialbot
Collaborator

editorialbot commented Jun 26, 2023

Submitting author: @BernardoFerreira (Bernardo P. Ferreira)
Repository: https://github.com/bessagroup/CRATE
Branch with paper.md (empty if default branch): master
Version: v1.0.3
Editor: @Kevin-Mattheus-Moerman
Reviewers: @RahulSundar, @atzberg, @Extraweich, @kingyin3613
Archive: 10.5281/zenodo.8199879

Status


Status badge code:

HTML: <a href="https://joss.theoj.org/papers/4d97bd964987a3011d9cdd4e6e1e6389"><img src="https://joss.theoj.org/papers/4d97bd964987a3011d9cdd4e6e1e6389/status.svg"></a>
Markdown: [![status](https://joss.theoj.org/papers/4d97bd964987a3011d9cdd4e6e1e6389/status.svg)](https://joss.theoj.org/papers/4d97bd964987a3011d9cdd4e6e1e6389)

Reviewers and authors:

Please avoid lengthy details of difficulties in the review thread. Instead, please create a new issue in the target repository and link to those issues (especially acceptance-blockers) by leaving comments in the review thread below. (For completists: if the target issue tracker is also on GitHub, linking the review thread in the issue or vice versa will create corresponding breadcrumb trails in the link target.)

Reviewer instructions & questions

@RahulSundar & @atzberg & @Extraweich & @kingyin3613, your review will be checklist-based. Each of you will have a separate checklist that you should update when carrying out your review.
First of all, you need to run this command in a separate comment to create the checklist:

@editorialbot generate my checklist

The reviewer guidelines are available here: https://joss.readthedocs.io/en/latest/reviewer_guidelines.html. Any questions/concerns please let @Kevin-Mattheus-Moerman know.

Please start on your review when you are able, and be sure to complete your review in the next six weeks at the very latest.

Checklists

📝 Checklist for @RahulSundar

📝 Checklist for @atzberg

📝 Checklist for @Extraweich

📝 Checklist for @kingyin3613

editorialbot added the Python, review, and Track: 2 (BCM) Biomedical Engineering, Biosciences, Chemistry, and Materials labels on Jun 26, 2023
@editorialbot
Collaborator Author

Hello humans, I'm @editorialbot, a robot that can help you with some common editorial tasks.

For a list of things I can do to help you, just type:

@editorialbot commands

For example, to regenerate the paper pdf after making changes in the paper's md or bib files, type:

@editorialbot generate pdf

@editorialbot
Collaborator Author

Software report:

github.com/AlDanial/cloc v 1.88  T=0.41 s (935.5 files/s, 190431.7 lines/s)
-------------------------------------------------------------------------------
Language                     files          blank        comment           code
-------------------------------------------------------------------------------
SVG                              6              8              6          41535
Python                          52           1704          16088          11024
reStructuredText               316           3457           2895           1053
TeX                              1              8              0            101
Markdown                         2             42              0             81
YAML                             2              8              4             52
DOS Batch                        1              8              1             26
make                             1              4              7              9
CSS                              2              7             27              6
TOML                             1              0              0              3
-------------------------------------------------------------------------------
SUM:                           384           5246          19028          53890
-------------------------------------------------------------------------------


gitinspector failed to run statistical information for the repository

@editorialbot
Collaborator Author

Wordcount for paper.md is 631

@editorialbot
Collaborator Author

Reference check summary (note 'MISSING' DOIs are suggestions that need verification):

OK DOIs

- 10.1016/j.cma.2016.04.004 is OK
- 10.1016/j.cma.2017.03.037 is OK
- 10.1016/j.cma.2022.114726 is OK
- 10.1038/s41586-020-2649-2 is OK
- 10.1038/s41592-019-0686-2 is OK

MISSING DOIs

- None

INVALID DOIs

- None

@editorialbot
Collaborator Author

👉📄 Download article proof 📄 View article proof on GitHub 📄 👈

@Extraweich

Extraweich commented Jun 26, 2023

Review checklist for @Extraweich

Conflict of interest

  • I confirm that I have read the JOSS conflict of interest (COI) policy and that: I have no COIs with reviewing this work or that any perceived COIs have been waived by JOSS for the purpose of this review.

Code of Conduct

  • I confirm that I read and will adhere to the JOSS code of conduct.

General checks

  • Repository: Is the source code for this software available at https://github.com/bessagroup/CRATE?
  • License: Does the repository contain a plain-text LICENSE file with the contents of an OSI-approved software license?
  • Contribution and authorship: Has the submitting author (@BernardoFerreira) made major contributions to the software? Does the full list of paper authors seem appropriate and complete?
  • Substantial scholarly effort: Does this submission meet the scope eligibility described in the JOSS guidelines?
  • Data sharing: If the paper contains original data, data are accessible to the reviewers. If the paper contains no original data, please check this item.
  • Reproducibility: If the paper contains original results, results are entirely reproducible by reviewers. If the paper contains no original results, please check this item.
  • Human and animal research: If the paper contains original research on human subjects or animals, does it comply with JOSS's human participants research policy and/or animal research policy? If the paper contains no such data, please check this item.

Functionality

  • Installation: Does installation proceed as outlined in the documentation?
  • Functionality: Have the functional claims of the software been confirmed?
  • Performance: If there are any performance claims of the software, have they been confirmed? (If there are no claims, please check off this item.)

Documentation

  • A statement of need: Do the authors clearly state what problems the software is designed to solve and who the target audience is?
  • Installation instructions: Is there a clearly stated list of dependencies? Ideally these should be handled with an automated package management solution.
  • Example usage: Do the authors include examples of how to use the software (ideally to solve real-world analysis problems)?
  • Functionality documentation: Is the core functionality of the software documented to a satisfactory level (e.g., API method documentation)?
  • Automated tests: Are there automated tests or manual steps described so that the functionality of the software can be verified?
  • Community guidelines: Are there clear guidelines for third parties wishing to 1) contribute to the software, 2) report issues or problems with the software, and 3) seek support?

Software paper

  • Summary: Has a clear description of the high-level functionality and purpose of the software for a diverse, non-specialist audience been provided?
  • A statement of need: Does the paper have a section titled 'Statement of need' that clearly states what problems the software is designed to solve, who the target audience is, and its relation to other work?
  • State of the field: Do the authors describe how this software compares to other commonly-used packages?
  • Quality of writing: Is the paper well written (i.e., it does not require editing for structure, language, or writing quality)?
  • References: Is the list of references complete, and is everything cited appropriately that should be cited (e.g., papers, datasets, software)? Do references in the text use the proper citation syntax?

@RahulSundar

RahulSundar commented Jun 26, 2023

Review checklist for @RahulSundar

Conflict of interest

  • I confirm that I have read the JOSS conflict of interest (COI) policy and that: I have no COIs with reviewing this work or that any perceived COIs have been waived by JOSS for the purpose of this review.

Code of Conduct

  • I confirm that I read and will adhere to the JOSS code of conduct.

General checks

  • Repository: Is the source code for this software available at https://github.com/bessagroup/CRATE?
  • License: Does the repository contain a plain-text LICENSE file with the contents of an OSI-approved software license?
  • Contribution and authorship: Has the submitting author (@BernardoFerreira) made major contributions to the software? Does the full list of paper authors seem appropriate and complete?
  • Substantial scholarly effort: Does this submission meet the scope eligibility described in the JOSS guidelines?
  • Data sharing: If the paper contains original data, data are accessible to the reviewers. If the paper contains no original data, please check this item.
  • Reproducibility: If the paper contains original results, results are entirely reproducible by reviewers. If the paper contains no original results, please check this item.
  • Human and animal research: If the paper contains original research on human subjects or animals, does it comply with JOSS's human participants research policy and/or animal research policy? If the paper contains no such data, please check this item.

Functionality

  • Installation: Does installation proceed as outlined in the documentation?
  • Functionality: Have the functional claims of the software been confirmed?
  • Performance: If there are any performance claims of the software, have they been confirmed? (If there are no claims, please check off this item.)

Documentation

  • A statement of need: Do the authors clearly state what problems the software is designed to solve and who the target audience is?
  • Installation instructions: Is there a clearly stated list of dependencies? Ideally these should be handled with an automated package management solution.
  • Example usage: Do the authors include examples of how to use the software (ideally to solve real-world analysis problems)?
  • Functionality documentation: Is the core functionality of the software documented to a satisfactory level (e.g., API method documentation)?
  • Automated tests: Are there automated tests or manual steps described so that the functionality of the software can be verified?
  • Community guidelines: Are there clear guidelines for third parties wishing to 1) contribute to the software, 2) report issues or problems with the software, and 3) seek support?

Software paper

  • Summary: Has a clear description of the high-level functionality and purpose of the software for a diverse, non-specialist audience been provided?
  • A statement of need: Does the paper have a section titled 'Statement of need' that clearly states what problems the software is designed to solve, who the target audience is, and its relation to other work?
  • State of the field: Do the authors describe how this software compares to other commonly-used packages?
  • Quality of writing: Is the paper well written (i.e., it does not require editing for structure, language, or writing quality)?
  • References: Is the list of references complete, and is everything cited appropriately that should be cited (e.g., papers, datasets, software)? Do references in the text use the proper citation syntax?

@kingyin3613

kingyin3613 commented Jun 27, 2023

Review checklist for @kingyin3613

Conflict of interest

  • I confirm that I have read the JOSS conflict of interest (COI) policy and that: I have no COIs with reviewing this work or that any perceived COIs have been waived by JOSS for the purpose of this review.

Code of Conduct

  • I confirm that I read and will adhere to the JOSS code of conduct.

General checks

  • Repository: Is the source code for this software available at https://github.com/bessagroup/CRATE?
  • License: Does the repository contain a plain-text LICENSE file with the contents of an OSI-approved software license?
  • Contribution and authorship: Has the submitting author (@BernardoFerreira) made major contributions to the software? Does the full list of paper authors seem appropriate and complete?
  • Substantial scholarly effort: Does this submission meet the scope eligibility described in the JOSS guidelines?
  • Data sharing: If the paper contains original data, data are accessible to the reviewers. If the paper contains no original data, please check this item.
  • Reproducibility: If the paper contains original results, results are entirely reproducible by reviewers. If the paper contains no original results, please check this item.
  • Human and animal research: If the paper contains original research on human subjects or animals, does it comply with JOSS's human participants research policy and/or animal research policy? If the paper contains no such data, please check this item.

Functionality

  • Installation: Does installation proceed as outlined in the documentation?
  • Functionality: Have the functional claims of the software been confirmed?
  • Performance: If there are any performance claims of the software, have they been confirmed? (If there are no claims, please check off this item.)

Documentation

  • A statement of need: Do the authors clearly state what problems the software is designed to solve and who the target audience is?
  • Installation instructions: Is there a clearly stated list of dependencies? Ideally these should be handled with an automated package management solution.
  • Example usage: Do the authors include examples of how to use the software (ideally to solve real-world analysis problems)?
  • Functionality documentation: Is the core functionality of the software documented to a satisfactory level (e.g., API method documentation)?
  • Automated tests: Are there automated tests or manual steps described so that the functionality of the software can be verified?
  • Community guidelines: Are there clear guidelines for third parties wishing to 1) contribute to the software, 2) report issues or problems with the software, and 3) seek support?

Software paper

  • Summary: Has a clear description of the high-level functionality and purpose of the software for a diverse, non-specialist audience been provided?
  • A statement of need: Does the paper have a section titled 'Statement of need' that clearly states what problems the software is designed to solve, who the target audience is, and its relation to other work?
  • State of the field: Do the authors describe how this software compares to other commonly-used packages?
  • Quality of writing: Is the paper well written (i.e., it does not require editing for structure, language, or writing quality)?
  • References: Is the list of references complete, and is everything cited appropriately that should be cited (e.g., papers, datasets, software)? Do references in the text use the proper citation syntax?

@Extraweich

Extraweich commented Jun 27, 2023

Hi @BernardoFerreira, very interesting project and good documentation. Since the checklist requires it: I think you have not yet added a statement of need to your docs. You have one in the paper, though, and it becomes clear what your Python package is used for if one has the theoretical background. Could you please add an explicit statement of need to your docs as well?
If I am correct, you do not have any automated tests in the GitHub workflow. I have only started to look at the examples (benchmarks) that you provide in your repository, and so far I have not found any comparisons with analytical results or some other sort of verification of the given results. Could you point them out, please?
Edit: I found the benchmark comparison in your dissertation, pp. 174 ff., which provides a verification of the implemented methods.

Nevertheless, there seems to be no automated testing in the workflow. @Kevin-Mattheus-Moerman, how is this usually judged?

@BernardoFerreira

Hello @Extraweich,

Thank you for your kind words and for taking the time to review this project!

Concerning the points you raised:

  • Statement of Need:

    • Thank you for pointing this out, I completely missed that! I'll add a concise Statement of Need to the docs (both the GitHub and Sphinx front pages) as you suggested.
  • Automated tests:

    • You are right, at the moment I don't have an automated testing suite in the workflow, but that is definitely a next step. Can you recommend any sources I could follow to set one up in a standard way?
    • In the context of multi-scale simulations of heterogeneous materials, analytical solutions are rarely available (perhaps only for very simple microstructures and linear constitutive behavior). Therefore, the validation of reduced-order models is usually done by comparison with the solution obtained with a Direct Numerical Simulation (DNS), e.g., the Finite Element Method, which is taken as the 'ground truth' or 'high-fidelity solution'. In this context, I provide the DNS solution of each benchmark for validation purposes (as explained here; see the sketch after this list). Although an automated test suite covering the different functions and methods is definitely important, these benchmarks are fundamental to assess the whole simulation process and solution;
    • From what I read in the JOSS review criteria, automated tests are strongly encouraged ('GOOD'), but properly documented and reproducible benchmarks that verify the expected behavior seem to be rated 'OK'.
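
To make this concrete, here is a minimal sketch of such a benchmark check, comparing a reduced-order response against a stored DNS reference. The file names, column layout, and tolerance below are illustrative assumptions, not CRATE's actual benchmark format:

# Hypothetical benchmark check: compare a reduced-order homogenized
# response against a stored DNS ('ground-truth') reference curve.
# File names, column layout, and tolerance are illustrative only.
import numpy as np

def check_benchmark(rom_file="benchmark_rom.csv", dns_file="benchmark_dns.csv",
                    rtol=0.05):
    # Each file: column 0 = equivalent strain, column 1 = homogenized stress.
    rom = np.loadtxt(rom_file, delimiter=",")
    dns = np.loadtxt(dns_file, delimiter=",")
    # Interpolate the reduced-order response onto the DNS strain points.
    rom_on_dns = np.interp(dns[:, 0], rom[:, 0], rom[:, 1])
    # Relative error with respect to the peak DNS stress.
    rel_error = np.abs(rom_on_dns - dns[:, 1]) / np.max(np.abs(dns[:, 1]))
    assert np.all(rel_error < rtol), f"max relative error: {rel_error.max():.3f}"

if __name__ == "__main__":
    check_benchmark()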

@BernardoFerreira

@Extraweich, I added the Statement of Need as you requested (you may check it here and here)!

@Extraweich

Extraweich commented Jun 28, 2023

Thank you for updating the statement of need, @BernardoFerreira; this is fully satisfactory to me. Concerning the automated testing, I will wait for what @Kevin-Mattheus-Moerman has to say, since this is the first time I am reviewing for JOSS. I agree with your statement that analytical results are rare, and I would think that the benchmark comparisons with DNS results are sufficient.

Another point on my list concerning "State of the field: Do the authors describe how this software compares to other commonly-used packages?":

Your paper does not really compare the functionality of CRATE with other software packages out there. This may be due to the fact that your SCA and ASCA approaches seem to be novel within the open-source community. A quick search on the internet brought FFTHomPy and fibergen to my attention; they at least share the FFT-based solution of the Lippmann-Schwinger equation with your package. I think it would be worth saying a few words about related packages (e.g., the packages mentioned above) and highlighting what these packages do not offer but CRATE does.

Lastly "Community guidelines: Are there clear guidelines for third parties wishing to 1) Contribute to the software 2) Report issues or problems with the software 3) Seek support":

In your GitHub repository you clearly state how to report issues with the software. Is it intentional that you do not invite others to contribute? If not, you could quickly add a CONTRIBUTING.md to your repository. You can check my repository if you need an example.

:)

@Kevin-Mattheus-Moerman
Member

@BernardoFerreira on automated testing: it is not a strict requirement to have fully automated testing, but it is highly encouraged. What is required at a minimum is that a user can run a testing suite (e.g., manually) to compare to known results. In this case, though, I would encourage you to explore adding automated unit tests, for instance.

You mention that comparison to analytical results may be challenging. However, it should be relatively straightforward for you to add some sort of verification and/or validation-type tests; these are usually “integration” tests because they test the overall behavior of the software. The typical sort of automated tests that are standard in Python packages (and of course other languages) are unit tests, which confirm that individual functions are working properly.

So although I agree that a full "test" of something like an entire CFD or FEA package may be challenging, it should be much easier to confirm that individual functions work properly given various known inputs.

So, given the above, I would encourage you to include some form of automated functionality testing / unit tests.
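
To illustrate the kind of unit test meant here, below is a minimal pytest sketch. The volume_fractions helper is a hypothetical stand-in for any small, pure utility function, not part of CRATE's actual API:

# Minimal pytest sketch of a unit test for a small, pure helper function.
# 'volume_fractions' is a hypothetical stand-in, not an actual CRATE function.
import numpy as np
import pytest

def volume_fractions(labels):
    # Return the volume fraction of each phase in a labeled voxel grid.
    phases, counts = np.unique(labels, return_counts=True)
    return dict(zip(phases.tolist(), (counts / labels.size).tolist()))

def test_volume_fractions_sum_to_one():
    labels = np.array([[0, 0], [1, 1]])
    fractions = volume_fractions(labels)
    assert fractions == {0: 0.5, 1: 0.5}
    assert sum(fractions.values()) == pytest.approx(1.0)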

Here are some resources our board members suggested when they helped me formulate this response:
https://github.com/jbusecke/cookiecutter-science-project
https://realpython.com/python-testing/
https://caam37830.github.io/book/09_computing/unittest.html
https://coderefinery.github.io/testing/

@Extraweich perhaps you can comment on the nature of the benchmarking comparisons you mentioned. Are these manual evaluations for large functional blocks (combinations) of code? Do you agree that additional and automated testing of the behaviour of functional units, as I allude to above, should still be achievable in this case?

@Extraweich

The package offers a fast way of calculating homogenization results for inhomogeneous materials using FFT-based methods. The benchmarks compare these results with more classical approaches, such as FEM calculations (direct numerical simulations, DNS). Personally, I think enough comparisons are given to prove sufficient agreement with the DNS results.
Nevertheless, I think that @Kevin-Mattheus-Moerman has a valid point about implementing some unit tests. @BernardoFerreira, you could take a couple of your major functions and compare them against known results. I did not scan your code completely, but if you have transforming/back-transforming functions (e.g., time domain -> frequency domain -> time domain), it would be a good start to test them for consistency, e.g. $f^{-1}(f(a))\overset{!}{=}a$.
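
For instance, such a round-trip consistency test could be sketched with NumPy's FFT as below; CRATE's own transform routines would be substituted here, assuming they expose matching forward and inverse transforms:

# Round-trip consistency check f^-1(f(a)) == a, sketched with NumPy's FFT.
# CRATE's own transform functions would replace np.fft here (an assumption).
import numpy as np

def test_fft_round_trip():
    rng = np.random.default_rng(seed=0)
    a = rng.standard_normal((8, 8, 8))           # e.g., a 3D field on a voxel grid
    a_round_trip = np.fft.ifftn(np.fft.fftn(a))  # forward, then inverse transform
    assert np.allclose(a_round_trip.real, a)
    assert np.allclose(a_round_trip.imag, 0.0)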

@BernardoFerreira

Hello @Extraweich,

Concerning the points you raised:

  • Comparison with other packages:

    • Yes, I believe there are no other open-source software packages out there that perform clustering-based reduced-order simulations of materials and against which I could compare the performance or functionality of CRATE;
    • Thank you for pointing out these open-source projects, which I didn't know about! In fact, I'm quite familiar with the authors' work. To the best of my knowledge, these are Direct Numerical Simulation (DNS) methods (similar to FEM) based on an FFT-based homogenization approach, so it doesn't make sense to compare their functionality/performance directly with CRATE. Nonetheless, these methods could be made available in CRATE's "DNS Solver" interface, similar to the FFT-based homogenization basic scheme that is already implemented there! I'll add them to the documentation and search for other related packages as well. Does this answer your point?
  • Contributing to the software:

    • Given that this is the first time I'm publishing open-source software, I'm still not sure what the most effective way is to promote "anonymous" contributions (some contributions are already being prepared by some of my coworkers, though!). I will take a look at your repository example and add a CONTRIBUTING.md file to my project (even if a simple one) as soon as I go through these first contributions. Does this answer your point?
  • Automated testing (also @Kevin-Mattheus-Moerman):

    • I completely agree with both of you, and I know that including automated functionality testing / unit tests is a standard (and recommended) practice in general software projects. Given the limited amount of time that I have at the moment due to other projects, this is something that I'm planning to implement incrementally as I find time to go through the different modules and build a robust unit-testing set for each of them (thank you for providing those sources, @Kevin-Mattheus-Moerman, they will definitely be useful!).
    • That being said, I would like to stress that most of CRATE's classes/methods do not lend themselves to unit testing, given that they operate on fairly complex data structures and are not meant to be used outside of a full simulation. Hence, although I can surely implement some unit tests (previous point) in some parts of the code, I believe that the full validation against DNS simulations is the primary source of the code's trustworthiness in this particular case. As we keep developing different research projects with this code, the idea is to keep adding more benchmarks with different application cases (e.g., different materials, microstructures, loadings), the corresponding 'reference' results (obtained with DNS methods) and, if available, results from the literature. This provides both validation and reproducibility!

@Extraweich, once again thank you for taking the time to review this project. Please let me know your thoughts on the points above!

@Extraweich

@BernardoFerreira:

  • Comparison with other packages:
    In my opinion, it would still be good to mention at least a few relevant packages in the paper, even if their scopes are somewhat different. I would say that the scope (homogenization in materials science/engineering) and the mathematics behind it (FFT-based solution of the Lippmann-Schwinger equation) are similar enough to warrant a comparison. When comparing these packages, it would be a good idea to highlight how CRATE is different (clustering-based model reduction) and what this functionality allows the user to do that they could not do with other packages. Would you agree with me on this point, or am I wrong in assuming that the packages are, up to a point, quite similar?

  • Contributing to the software:
    I understand that it is a bit difficult to contribute to your software, but I would still give interested users clear instructions on how to contribute. So if you add a CONTRIBUTING.md, I will be fully satisfied. :)

  • Automated testing:
    I also see that the functionality of your code is complex, which makes unit testing a bit difficult. Since @Kevin-Mattheus-Moerman stated that automated testing is not a strict necessity, and given the benchmarks in your repository and your PhD thesis, I will check off this point on my list. I would encourage you to add some unit tests in the future, though.


Another thought came to mind that has nothing to do with the publishing process here. I personally am not very familiar with the methods used in CRATE. I have some basic knowledge of FFT-based solutions to the Lippmann-Schwinger equation, since Matti Schneider (whom you cite once or twice in your dissertation) is remotely involved in my own dissertation project. Nevertheless, I found it somewhat tedious to fully grasp the potential of CRATE, certainly because it is a complicated topic. You give a lot of well-chosen benchmarks, but they kind of run themselves, and the user might feel a bit overwhelmed by the abundance of results. What do you think about adding a Jupyter notebook with a very simple step-by-step example, to give the user the opportunity to more easily understand how she/he can incorporate CRATE's functionality into her/his specific research challenges?

@BernardoFerreira

@Extraweich :

Thank you for your comments and suggestions!

  • Comparison with other packages:

    • Updated the paper as you suggested, including references to both open-source projects you mentioned and highlighting the differences with respect to CRATE.
  • Contributing to the software:

    • Updated the repository as you suggested, including both CONTRIBUTING.md and CODE_OF_CONDUCT.md files.
  • Automated testing:

    • Thank you for your understanding. Adding some unit tests is definitely on the task list!

Regarding your suggestion:

  • I'm glad that you mention this, because I'm currently preparing a short course (based on Jupyter notebooks) with exactly that goal!
  • Nonetheless, given that you may also want to take advantage of CRATE in your research, the goal of the "BASIC USAGE" section of the docs is to guide you through the different steps of building your own simulations. All the benchmark simulations follow exactly that step-by-step process. The "ADVANCED SECTION" further explains how you can customize the actual code by including new features related to your research (e.g., new material constitutive models).

@Extraweich

@editorialbot generate pdf

@editorialbot
Collaborator Author

👉📄 Download article proof 📄 View article proof on GitHub 📄 👈

@Extraweich

I have checked off all the items from the list. Thank you for your cooperation and for developing and providing CRATE to the open source community @BernardoFerreira.

@BernardoFerreira

Thank you for your review and for helping me improve this project, @Extraweich!

@Kevin-Mattheus-Moerman
Member

@RahulSundar, @atzberg, @kingyin3613 can you please provide an update on review progress? Please tick any boxes you think can be ticked. If some cannot be ticked yet please give the authors some comments here, or in dedicated issues on their software repo, on what actions should be taken. Thanks!

@Kevin-Mattheus-Moerman
Member

@atzberg, are you able to get started? You can call @editorialbot generate my checklist here to get set up. Thanks.

@atzberg

atzberg commented Jul 11, 2023 via email

@Kevin-Mattheus-Moerman
Member

@atzberg it says "Email replies do not support Markdown"; if you instead call @editorialbot generate my checklist here in a comment, you will get a proper list.

@Kevin-Mattheus-Moerman
Member

@editorialbot set 1.0.3 as version

@editorialbot
Collaborator Author

Done! version is now 1.0.3

@Kevin-Mattheus-Moerman
Member

@editorialbot set 10.5281/zenodo.8199879 as archive

@editorialbot
Collaborator Author

Done! archive is now 10.5281/zenodo.8199879

@Kevin-Mattheus-Moerman
Member

@editorialbot recommend accept

@editorialbot
Collaborator Author

I'm sorry human, I don't understand that. You can see what commands I support by typing:

@editorialbot commands

@Kevin-Mattheus-Moerman
Member

@editorialbot recommend-accept

@editorialbot
Collaborator Author

Attempting dry run of processing paper acceptance...

@editorialbot
Collaborator Author

Reference check summary (note 'MISSING' DOIs are suggestions that need verification):

OK DOIs

- 10.1016/j.cma.2016.04.004 is OK
- 10.1016/j.cma.2017.03.037 is OK
- 10.1016/j.cma.2022.114726 is OK
- 10.1038/s41586-020-2649-2 is OK
- 10.1038/s41592-019-0686-2 is OK
- 10.21105/joss.01027 is OK

MISSING DOIs

- None

INVALID DOIs

- None

@Kevin-Mattheus-Moerman
Member

@editorialbot set v1.0.3 as version

@editorialbot
Collaborator Author

Done! version is now v1.0.3

@Kevin-Mattheus-Moerman
Member

@BernardoFerreira the version tag on your repository is actually v1.0.3, not 1.0.3. Could you update the Zenodo archive to have the v as well? We need to use the official GitHub repository tag. Thanks

@editorialbot
Collaborator Author

⚠️ Error preparing paper acceptance.

@Kevin-Mattheus-Moerman
Member

@editorialbot generate pdf

@editorialbot
Collaborator Author

👉📄 Download article proof 📄 View article proof on GitHub 📄 👈

@Kevin-Mattheus-Moerman
Member

@editorialbot recommend-accept

@BernardoFerreira

@BernardoFerreira the version tag on your repository is actually v1.0.3, not 1.0.3. Could you update the Zenodo archive to have the v as well? We need to use the official GitHub repository tag. Thanks

Sorry about that, fixed it!

@editorialbot
Collaborator Author

👋 @openjournals/bcm-eics, this paper is ready to be accepted and published.

Check final proof 👉📄 Download article

If the paper PDF and the deposit XML files look good in openjournals/joss-papers#4440, then you can now move forward with accepting the submission by compiling again with the command @editorialbot accept

editorialbot added the recommend-accept label on Jul 31, 2023
@BernardoFerreira

👋 @openjournals/bcm-eics, this paper is ready to be accepted and published.

Check final proof 👉📄 Download article

If the paper PDF and the deposit XML files look good in openjournals/joss-papers#4440, then you can now move forward with accepting the submission by compiling again with the command @editorialbot accept

@Kevin-Mattheus-Moerman, checked the final proof, didn't find any issues!

@Kevin-Mattheus-Moerman
Member

@editorialbot accept

@editorialbot
Collaborator Author

Doing it live! Attempting automated processing of paper acceptance...

@editorialbot
Collaborator Author

Ensure proper citation by uploading a plain text CITATION.cff file to the default branch of your repository.

If using GitHub, a Cite this repository menu will appear in the About section, containing both APA and BibTeX formats. When exported to Zotero using a browser plugin, Zotero will automatically create an entry using the information contained in the .cff file.

You can copy the contents for your CITATION.cff file here:

CITATION.cff

cff-version: "1.2.0"
authors:
- family-names: Ferreira
  given-names: Bernardo P.
  orcid: "https://orcid.org/0000-0001-5956-3877"
- family-names: Pires
  given-names: F. M. Andrade
  orcid: "https://orcid.org/0000-0002-4802-6360"
- family-names: Bessa
  given-names: Miguel A.
  orcid: "https://orcid.org/0000-0002-6216-0355"
contact:
- family-names: Bessa
  given-names: Miguel A.
  orcid: "https://orcid.org/0000-0002-6216-0355"
doi: 10.5281/zenodo.8199879
message: If you use this software, please cite our article in the
  Journal of Open Source Software.
preferred-citation:
  authors:
  - family-names: Ferreira
    given-names: Bernardo P.
    orcid: "https://orcid.org/0000-0001-5956-3877"
  - family-names: Pires
    given-names: F. M. Andrade
    orcid: "https://orcid.org/0000-0002-4802-6360"
  - family-names: Bessa
    given-names: Miguel A.
    orcid: "https://orcid.org/0000-0002-6216-0355"
  date-published: 2023-07-31
  doi: 10.21105/joss.05594
  issn: 2475-9066
  issue: 87
  journal: Journal of Open Source Software
  publisher:
    name: Open Journals
  start: 5594
  title: "CRATE: A Python package to perform fast material simulations"
  type: article
  url: "https://joss.theoj.org/papers/10.21105/joss.05594"
  volume: 8
title: "CRATE: A Python package to perform fast material simulations"

If the repository is not hosted on GitHub, a .cff file can still be uploaded to set your preferred citation. Users will be able to manually copy and paste the citation.

Find more information on .cff files here and here.

@editorialbot
Collaborator Author

🐘🐘🐘 👉 Toot for this paper 👈 🐘🐘🐘

@editorialbot
Collaborator Author

🚨🚨🚨 THIS IS NOT A DRILL, YOU HAVE JUST ACCEPTED A PAPER INTO JOSS! 🚨🚨🚨

Here's what you must now do:

  1. Check final PDF and Crossref metadata that was deposited 👉 Creating pull request for 10.21105.joss.05594 joss-papers#4441
  2. Wait a couple of minutes, then verify that the paper DOI resolves https://doi.org/10.21105/joss.05594
  3. If everything looks good, then close this review issue.
  4. Party like you just published a paper! 🎉🌈🦄💃👻🤘

Any issues? Notify your editorial technical team...

editorialbot added the accepted and published labels on Jul 31, 2023
@Kevin-Mattheus-Moerman
Member

@BernardoFerreira congratulations on this publication in JOSS!!

I'd like to express my gratitude to the reviewers @RahulSundar, @atzberg, @Extraweich, @kingyin3613!! Thanks for processing this one so smoothly.

@editorialbot
Collaborator Author

🎉🎉🎉 Congratulations on your paper acceptance! 🎉🎉🎉

If you would like to include a link to your paper from your README use the following code snippets:

Markdown:
[![DOI](https://joss.theoj.org/papers/10.21105/joss.05594/status.svg)](https://doi.org/10.21105/joss.05594)

HTML:
<a style="border-width:0" href="https://doi.org/10.21105/joss.05594">
  <img src="https://joss.theoj.org/papers/10.21105/joss.05594/status.svg" alt="DOI badge" >
</a>

reStructuredText:
.. image:: https://joss.theoj.org/papers/10.21105/joss.05594/status.svg
   :target: https://doi.org/10.21105/joss.05594

This is how it will look in your documentation:

[DOI badge]

We need your help!

The Journal of Open Source Software is a community-run journal and relies upon volunteer effort. If you'd like to support us, please consider doing either one (or both) of the following:
