
[REVIEW]: matbench-genmetrics: A Python library for benchmarking crystal structure generative models using time-based splits of Materials Project structures #5618

Open
editorialbot opened this issue Jul 4, 2023 · 94 comments
Assignees
Labels
Dockerfile Jupyter Notebook Mathematica recommend-accept Papers recommended for acceptance in JOSS. review Track: 2 (BCM) Biomedical Engineering, Biosciences, Chemistry, and Materials

Comments


editorialbot commented Jul 4, 2023

Submitting author: @sgbaird (Sterling Baird)
Repository: https://github.com/sparks-baird/matbench-genmetrics
Branch with paper.md (empty if default branch):
Version: v0.6.5
Editor: @phibeck
Reviewers: @ml-evs, @mkhorton, @jamesrhester
Archive: 10.5281/zenodo.10840604

Status


Status badge code:

HTML: <a href="https://joss.theoj.org/papers/7ba5a67474e60ef6927a8635354b8546"><img src="https://joss.theoj.org/papers/7ba5a67474e60ef6927a8635354b8546/status.svg"></a>
Markdown: [![status](https://joss.theoj.org/papers/7ba5a67474e60ef6927a8635354b8546/status.svg)](https://joss.theoj.org/papers/7ba5a67474e60ef6927a8635354b8546)

Reviewers and authors:

Please avoid lengthy details of difficulties in the review thread. Instead, please create a new issue in the target repository and link to those issues (especially acceptance-blockers) by leaving comments in the review thread below. (For completists: if the target issue tracker is also on GitHub, linking the review thread in the issue or vice versa will create corresponding breadcrumb trails in the link target.)

Reviewer instructions & questions

@ml-evs & @mkhorton & @jamesrhester, your review will be checklist based. Each of you will have a separate checklist that you should update when carrying out your review.
First of all, you need to run this command in a separate comment to create the checklist:

@editorialbot generate my checklist

The reviewer guidelines are available here: https://joss.readthedocs.io/en/latest/reviewer_guidelines.html. Any questions/concerns please let @phibeck know.

Please start on your review when you are able, and be sure to complete it within the next six weeks at the very latest.

Checklists

📝 Checklist for @ml-evs

📝 Checklist for @jamesrhester

📝 Checklist for @mkhorton

@editorialbot editorialbot added Dockerfile Jupyter Notebook Mathematica review Track: 2 (BCM) Biomedical Engineering, Biosciences, Chemistry, and Materials labels Jul 4, 2023
@editorialbot

Hello humans, I'm @editorialbot, a robot that can help you with some common editorial tasks.

For a list of things I can do to help you, just type:

@editorialbot commands

For example, to regenerate the paper pdf after making changes in the paper's md or bib files, type:

@editorialbot generate pdf

@editorialbot

Software report:

github.com/AlDanial/cloc v 1.88  T=0.08 s (755.2 files/s, 365581.4 lines/s)
-------------------------------------------------------------------------------
Language                     files          blank        comment           code
-------------------------------------------------------------------------------
Python                          26            704           1254           1847
Jupyter Notebook                 5              0          21769           1194
Markdown                        12            208              0            679
TeX                              6             28              0            378
YAML                             5             21             72            209
INI                              1             11              0             83
Dockerfile                       1             15             23             26
make                             1              6              8             15
TOML                             1              1              3              5
JSON                             1              0              0              1
-------------------------------------------------------------------------------
SUM:                            59            994          23129           4437
-------------------------------------------------------------------------------


gitinspector failed to run statistical information for the repository

@editorialbot

Wordcount for paper.md is 1122

@editorialbot

Reference check summary (note 'MISSING' DOIs are suggestions that need verification):

OK DOIs

- 10.26434/chemrxiv-2022-6l4pm is OK
- 10.1038/s41467-019-10030-5 is OK
- 10.21105/joss.04528 is OK
- 10.1021/acs.jcim.8b00839 is OK
- 10.1038/s43588-022-00349-3 is OK
- 10.1038/s41524-020-00406-3 is OK
- 10.1063/1.4812323 is OK
- 10.1016/j.commatsci.2012.10.028 is OK
- 10.1038/s41598-022-08413-8 is OK
- 10.3389/fphar.2020.565644 is OK
- 10.1016/j.matt.2021.11.032 is OK
- 10.1107/S2056989019016244 is OK
- 10.1038/s41586-019-1335-8 is OK
- 10.1002/advs.202100566 is OK

MISSING DOIs

- None

INVALID DOIs

- None

@editorialbot

👉📄 Download article proof 📄 View article proof on GitHub 📄 👈

jamesrhester commented Jul 4, 2023

Review checklist for @jamesrhester

Conflict of interest

  • I confirm that I have read the JOSS conflict of interest (COI) policy and that: I have no COIs with reviewing this work or that any perceived COIs have been waived by JOSS for the purpose of this review.

Code of Conduct

General checks

  • Repository: Is the source code for this software available at https://github.com/sparks-baird/matbench-genmetrics?
  • License: Does the repository contain a plain-text LICENSE file with the contents of an OSI approved software license?
  • Contribution and authorship: Has the submitting author (@sgbaird) made major contributions to the software? Does the full list of paper authors seem appropriate and complete?
  • Substantial scholarly effort: Does this submission meet the scope eligibility described in the JOSS guidelines?
  • Data sharing: If the paper contains original data, data are accessible to the reviewers. If the paper contains no original data, please check this item.
  • Reproducibility: If the paper contains original results, results are entirely reproducible by reviewers. If the paper contains no original results, please check this item.
  • Human and animal research: If the paper contains original research on human subjects or animals, does it comply with JOSS's human participants research policy and/or animal research policy? If the paper contains no such data, please check this item.

Functionality

  • Installation: Does installation proceed as outlined in the documentation?
  • Functionality: Have the functional claims of the software been confirmed?
  • Performance: If there are any performance claims of the software, have they been confirmed? (If there are no claims, please check off this item.)

Documentation

  • A statement of need: Do the authors clearly state what problems the software is designed to solve and who the target audience is?
  • Installation instructions: Is there a clearly-stated list of dependencies? Ideally these should be handled with an automated package management solution.
  • Example usage: Do the authors include examples of how to use the software (ideally to solve real-world analysis problems).
  • Functionality documentation: Is the core functionality of the software documented to a satisfactory level (e.g., API method documentation)?
  • Automated tests: Are there automated tests or manual steps described so that the functionality of the software can be verified?
  • Community guidelines: Are there clear guidelines for third parties wishing to 1) Contribute to the software 2) Report issues or problems with the software 3) Seek support

Software paper

  • Summary: Has a clear description of the high-level functionality and purpose of the software for a diverse, non-specialist audience been provided?
  • A statement of need: Does the paper have a section titled 'Statement of need' that clearly states what problems the software is designed to solve, who the target audience is, and its relation to other work?
  • State of the field: Do the authors describe how this software compares to other commonly-used packages?
  • Quality of writing: Is the paper well written (i.e., it does not require editing for structure, language, or writing quality)?
  • References: Is the list of references complete, and is everything cited appropriately that should be cited (e.g., papers, datasets, software)? Do references in the text use the proper citation syntax?


ml-evs commented Jul 4, 2023

Review checklist for @ml-evs

Conflict of interest

  • I confirm that I have read the JOSS conflict of interest (COI) policy and that: I have no COIs with reviewing this work or that any perceived COIs have been waived by JOSS for the purpose of this review.

Code of Conduct

General checks

  • Repository: Is the source code for this software available at https://github.com/sparks-baird/matbench-genmetrics?
  • License: Does the repository contain a plain-text LICENSE file with the contents of an OSI approved software license?
  • Contribution and authorship: Has the submitting author (@sgbaird) made major contributions to the software? Does the full list of paper authors seem appropriate and complete?
  • Substantial scholarly effort: Does this submission meet the scope eligibility described in the JOSS guidelines?
  • Data sharing: If the paper contains original data, data are accessible to the reviewers. If the paper contains no original data, please check this item.
  • Reproducibility: If the paper contains original results, results are entirely reproducible by reviewers. If the paper contains no original results, please check this item.
  • Human and animal research: If the paper contains original research on human subjects or animals, does it comply with JOSS's human participants research policy and/or animal research policy? If the paper contains no such data, please check this item.

Functionality

  • Installation: Does installation proceed as outlined in the documentation?
  • Functionality: Have the functional claims of the software been confirmed?
  • Performance: If there are any performance claims of the software, have they been confirmed? (If there are no claims, please check off this item.)

Documentation

  • A statement of need: Do the authors clearly state what problems the software is designed to solve and who the target audience is?
  • Installation instructions: Is there a clearly-stated list of dependencies? Ideally these should be handled with an automated package management solution.
  • Example usage: Do the authors include examples of how to use the software (ideally to solve real-world analysis problems).
  • Functionality documentation: Is the core functionality of the software documented to a satisfactory level (e.g., API method documentation)?
  • Automated tests: Are there automated tests or manual steps described so that the functionality of the software can be verified?
  • Community guidelines: Are there clear guidelines for third parties wishing to 1) Contribute to the software 2) Report issues or problems with the software 3) Seek support

Software paper

  • Summary: Has a clear description of the high-level functionality and purpose of the software for a diverse, non-specialist audience been provided?
  • A statement of need: Does the paper have a section titled 'Statement of need' that clearly states what problems the software is designed to solve, who the target audience is, and its relation to other work?
  • State of the field: Do the authors describe how this software compares to other commonly-used packages?
  • Quality of writing: Is the paper well written (i.e., it does not require editing for structure, language, or writing quality)?
  • References: Is the list of references complete, and is everything cited appropriately that should be cited (e.g., papers, datasets, software)? Do references in the text use the proper citation syntax?


phibeck commented Jul 14, 2023

@jamesrhester, @ml-evs, @mkhorton, thanks again for reviewing! Just checking in on your review. Please let me know if you have any questions about the process. Feel free to create issues in the project repository directly or write them down as comments here, but please do link the issues in this review so it's easy to follow for everyone. @mkhorton, please go ahead and create your checklist first using the command @editorialbot generate my checklist.


mkhorton commented Jul 15, 2023

Review checklist for @mkhorton

Conflict of interest

  • I confirm that I have read the JOSS conflict of interest (COI) policy and that: I have no COIs with reviewing this work or that any perceived COIs have been waived by JOSS for the purpose of this review.

Code of Conduct

General checks

  • Repository: Is the source code for this software available at https://github.com/sparks-baird/matbench-genmetrics?
  • License: Does the repository contain a plain-text LICENSE file with the contents of an OSI approved software license?
  • Contribution and authorship: Has the submitting author (@sgbaird) made major contributions to the software? Does the full list of paper authors seem appropriate and complete?
  • Substantial scholarly effort: Does this submission meet the scope eligibility described in the JOSS guidelines?
  • Data sharing: If the paper contains original data, data are accessible to the reviewers. If the paper contains no original data, please check this item.
  • Reproducibility: If the paper contains original results, results are entirely reproducible by reviewers. If the paper contains no original results, please check this item.
  • Human and animal research: If the paper contains original research on human subjects or animals, does it comply with JOSS's human participants research policy and/or animal research policy? If the paper contains no such data, please check this item.

Functionality

  • Installation: Does installation proceed as outlined in the documentation?
  • Functionality: Have the functional claims of the software been confirmed?
  • Performance: If there are any performance claims of the software, have they been confirmed? (If there are no claims, please check off this item.)

Documentation

  • A statement of need: Do the authors clearly state what problems the software is designed to solve and who the target audience is?
  • Installation instructions: Is there a clearly-stated list of dependencies? Ideally these should be handled with an automated package management solution.
  • Example usage: Do the authors include examples of how to use the software (ideally to solve real-world analysis problems).
  • Functionality documentation: Is the core functionality of the software documented to a satisfactory level (e.g., API method documentation)?
  • Automated tests: Are there automated tests or manual steps described so that the functionality of the software can be verified?
  • Community guidelines: Are there clear guidelines for third parties wishing to 1) Contribute to the software 2) Report issues or problems with the software 3) Seek support

Software paper

  • Summary: Has a clear description of the high-level functionality and purpose of the software for a diverse, non-specialist audience been provided?
  • A statement of need: Does the paper have a section titled 'Statement of need' that clearly states what problems the software is designed to solve, who the target audience is, and its relation to other work?
  • State of the field: Do the authors describe how this software compares to other commonly-used packages?
  • Quality of writing: Is the paper well written (i.e., it does not require editing for structure, language, or writing quality)?
  • References: Is the list of references complete, and is everything cited appropriately that should be cited (e.g., papers, datasets, software)? Do references in the text use the proper citation syntax?


phibeck commented Jul 24, 2023

👋 @jamesrhester , @mkhorton, please update us on how it's going with your reviews when you find the time

@mkhorton

Appreciate the reminder @phibeck, thank you -- on my radar, can't believe it's almost been a month already since agreeing to review!

@jamesrhester

Likewise!


phibeck commented Aug 10, 2023

Thank you all for getting the review started! As you work through your checklists, please feel free to comment and ask questions in this thread. You are encouraged to create issues in the repository directly. When you do, please mention openjournals/joss-reviews#5618 so that it creates a link in this thread and we can keep track of it.

Please let me know if you have any questions or if either of you requires some more time.

@mkhorton

Hi @phibeck, running through the checklist. Unfortunately I may have a conflict of interest: I have previously been on publications with the second author, Joseph Montoya. We worked together in the same research group until 2018 (i.e. outside the four year window in the COI policy), but the most recent paper we were on together actually only came out in 2021.

@danielskatz

If the work was done more than 4 years ago but the paper appeared later, this is not a conflict for JOSS. The four-year window is about the collaborative relationship itself.


phibeck commented Aug 18, 2023

Thank you, @danielskatz, for clarifying this question. Sounds like you don't have a COI here, @mkhorton.


ml-evs commented Aug 22, 2023

Just a heads-up (mostly for @phibeck) that I will restart my review on this, and will continue collecting small things in my old issue at sparks-baird/matbench-genmetrics#80 which wasn't previously linked here.


phibeck commented Sep 4, 2023

@jamesrhester, @mkhorton & @ml-evs - could you provide an update on the progress of your review? Thank you!

@jamesrhester

Getting onto this now. We've just had a big crystallography meeting that took up a lot of my cycles...

@jamesrhester

As a crystallographer but non-ML specialist, I found the "Statement of Need" lacked context. Line 22 in the paper made no sense to me, which is not good for the first sentence. I would therefore like one or two further sentences added at the beginning explaining how ML uses benchmarks, e.g. "in ML, the result of a prediction is evaluated using benchmarks, which are then used to adjust the ML weights. Typically, a crystal structure ML model has used benchmarks from..." (which might show I have no idea what I'm talking about). I think this will better help readers to quickly determine whether or not the paper is relevant to them.

Once this is done I'm ready to sign off on my review.


phibeck commented Sep 5, 2023

Great, thanks for the update, @jamesrhester. @sgbaird, feel free to get started working on the comments and issues linked here by @jamesrhester and @ml-evs. Please update us here in this issue about the progress so we can keep track of the changes.


phibeck commented Sep 20, 2023

👋 @sgbaird could you let us know where you stand with responding to the comments of the reviewers?

@mkhorton and @ml-evs, it would be great to get an update from your side as well regarding the remaining points on your checklists. Thank you!


phibeck commented May 8, 2024

@editorialbot check references

@editorialbot

Reference check summary (note 'MISSING' DOIs are suggestions that need verification):

OK DOIs

- 10.26434/chemrxiv-2022-6l4pm is OK
- 10.1038/s41467-019-10030-5 is OK
- 10.21105/joss.04528 is OK
- 10.1021/acs.jcim.8b00839 is OK
- 10.1038/s43588-022-00349-3 is OK
- 10.1038/s41524-020-00406-3 is OK
- 10.1063/1.4812323 is OK
- 10.1016/j.commatsci.2012.10.028 is OK
- 10.1038/s41598-022-08413-8 is OK
- 10.3389/fphar.2020.565644 is OK
- 10.1016/j.matt.2021.11.032 is OK
- 10.1107/S2056989019016244 is OK
- 10.1038/s41586-019-1335-8 is OK
- 10.1002/advs.202100566 is OK
- 10.48550/arXiv.2306.11688 is OK
- 10.48550/arXiv.2308.14920 is OK

MISSING DOIs

- No DOI given, and none found for title: Scikit-Learn: Machine Learning in Python
- No DOI given, and none found for title: Crystal Diffusion Variational Autoencoder for Peri...
- No DOI given, and none found for title: Physics Guided Generative Adversarial Networks for...

INVALID DOIs

- None


phibeck commented May 8, 2024

@editorialbot generate pdf

@editorialbot

👉📄 Download article proof 📄 View article proof on GitHub 📄 👈


phibeck commented May 8, 2024

@sgbaird thanks! Here are a few more comments/suggestions for the manuscript. Please have a look when you have a moment.

  • line 12: I would omit the square brackets since the citation seems part of the sentence
  • line 27: the acronyms aren't used in the remainder of the manuscript, so it doesn't seem necessary to introduce them here
  • line 57: perhaps you could introduce a linebreak in front of matbench_genmetrics.core since it exceeds the linewidth (not sure why this isn't done automatically..)
  • line 165ff: it seems this reference has been published in the meantime, please update the reference accordingly (https://www.nature.com/articles/s41524-023-00987-9)

Since I cannot find a record of the last two co-authors' contribution in your repository, could you please state their contributions here for the record of review? You can check out the guidance for authorship here: https://joss.readthedocs.io/en/latest/submitting.html#authorship Thanks!


phibeck commented May 8, 2024

@editorialbot set v0.6.5 as version

@editorialbot

Done! version is now v0.6.5


phibeck commented May 8, 2024

@editorialbot set 10.5281/zenodo.10840604 as archive

@editorialbot

Done! archive is now 10.5281/zenodo.10840604


sgbaird commented May 9, 2024

@sgbaird thanks! Here are a few more comments/suggestions for the manuscript. Please have a look when you have a moment.

  • line 12: I would omit the square brackets since the citation seems part of the sentence

Done!

  • line 27: the acronyms aren't used in the remainder of the manuscript, so it doesn't seem necessary to introduce them here

Agreed, removed

  • line 57: perhaps you could introduce a linebreak in front of matbench_genmetrics.core since it exceeds the linewidth (not sure why this isn't done automatically..)

EDIT: Changed the wording to get the linebreak

Updated!

Since I cannot find a record of the last two co-authors' contribution in your repository, could you please state their contributions here for the record of review? You can check out the guidance for authorship here: joss.readthedocs.io/en/latest/submitting.html#authorship Thanks!

@JosephMontoya-TRI supplied code and an implementation related to the mp-time-split portion.
@sp8rks participated in the ideation and development / funding, vision, etc.
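For context on the mp-time-split contribution mentioned above: the core idea is a chronological (time-based) split of Materials Project structures, so that models are trained on earlier-discovered materials and evaluated on later ones. A minimal sketch of that idea is below; the IDs, years, and the `time_split` function are hypothetical illustrations, not the package's actual API.

```python
# Illustrative sketch of a time-based (chronological) train/test split in the
# spirit of mp-time-split. Entries, years, and this function are hypothetical;
# matbench-genmetrics' real API differs.

def time_split(entries, cutoff_year):
    """Train on structures published strictly before cutoff_year; test on the rest."""
    train = [mpid for mpid, year in entries if year < cutoff_year]
    test = [mpid for mpid, year in entries if year >= cutoff_year]
    return train, test

entries = [("mp-1", 2005), ("mp-2", 2012), ("mp-3", 2018), ("mp-4", 2021)]
train, test = time_split(entries, cutoff_year=2015)
print(train)  # ['mp-1', 'mp-2']
print(test)   # ['mp-3', 'mp-4']
```

The point of splitting by time rather than at random is to mimic prospective discovery: a generative model benchmarked this way cannot "peek" at structures that had not yet been reported as of the training cutoff.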

sgbaird added a commit to sparks-baird/matbench-genmetrics that referenced this issue May 9, 2024

sgbaird commented May 9, 2024

Had trouble getting the line break, so I updated the wording slightly to get it instead: sparks-baird/matbench-genmetrics@d05ead2


phibeck commented May 9, 2024

@editorialbot generate pdf

@editorialbot

👉📄 Download article proof 📄 View article proof on GitHub 📄 👈


phibeck commented May 9, 2024

Hi @sgbaird

EDIT: Changed the wording to get the linebreak

Looks good, thanks!

Updated!

It seems that the reference zhao_physics_2023 didn't make it into the .bib file, could you please push this last change? Thanks!

@JosephMontoya-TRI supplied code and an implementation related to the mp-time-split portion. @sp8rks participated in the ideation and development / funding, vision, etc.

Okay, thank you for clarifying!


sgbaird commented May 9, 2024

Sorry about that. Not sure what happened there. I added it in just now. Does it look ok?


phibeck commented May 10, 2024

@editorialbot generate pdf

@editorialbot

👉📄 Download article proof 📄 View article proof on GitHub 📄 👈


phibeck commented May 10, 2024

Sorry about that. Not sure what happened there. I added it in just now. Does it look ok?

No problem. Looks good now, thanks!


phibeck commented May 10, 2024

@editorialbot check references

@editorialbot

Reference check summary (note 'MISSING' DOIs are suggestions that need verification):

OK DOIs

- 10.1038/s41524-023-00987-9 is OK
- 10.26434/chemrxiv-2022-6l4pm is OK
- 10.1038/s41467-019-10030-5 is OK
- 10.21105/joss.04528 is OK
- 10.1021/acs.jcim.8b00839 is OK
- 10.1038/s43588-022-00349-3 is OK
- 10.1038/s41524-020-00406-3 is OK
- 10.1063/1.4812323 is OK
- 10.1016/j.commatsci.2012.10.028 is OK
- 10.1038/s41598-022-08413-8 is OK
- 10.3389/fphar.2020.565644 is OK
- 10.1016/j.matt.2021.11.032 is OK
- 10.1107/S2056989019016244 is OK
- 10.1038/s41586-019-1335-8 is OK
- 10.1002/advs.202100566 is OK
- 10.48550/arXiv.2306.11688 is OK
- 10.48550/arXiv.2308.14920 is OK

MISSING DOIs

- No DOI given, and none found for title: Scikit-Learn: Machine Learning in Python
- No DOI given, and none found for title: Crystal Diffusion Variational Autoencoder for Peri...
- No DOI given, and none found for title: Physics Guided Generative Adversarial Networks for...

INVALID DOIs

- None


phibeck commented May 10, 2024

@editorialbot recommend-accept

@editorialbot

Attempting dry run of processing paper acceptance...

@editorialbot

Reference check summary (note 'MISSING' DOIs are suggestions that need verification):

OK DOIs

- 10.1038/s41524-023-00987-9 is OK
- 10.26434/chemrxiv-2022-6l4pm is OK
- 10.1038/s41467-019-10030-5 is OK
- 10.21105/joss.04528 is OK
- 10.1021/acs.jcim.8b00839 is OK
- 10.1038/s43588-022-00349-3 is OK
- 10.1038/s41524-020-00406-3 is OK
- 10.1063/1.4812323 is OK
- 10.1016/j.commatsci.2012.10.028 is OK
- 10.1038/s41598-022-08413-8 is OK
- 10.3389/fphar.2020.565644 is OK
- 10.1016/j.matt.2021.11.032 is OK
- 10.1107/S2056989019016244 is OK
- 10.1038/s41586-019-1335-8 is OK
- 10.1002/advs.202100566 is OK
- 10.48550/arXiv.2306.11688 is OK
- 10.48550/arXiv.2308.14920 is OK

MISSING DOIs

- No DOI given, and none found for title: Scikit-Learn: Machine Learning in Python
- No DOI given, and none found for title: Crystal Diffusion Variational Autoencoder for Peri...
- No DOI given, and none found for title: Physics Guided Generative Adversarial Networks for...

INVALID DOIs

- None

@editorialbot

👋 @openjournals/bcm-eics, this paper is ready to be accepted and published.

Check final proof 👉📄 Download article

If the paper PDF and the deposit XML files look good in openjournals/joss-papers#5342, then you can now move forward with accepting the submission by compiling again with the command @editorialbot accept

@editorialbot editorialbot added the recommend-accept Papers recommended for acceptance in JOSS. label May 10, 2024

sgbaird commented May 15, 2024

I guess the Alverson reference needs to be updated to the published version. Will try to address shortly - no worries if too late.
