
[REVIEW]: SeqMetrics: a unified library for performance metrics calculation in Python #6450

Closed
editorialbot opened this issue Mar 7, 2024 · 85 comments
Labels: accepted · published (Papers published in JOSS) · Python · recommend-accept (Papers recommended for acceptance in JOSS) · review · TeX · Track: 7 (CSISM) Computer science, Information Science, and Mathematics


@editorialbot

editorialbot commented Mar 7, 2024

Submitting author: @AtrCheema (Ather Abbas)
Repository: https://github.com/AtrCheema/SeqMetrics
Branch with paper.md (empty if default branch): master
Version: v2.0.0
Editor: @mstimberg
Reviewers: @FATelarico, @y1my1, @SkafteNicki
Archive: 10.5281/zenodo.12958902

Status


Status badge code:

HTML: <a href="https://joss.theoj.org/papers/60529204a3b812ef772e3aa8bceccd16"><img src="https://joss.theoj.org/papers/60529204a3b812ef772e3aa8bceccd16/status.svg"></a>
Markdown: [![status](https://joss.theoj.org/papers/60529204a3b812ef772e3aa8bceccd16/status.svg)](https://joss.theoj.org/papers/60529204a3b812ef772e3aa8bceccd16)

Reviewers and authors:

Please avoid lengthy details of difficulties in the review thread. Instead, please create a new issue in the target repository and link to those issues (especially acceptance-blockers) by leaving comments in the review thread below. (For completists: if the target issue tracker is also on GitHub, linking the review thread in the issue or vice versa will create corresponding breadcrumb trails in the link target.)

Reviewer instructions & questions

@FATelarico & @y1my1 & @SkafteNicki, your review will be checklist-based. Each of you will have a separate checklist that you should update when carrying out your review.
First of all, you need to run this command in a separate comment to create the checklist:

@editorialbot generate my checklist

The reviewer guidelines are available here: https://joss.readthedocs.io/en/latest/reviewer_guidelines.html. If you have any questions/concerns, please let @mstimberg know.

Please start on your review when you are able, and be sure to complete your review within the next six weeks at the very latest.

Checklists

📝 Checklist for @FATelarico

📝 Checklist for @SkafteNicki

📝 Checklist for @y1my1

@editorialbot

Hello humans, I'm @editorialbot, a robot that can help you with some common editorial tasks.

For a list of things I can do to help you, just type:

@editorialbot commands

For example, to regenerate the paper pdf after making changes in the paper's md or bib files, type:

@editorialbot generate pdf

@editorialbot

Software report:

github.com/AlDanial/cloc v 1.90  T=0.06 s (515.4 files/s, 183199.4 lines/s)
-------------------------------------------------------------------------------
Language                     files          blank        comment           code
-------------------------------------------------------------------------------
Python                          12           1577           3313           3905
Markdown                         3            168              0            825
YAML                             4             14             13             86
reStructuredText                 7             52            187             55
TeX                              1              5              0             54
DOS Batch                        1              8              1             26
make                             1              4              7              9
-------------------------------------------------------------------------------
SUM:                            29           1828           3521           4960
-------------------------------------------------------------------------------

Commit count by author:

    60	AtrCheema
     7	Sara-Iftikhar
     6	Ather Abbas
     4	FazilaRubab
     1	The Codacy Badger

@editorialbot

Paper file info:

📄 Wordcount for paper.md is 1026

✅ The paper includes a Statement of need section

@editorialbot

Reference check summary (note 'MISSING' DOIs are suggestions that need verification):

OK DOIs

- 10.1029/2007JD008972 is OK
- 10.48550/arXiv.1809.03006 is OK

MISSING DOIs

- 10.1163/2214-8647_dnp_e612900 may be a valid DOI for title: Keras
- No DOI given, and none found for title: Scikit-learn: Machine learning in Python

INVALID DOIs

- 10.21105/joss.041012 is INVALID

@editorialbot

License info:

🟡 License found: GNU General Public License v3.0 (Check here for OSI approval)

@editorialbot

👉📄 Download article proof 📄 View article proof on GitHub 📄 👈

@mstimberg

👋🏼 @AtrCheema, @FATelarico, @y1my1, @SkafteNicki, this is the review thread for the paper. All of our communications will happen here from now on.

As a reviewer, the first step is to create a checklist for your review by entering

@editorialbot generate my checklist

at the top of a new comment in this thread.

There are additional guidelines in the message at the start of this issue.

Please feel free to ping me (@mstimberg) if you have any questions/concerns.

@SkafteNicki

SkafteNicki commented Mar 7, 2024

Review checklist for @SkafteNicki

Conflict of interest

  • I confirm that I have read the JOSS conflict of interest (COI) policy and that: I have no COIs with reviewing this work or that any perceived COIs have been waived by JOSS for the purpose of this review.

Code of Conduct

  • I confirm that I read and will adhere to the JOSS code of conduct.

General checks

  • Repository: Is the source code for this software available at https://github.com/AtrCheema/SeqMetrics?
  • License: Does the repository contain a plain-text LICENSE or COPYING file with the contents of an OSI approved software license?
  • Contribution and authorship: Has the submitting author (@AtrCheema) made major contributions to the software? Does the full list of paper authors seem appropriate and complete?
  • Substantial scholarly effort: Does this submission meet the scope eligibility described in the JOSS guidelines?
  • Data sharing: If the paper contains original data, data are accessible to the reviewers. If the paper contains no original data, please check this item.
  • Reproducibility: If the paper contains original results, results are entirely reproducible by reviewers. If the paper contains no original results, please check this item.
  • Human and animal research: If the paper contains original data research on human subjects or animals, does it comply with JOSS's human participants research policy and/or animal research policy? If the paper contains no such data, please check this item.

Functionality

  • Installation: Does installation proceed as outlined in the documentation?
  • Functionality: Have the functional claims of the software been confirmed?
  • Performance: If there are any performance claims of the software, have they been confirmed? (If there are no claims, please check off this item.)

Documentation

  • A statement of need: Do the authors clearly state what problems the software is designed to solve and who the target audience is?
  • Installation instructions: Is there a clearly-stated list of dependencies? Ideally these should be handled with an automated package management solution.
  • Example usage: Do the authors include examples of how to use the software (ideally to solve real-world analysis problems).
  • Functionality documentation: Is the core functionality of the software documented to a satisfactory level (e.g., API method documentation)?
  • Automated tests: Are there automated tests or manual steps described so that the functionality of the software can be verified?
  • Community guidelines: Are there clear guidelines for third parties wishing to 1) Contribute to the software 2) Report issues or problems with the software 3) Seek support

Software paper

  • Summary: Has a clear description of the high-level functionality and purpose of the software for a diverse, non-specialist audience been provided?
  • A statement of need: Does the paper have a section titled 'Statement of need' that clearly states what problems the software is designed to solve, who the target audience is, and its relation to other work?
  • State of the field: Do the authors describe how this software compares to other commonly-used packages?
  • Quality of writing: Is the paper well written (i.e., it does not require editing for structure, language, or writing quality)?
  • References: Is the list of references complete, and is everything cited appropriately that should be cited (e.g., papers, datasets, software)? Do references in the text use the proper citation syntax?

@SkafteNicki

Going to preface my review by saying that I am the maintainer of Torchmetrics, which is referenced in this paper. The TM team welcomes more libraries within the field of evaluating machine learning models, as we consider this paramount for the field of machine learning to move forward. Also, we do not see SeqMetrics as a library in direct competition, since the difference in computational backend (pytorch for TM vs numpy for SeqMetrics) makes each package suitable for different researchers.

Overall, I am satisfied with the paper as it is now. SeqMetrics is a nice software package that can be used to calculate a large range of metrics on 1D data. It is therefore narrow in scope, but that also means it can be great at what it does (it definitely seems faster than Torchmetrics for calculating a lot of metrics in one go). It has a simple and consistent interface and is easy to use. The paper has a clear problem statement, relevant references, and an explanation of the API.

However, my main concern is the robustness of the package, which is a large claim made by the authors throughout the paper. I have laid out my full review in this issue: AtrCheema/SeqMetrics#3, with proposed changes. There are a few points that currently block me from recommending this paper for acceptance to JOSS.

@FATelarico

FATelarico commented Mar 8, 2024

Review checklist for @FATelarico

Conflict of interest

  • I confirm that I have read the JOSS conflict of interest (COI) policy and that: I have no COIs with reviewing this work or that any perceived COIs have been waived by JOSS for the purpose of this review.

Code of Conduct

  • I confirm that I read and will adhere to the JOSS code of conduct.

General checks

  • Repository: Is the source code for this software available at https://github.com/AtrCheema/SeqMetrics?
  • License: Does the repository contain a plain-text LICENSE or COPYING file with the contents of an OSI approved software license?
  • Contribution and authorship: Has the submitting author (@AtrCheema) made major contributions to the software? Does the full list of paper authors seem appropriate and complete?
  • Substantial scholarly effort: Does this submission meet the scope eligibility described in the JOSS guidelines?
  • Data sharing: If the paper contains original data, data are accessible to the reviewers. If the paper contains no original data, please check this item.
  • Reproducibility: If the paper contains original results, results are entirely reproducible by reviewers. If the paper contains no original results, please check this item.
  • Human and animal research: If the paper contains original data research on human subjects or animals, does it comply with JOSS's human participants research policy and/or animal research policy? If the paper contains no such data, please check this item.

Functionality

  • Installation: Does installation proceed as outlined in the documentation?
  • Functionality: Have the functional claims of the software been confirmed?
  • Performance: If there are any performance claims of the software, have they been confirmed? (If there are no claims, please check off this item.)

Documentation

  • A statement of need: Do the authors clearly state what problems the software is designed to solve and who the target audience is?
  • Installation instructions: Is there a clearly-stated list of dependencies? Ideally these should be handled with an automated package management solution.
  • Example usage: Do the authors include examples of how to use the software (ideally to solve real-world analysis problems).
  • Functionality documentation: Is the core functionality of the software documented to a satisfactory level (e.g., API method documentation)?
  • Automated tests: Are there automated tests or manual steps described so that the functionality of the software can be verified?
  • Community guidelines: Are there clear guidelines for third parties wishing to 1) Contribute to the software 2) Report issues or problems with the software 3) Seek support

Software paper

  • Summary: Has a clear description of the high-level functionality and purpose of the software for a diverse, non-specialist audience been provided?
  • A statement of need: Does the paper have a section titled 'Statement of need' that clearly states what problems the software is designed to solve, who the target audience is, and its relation to other work?
  • State of the field: Do the authors describe how this software compares to other commonly-used packages?
  • Quality of writing: Is the paper well written (i.e., it does not require editing for structure, language, or writing quality)?
  • References: Is the list of references complete, and is everything cited appropriately that should be cited (e.g., papers, datasets, software)? Do references in the text use the proper citation syntax?

@FATelarico

FATelarico commented Mar 8, 2024

I concur with @SkafteNicki that the submission is almost good to go. However, I have to subscribe to some of the concerns he raised in his review (and the associated issue). Moreover, I have the following short comments to make.

Functionality

Installation

As I have encountered significant issues with this sort of flaw in the past, I invite the authors to check the entire software on the newest stable release of Python 3 and to update all the main packages (especially numpy, of which most people probably have a much more recent version).

  • Outdated dependencies

Personally, I had no problems running the program on Python (downgraded to 3.7) under Ubuntu 22. But it would not run on Windows 10 (64-bit) with Python 3.12.2 without downgrading. However, compatibility should be verified after the program is fully updated, and the relevant information should be appended to the README.md file only then.

  • OS compatibility unclear

Functionality

I am not completely sure this is the right heading under which to put this comment, but it relates to 'claims' the paper makes. In fact, I did not see satisfactory indications of the tests' robustness. @SkafteNicki wrote extensively, and better than I could, about it.

Documentation

Community guidelines

There are ready-made templates for adding community guidelines. Consider just copy-pasting them and adapting the content to your needs. For instance: https://bttger.github.io/contributing-gen-web/, which is based on contributing-gen.

Consider adding an 'Installation for contributors' heading for quick reference, if in agreement with your intended policy.

  • Lack of contributing guidelines

Software paper

State of the field

Content referable under this point is contained in rows 24-31 of the Statement of need section. I would like the authors to consider shortening these passages and adding a separate heading explicitly dedicated to comparing their software to Keras, scikit-learn, Torchmetrics, forecasting_metrics, hydroeval, and others. They do not necessarily need to consider all of them, but at least the most widely used.

In particular, the paper would benefit from a clear description of (some of) the use cases in which SeqMetrics is technically preferable to existing alternatives, as opposed to applications in which its main added value is the GUI. For instance, the emphasis here is clearly on tabular and time-series (one-dimensional) data. But reading the paper, at times, one may forget it.

  • Shallow comparison

Quality of writing

The language and style satisfy the standards of academic writing. However, I suspend checking this box until the paper is complete.

References

Five references seem too few for a paper that should help SeqMetrics stand out in a rather crowded field. Even if the above-mentioned suggestions to include additional sections are rejected, this issue ought to be settled through a more intense dialogue with existing tools.

  • Partial engagement with the field

Postface to any review

The present comments are intended as invitations to make some edits; motivated rejections can lead to constructive discussion in some cases.

@y1my1

y1my1 commented Mar 13, 2024

Review checklist for @y1my1

Conflict of interest

  • I confirm that I have read the JOSS conflict of interest (COI) policy and that: I have no COIs with reviewing this work or that any perceived COIs have been waived by JOSS for the purpose of this review.

Code of Conduct

  • I confirm that I read and will adhere to the JOSS code of conduct.

General checks

  • Repository: Is the source code for this software available at https://github.com/AtrCheema/SeqMetrics?
  • License: Does the repository contain a plain-text LICENSE or COPYING file with the contents of an OSI approved software license?
  • Contribution and authorship: Has the submitting author (@AtrCheema) made major contributions to the software? Does the full list of paper authors seem appropriate and complete?
  • Substantial scholarly effort: Does this submission meet the scope eligibility described in the JOSS guidelines?
  • Data sharing: If the paper contains original data, data are accessible to the reviewers. If the paper contains no original data, please check this item.
  • Reproducibility: If the paper contains original results, results are entirely reproducible by reviewers. If the paper contains no original results, please check this item.
  • Human and animal research: If the paper contains original data research on human subjects or animals, does it comply with JOSS's human participants research policy and/or animal research policy? If the paper contains no such data, please check this item.

Functionality

  • Installation: Does installation proceed as outlined in the documentation?
  • Functionality: Have the functional claims of the software been confirmed?
  • Performance: If there are any performance claims of the software, have they been confirmed? (If there are no claims, please check off this item.)

Documentation

  • A statement of need: Do the authors clearly state what problems the software is designed to solve and who the target audience is?
  • Installation instructions: Is there a clearly-stated list of dependencies? Ideally these should be handled with an automated package management solution.
  • Example usage: Do the authors include examples of how to use the software (ideally to solve real-world analysis problems).
  • Functionality documentation: Is the core functionality of the software documented to a satisfactory level (e.g., API method documentation)?
  • Automated tests: Are there automated tests or manual steps described so that the functionality of the software can be verified?
  • Community guidelines: Are there clear guidelines for third parties wishing to 1) Contribute to the software 2) Report issues or problems with the software 3) Seek support

Software paper

  • Summary: Has a clear description of the high-level functionality and purpose of the software for a diverse, non-specialist audience been provided?
  • A statement of need: Does the paper have a section titled 'Statement of need' that clearly states what problems the software is designed to solve, who the target audience is, and its relation to other work?
  • State of the field: Do the authors describe how this software compares to other commonly-used packages?
  • Quality of writing: Is the paper well written (i.e., it does not require editing for structure, language, or writing quality)?
  • References: Is the list of references complete, and is everything cited appropriately that should be cited (e.g., papers, datasets, software)? Do references in the text use the proper citation syntax?

@mstimberg

👋 hi everyone.

Thanks a lot @SkafteNicki and @FATelarico for your reviews (special shoutout to @SkafteNicki for the fastest review I've ever received 😊 )!

@AtrCheema: could you let us know whether you are already working on incorporating the feedback, or are you still waiting for comments from the third reviewer?

@y1my1: could you please give us a rough timeline when you can provide your review?

Thanks again for your help with the review process.

@AtrCheema

Hi @mstimberg, we are already working on the comments of @SkafteNicki and @FATelarico. Thanks to both of them for their valuable feedback.

@y1my1

y1my1 commented Mar 22, 2024

Overall, this is a good package that may meet some needs of the scientific community. However, I echo most of the comments raised by @SkafteNicki and @FATelarico, especially about the documentation and the writing of the paper. @SkafteNicki and @FATelarico have already made great suggestions. What follows are just some minor issues that may help improve the package.

Documentation

It's great that the authors provide documentation through readthedocs. The authors put a lot of effort there into providing important information about the computation of the metrics, such as the formulas for some of them. However, for some of the metrics, the authors just provide a reference; it would be great if they could at least provide a formula that helps users understand what happens under the hood of the computation. It would also be beneficial if the authors could write a concise introduction there.

Software paper

The writing of the paper follows academic standards and is mostly understandable. However, there is room for improvement to make it easier to read. For example, this sentence is not very well written:

Torchmetrics library, (Detlefsen et al., 2022) although contains 100+ metrics, however, it provides only 48 which are intended for 1-dimensional numerical data.

@mstimberg

👋 @AtrCheema could you please give us an update on where you are with the changes to address the reviewer comments?

@AtrCheema

Hi @mstimberg, thanks for the follow-up. We are modifying the code. Some changes have already been pushed, while others will be pushed soon (in a couple of days, hopefully). Can you please tell us if there is a deadline for the revision?

@mstimberg

Hi @AtrCheema, thanks for the update. There is no strict deadline for the revision, but we prefer not to drag it out for too long, since it will be more work for the reviewers to remember what everything was about. If you could provide your updates/replies to the reviewers by the end of next week, that would be great. Please let me know if you need more time than that. Thanks!

@mstimberg

👋 @AtrCheema, could you give us an update with regard to the changes addressing reviewer comments?

@AtrCheema

@mstimberg Sorry for the delayed response. I was sick and bedridden for more than a week. I am back at work now, and our response will be complete by the end of this week (Friday). Again, apologies for this unexpected delay.

@mstimberg

mstimberg commented Apr 23, 2024

Many thanks for getting back to us, @AtrCheema, sorry to hear that you were ill. No worries of course for the delay, looking forward to your update.

@mstimberg

👋 @AtrCheema I hope you are doing well. Could you please let us know where you are with respect to the updates?

@AtrCheema

@editorialbot generate pdf

@mstimberg We are almost done with the review. I apologize that the review took quite some time, which was not anticipated at the start.

I would first like to respond to the comments made by @SkafteNicki, which are the most comprehensive ones; moreover, they are endorsed by, and overlap with, those of the other two reviewers.

Moreover, I would like to thank all three reviewers for taking the time to review the repository in detail. By addressing the comments, we have not only improved the overall quality of the package but also removed some bugs.

Comments by @SkafteNicki

You mention that easy_mpl is needed for plotting the metrics. However, it is not mentioned in the documentation or README that you can actually install this by writing pip install SeqMetrics[all]. Please add these additional install instructions.

Response: We have updated the README and the documentation to add the additional install instructions.
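For reference, the two install variants named above look like this (a sketch; the exact set of optional dependencies pulled in by [all] is defined by the package's setup files):

pip install SeqMetrics         # core library only
pip install SeqMetrics[all]    # core plus optional extras such as easy_mpl for plotting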

Documentation for the class-based API is essentially missing: https://seqmetrics.readthedocs.io/en/latest/rgr.html#SeqMetrics.RegressionMetrics. I know this is because it is just calling the functional API, but it would then be great if there was a reference per metric to its functional counterpart.

Response: Initially we thought that adding the same documentation for the methods of the class-based API would involve significant duplication. However, we have now added the documentation for the class-based API as well, for both regression and classification.
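As a minimal sketch of the two interfaces in question (based on the usage shown in the project README; exact signatures may differ):

import numpy as np
from SeqMetrics import RegressionMetrics, rmse

true = np.random.random(100)
predicted = np.random.random(100)

# class-based API: one object exposes every metric as a method
er = RegressionMetrics(true, predicted)
print(er.rmse())

# functional API: each metric is a standalone function
print(rmse(true, predicted))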

Additionally, in the README.md of the project, there are multiple related projects mentioned at the bottom that are not included in the paper (forecasting_metrics, hydroeval, etc.). I would like to ask the authors why these are not referenced in the paper.

Response: The updated paper now contains references to most of the packages listed in the README.md.

On the other hand, not all the frameworks mentioned in the paper are listed in the related-projects section of the README.md. Again, minor stuff.

Response: The updated README.md now contains all the frameworks mentioned in the paper.

The app should be better documented, especially the instructions for typing/pasting values. From the code, I can see that a comma-separated list is expected, but this is not clear from the instructions. A simple numpy array does not work, for example. Including fig. 2 and fig. 3 in the documentation and the README file would definitely help.

Response: The app can be used by typing/pasting data that is either comma-separated or space-separated. We have updated the instructions in the app. Furthermore, the two figures have been added to the README file and to the documentation.

Since it is a simple streamlit app that users can deploy themselves without too much hassle, I really think the authors should consider adding instructions on how the app can be deployed locally (let's say that I do not trust streamlit servers with my data but still want the nice interface). This probably requires a bit of refactoring of the repository to include the app in the src directory and the addition of a pip install SeqMetrics[app] option for installing. Additionally, the paper should be updated to reflect that the web interface can be self-hosted.

Response: Launching the streamlit app locally requires installing the requirements, including the streamlit package, and then launching the app. We have explicitly added these steps in both the README and the documentation.
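For illustration, those steps boil down to something like the following (a sketch only; app.py is a placeholder here, not necessarily the entry-point filename used in the repository):

pip install streamlit
streamlit run app.py    # point this at the app's actual entry-point script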

Here two different metrics are tested for a single given input. However, it is not at all clear why these tests actually check that the implementation is correct.

Response: All the unit tests are now run for multiple inputs, i.e. small values, large values (>1e7), values with NaNs, and negative values. For all of these cases, the results are compared against a standard/reference. These standards/references are elaborated in the response to the next comment.

(Important) Implement unit testing against other frameworks whenever possible. The authors are already doing this for certain classification metrics (SeqMetrics/tests/test_cls.py Lines 11 to 12 in f1b8858

from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score 
from sklearn.metrics import confusion_matrix, balanced_accuracy_score 

), however, it should be done for most metrics. Metrics where there is no reference implementation to compare against should be tested for multiple values, and preferably it should be clearer where the reference values come from.

Response: We have modified the unit tests to include references. Overall, the references and corresponding metrics can be categorized into the following groups (an illustrative test for the first group is sketched after this list).

  1. The metrics for which an implementation is available in a standard library. This includes metrics from sklearn and Torchmetrics. All the metrics in SeqMetrics which are also available in these libraries are compared with the corresponding functions of those libraries.
  2. The metrics for which an implementation is available in other libraries such as HydroErr, NeuralHydrology, skill metrics, etc. Enlisting these libraries in the test requirements would increase the number of dependencies, thereby making future development of SeqMetrics difficult, especially where those libraries are no longer maintained. We have therefore installed these libraries in a Colab notebook, calculated the reference values, and then compared SeqMetrics against these reference values in the tests. The Colab notebook is referenced in the tests.
  3. The metrics whose calculation is too obvious to need a reference, such as std_ratio or gmean_diff. For these metrics, a reference is not provided, but their documentation has been improved.
  4. The metrics for which a reference implementation was not available in a Python library/package, but whose code is available in the form of Stack Overflow answers or GitHub gists. For these metrics, we have copied the code (with a reference) into the Colab notebook and calculated the reference values. The tests are then run against these reference values.
  5. The metrics for which no reference implementation was available in Python. For these metrics, we have provided references for the formulas or for an implementation in another language.
  6. Finally, we encountered two metrics for which we could not find any reference. We have removed these metrics for the time being.

(Important) Currently only Python 3.7 is tested, which is officially end-of-life. Either run CI that checks multiple versions of Python or, at the very least, a newer supported version.

Response: We are now testing against 3.7 and 3.12, which are the lowest and highest Python versions supported by this library.

Tests currently only run on Ubuntu and no other major OS. I recommend that the authors either add tests for other OSes or explicitly state which OSes are supported in their README.md.

Response: We are now testing the library on Windows, Ubuntu, and macOS, with Python 3.7 and Python 3.12.

(Important) Because the CI only uses Python 3.7, the actual numpy version being tested is numpy-1.21.6, which is around 2 years old at this point. I see this as an overall consequence of the authors not having included upper/lower bounds on the supported numpy/scipy versions in the requirements file.

Response: We are now testing the library with numpy 1.17 and 1.26.4, which are the lowest and highest numpy versions supported by the library. The setup and requirements files have also been updated to reflect this change.

(Important) Missing community guidelines: are contributions welcome? What should a contribution look like? Etc.

Response: We have added a CONTRIBUTING.rst file highlighting the protocol for potential contributors.

(nitpicking) In fig. 1 of the paper, it is redundant to say "class-based api" both at the bottom and at the top of the figure (same goes for functional). Mentioning this only once should be enough?

Response: We have modified Figure 1 by removing the "class-based api" label at the bottom.

(nitpicking) The overall (pixel-wise) resolution of the figures in the paper is on the lower side and could be increased to help the readability of the text in the figures.

Response: We have added the figures at a higher resolution (900 dpi).

@AtrCheema

@mstimberg Thanks for the prompt response. I have updated the Zenodo archive with the correct title, and the latest release has the updated paper.md file. However, on Zenodo itself, I could not add the affiliation of the first two authors explicitly, probably because Zenodo relies on ROR to search for organizations, and the organization of the first two authors is not yet listed in the ROR database.

@mstimberg

@AtrCheema Thanks for the changes, but as I mentioned earlier, there was no need to do a new release, since only the Zenodo metadata and the paper changed, not the code itself. Now we are in the unfortunate situation where the release is named "latest" rather than a version number. Also, the license needs manual fixing again. I don't know what the easiest solution for you is here – maybe do a v2.0.1 release?
Regarding the affiliations: this is fine. Maybe add a note as suggested by the Zenodo docs: https://help.zenodo.org/docs/deposit/describe-records/descriptions/#note ?

@AtrCheema

@mstimberg Can we still not use Zenodo 12958902? I have added the author affiliations as Notes there.

@mstimberg

Yes, you are right – I stumbled a bit over the fact that on Zenodo it is now displayed as not being the latest version, but this shouldn't be an issue. I'll have a final look over the manuscript and will then hand things over to a topic editor 👍

@mstimberg

@editorialbot generate pdf

@editorialbot

👉📄 Download article proof 📄 View article proof on GitHub 📄 👈

@mstimberg

@AtrCheema I had a final look over the paper and noticed a few minor issues. Please have a look at AtrCheema/SeqMetrics#5 and merge if you agree. After that, I will recommend acceptance and hand over to the topic editor. Thanks for your patience!

@mstimberg

@editorialbot recommend-accept

I'm handing this off to the topic editor now – many thanks again to everyone involved!

@editorialbot

Attempting dry run of processing paper acceptance...

@editorialbot

Reference check summary (note 'MISSING' DOIs are suggestions that need verification):

OK DOIs

- 10.1029/2007JD008972 is OK
- 10.28945/4184 is OK
- 10.21105/joss.04101 is OK
- 10.21105/joss.04050 is OK
- 10.5281/zenodo.2591217 is OK
- 10.3390/hydrology5040066 is OK
- 10.1145/3377811.3380426 is OK
- 10.1145/3460319.3464797 is OK
- 10.48550/arXiv.1912.01703 is OK

MISSING DOIs

- 10.1163/2214-8647_dnp_e612900 may be a valid DOI for title: Keras
- No DOI given, and none found for title: Scikit-learn: Machine learning in Python

INVALID DOIs

- None

@editorialbot

👋 @openjournals/csism-eics, this paper is ready to be accepted and published.

Check final proof 👉📄 Download article

If the paper PDF and the deposit XML files look good in openjournals/joss-papers#5695, then you can now move forward with accepting the submission by compiling again with the command @editorialbot accept

@editorialbot added the recommend-accept label on Jul 29, 2024
@danielskatz

@AtrCheema - As the track editor, I'll next check and proofread this, and let you know what else, if anything, is needed.

@danielskatz

@AtrCheema - I'm suggesting small changes in AtrCheema/SeqMetrics#6. Please merge this, or let me know what you disagree with, then we can proceed. Also, I notice that there is no acknowledgements section, so I want to confirm that you do not have any funding sources or other things that should be mentioned in such a section.

@AtrCheema

@danielskatz Thanks for the suggestions. I have merged the PR. We do not have any funding sources or anything else to be added in an "Acknowledgements" section.

@danielskatz

@editorialbot recommend-accept

@editorialbot

Attempting dry run of processing paper acceptance...

@editorialbot

Reference check summary (note 'MISSING' DOIs are suggestions that need verification):

OK DOIs

- 10.1029/2007JD008972 is OK
- 10.28945/4184 is OK
- 10.21105/joss.04101 is OK
- 10.21105/joss.04050 is OK
- 10.5281/zenodo.2591217 is OK
- 10.3390/hydrology5040066 is OK
- 10.1145/3377811.3380426 is OK
- 10.1145/3460319.3464797 is OK
- 10.48550/arXiv.1912.01703 is OK

MISSING DOIs

- 10.1163/2214-8647_dnp_e612900 may be a valid DOI for title: Keras
- No DOI given, and none found for title: Scikit-learn: Machine learning in Python

INVALID DOIs

- None

@editorialbot

👋 @openjournals/csism-eics, this paper is ready to be accepted and published.

Check final proof 👉📄 Download article

If the paper PDF and the deposit XML files look good in openjournals/joss-papers#5707, then you can now move forward with accepting the submission by compiling again with the command @editorialbot accept

@danielskatz

@editorialbot accept

@editorialbot

Doing it live! Attempting automated processing of paper acceptance...

@editorialbot

Ensure proper citation by uploading a plain text CITATION.cff file to the default branch of your repository.

If using GitHub, a Cite this repository menu will appear in the About section, containing both APA and BibTeX formats. When exported to Zotero using a browser plugin, Zotero will automatically create an entry using the information contained in the .cff file.

You can copy the contents for your CITATION.cff file here:

CITATION.cff

cff-version: "1.2.0"
authors:
- family-names: Rubab
  given-names: Fazila
  orcid: "https://orcid.org/0009-0004-9040-3459"
- family-names: Iftikhar
  given-names: Sara
  orcid: "https://orcid.org/0000-0001-7446-6805"
- family-names: Abbas
  given-names: Ather
  orcid: "https://orcid.org/0000-0002-0031-745X"
doi: 10.5281/zenodo.12958902
message: If you use this software, please cite our article in the
  Journal of Open Source Software.
preferred-citation:
  authors:
  - family-names: Rubab
    given-names: Fazila
    orcid: "https://orcid.org/0009-0004-9040-3459"
  - family-names: Iftikhar
    given-names: Sara
    orcid: "https://orcid.org/0000-0001-7446-6805"
  - family-names: Abbas
    given-names: Ather
    orcid: "https://orcid.org/0000-0002-0031-745X"
  date-published: 2024-07-30
  doi: 10.21105/joss.06450
  issn: 2475-9066
  issue: 99
  journal: Journal of Open Source Software
  publisher:
    name: Open Journals
  start: 6450
  title: "SeqMetrics: a unified library for performance metrics
    calculation in Python"
  type: article
  url: "https://joss.theoj.org/papers/10.21105/joss.06450"
  volume: 9
title: "SeqMetrics: a unified library for performance metrics
  calculation in Python"

If the repository is not hosted on GitHub, a .cff file can still be uploaded to set your preferred citation. Users will be able to manually copy and paste the citation.

Find more information on .cff files here and here.

@editorialbot

🐘🐘🐘 👉 Toot for this paper 👈 🐘🐘🐘

@editorialbot

🚨🚨🚨 THIS IS NOT A DRILL, YOU HAVE JUST ACCEPTED A PAPER INTO JOSS! 🚨🚨🚨

Here's what you must now do:

  1. Check final PDF and Crossref metadata that was deposited 👉 Creating pull request for 10.21105.joss.06450 joss-papers#5709
  2. Wait five minutes, then verify that the paper DOI resolves https://doi.org/10.21105/joss.06450
  3. If everything looks good, then close this review issue.
  4. Party like you just published a paper! 🎉🌈🦄💃👻🤘

Any issues? Notify your editorial technical team...

@editorialbot added the accepted and published labels on Jul 30, 2024
@danielskatz

Congratulations to @AtrCheema (Ather Abbas) and co-authors on your publication!!

And thanks to @FATelarico, @y1my1 and @SkafteNicki for reviewing, and to @mstimberg for editing!
JOSS depends on volunteers and couldn't be successful without you!

@editorialbot

🎉🎉🎉 Congratulations on your paper acceptance! 🎉🎉🎉

If you would like to include a link to your paper from your README use the following code snippets:

Markdown:
[![DOI](https://joss.theoj.org/papers/10.21105/joss.06450/status.svg)](https://doi.org/10.21105/joss.06450)

HTML:
<a style="border-width:0" href="https://doi.org/10.21105/joss.06450">
  <img src="https://joss.theoj.org/papers/10.21105/joss.06450/status.svg" alt="DOI badge" >
</a>

reStructuredText:
.. image:: https://joss.theoj.org/papers/10.21105/joss.06450/status.svg
   :target: https://doi.org/10.21105/joss.06450

This is how it will look in your documentation:

[DOI status badge]

We need your help!

The Journal of Open Source Software is a community-run journal and relies upon volunteer effort. If you'd like to support us, please consider doing either one (or both) of the following:
