
[REVIEW]: cblearn: Comparison-based Machine Learning in Python #6139

Closed
editorialbot opened this issue Dec 11, 2023 · 97 comments
Labels
accepted · published · Python · recommend-accept · review · TeX · Track: 5 (DSAIS) Data Science, Artificial Intelligence, and Machine Learning

Comments

@editorialbot commented Dec 11, 2023

Submitting author: @dekuenstle (David-Elias Künstle)
Repository: https://github.com/cblearn/cblearn
Branch with paper.md (empty if default branch): joss
Version: 0.3.0
Editor: @mbarzegary
Reviewers: @haniyeka, @sherbold, @stsievert
Archive: 10.5281/zenodo.11410206

Status


Status badge code:

HTML: <a href="https://joss.theoj.org/papers/9b9ce7d05cd840818e6626229a17c39f"><img src="https://joss.theoj.org/papers/9b9ce7d05cd840818e6626229a17c39f/status.svg"></a>
Markdown: [![status](https://joss.theoj.org/papers/9b9ce7d05cd840818e6626229a17c39f/status.svg)](https://joss.theoj.org/papers/9b9ce7d05cd840818e6626229a17c39f)

Reviewers and authors:

Please avoid lengthy details of difficulties in the review thread. Instead, please create a new issue in the target repository and link to those issues (especially acceptance-blockers) by leaving comments in the review thread below. (For completists: if the target issue tracker is also on GitHub, linking the review thread in the issue or vice versa will create corresponding breadcrumb trails in the link target.)

Reviewer instructions & questions

@haniyeka & @sherbold & @stsievert, your review will be checklist-based. Each of you will have a separate checklist that you should update while carrying out your review.
First of all, you need to run this command in a separate comment to create the checklist:

@editorialbot generate my checklist

The reviewer guidelines are available here: https://joss.readthedocs.io/en/latest/reviewer_guidelines.html. If you have any questions/concerns, please let @mbarzegary know.

Please start on your review when you are able, and be sure to complete your review within the next six weeks at the very latest.

Checklists

📝 Checklist for @haniyeka

📝 Checklist for @sherbold

📝 Checklist for @stsievert

@editorialbot added the Python, review, TeX, Track: 5 (DSAIS), and waitlisted labels on Dec 11, 2023
@editorialbot

Hello humans, I'm @editorialbot, a robot that can help you with some common editorial tasks.

For a list of things I can do to help you, just type:

@editorialbot commands

For example, to regenerate the paper pdf after making changes in the paper's md or bib files, type:

@editorialbot generate pdf

@editorialbot

Software report:

github.com/AlDanial/cloc v 1.88  T=0.10 s (865.0 files/s, 96390.0 lines/s)
-------------------------------------------------------------------------------
Language                     files          blank        comment           code
-------------------------------------------------------------------------------
Python                          59           1413           2777           3771
TeX                              1             32              2            372
reStructuredText                14            213            166            351
Markdown                         4             77              0            246
YAML                             5             14             25            163
DOS Batch                        1              8              1             26
TOML                             1              1              0             11
make                             1              4              7              9
INI                              1              0              0              6
-------------------------------------------------------------------------------
SUM:                            87           1762           2978           4955
-------------------------------------------------------------------------------


gitinspector failed to run statistical information for the repository

@editorialbot

Wordcount for paper.md is 1277

@editorialbot

👉📄 Download article proof 📄 View article proof on GitHub 📄 👈

@editorialbot

Reference check summary (note 'MISSING' DOIs are suggestions that need verification):

OK DOIs

- 10.48550/arXiv.1912.01666 is OK
- 10.1167/jov.22.14.3985 is OK
- 10.1167/jov.22.13.5 is OK
- 10.1167/jov.22.14.3232 is OK
- 10.1109/MLSP.2012.6349720 is OK
- 10.1167/3.8.5 is OK
- 10.1145/1559755.1559760 is OK
- 10.1167/17.1.37 is OK
- 10.1167/jov.20.4.19 is OK
- 10.1167/jov.20.9.14 is OK
- 10.1167/12.3.19 is OK
- 10.1038/s41562-020-00951-3 is OK
- 10.3758/s13428-019-01285-3 is OK
- 10.1109/TVCG.2014.2346978 is OK
- 10.1145/3380741 is OK
- 10.48550/arXiv.1309.0238 is OK
- 10.1038/s41586-020-2649-2 is OK
- 10.21105/joss.04517 is OK
- 10.48550/arXiv.1511.02254 is OK
- 10.1167/jov.23.9.5388 is OK
- 10.48550/arXiv.2211.16459 is OK

MISSING DOIs

- 10.1109/cvpr46437.2021.00355 may be a valid DOI for title: Enriching ImageNet with Human Similarity Judgments and Psychological Embeddings
- 10.1609/hcomp.v1i1.13079 may be a valid DOI for title: The crowd-median algorithm
- 10.1109/icassp.2018.8461868 may be a valid DOI for title: The landscape of non-convex quadratic feasibility

INVALID DOIs

- None

@mbarzegary

👋🏼 @haniyeka @sherbold @stsievert this is the review thread for the paper. All of our communications will happen here from now on.

As a reviewer, the first step is to create a checklist for your review by entering

@editorialbot generate my checklist

at the top of a new comment in this thread.

These checklists contain the JOSS requirements. As you go over the submission, please check any items that you feel have been satisfied. The first comment in this thread also contains links to the JOSS reviewer guidelines.

The JOSS review is different from most other journals. Our goal is to work with the authors to help them meet our criteria instead of merely passing judgment on the submission. As such, the reviewers are encouraged to submit issues and pull requests on the software repository. When doing so, please mention openjournals/joss-reviews#REVIEW_NUMBER so that a link is created to this thread (and I can keep an eye on what is happening). Please also feel free to comment and ask questions on this thread.

We aim for reviews to be completed within about 2-4 weeks. Please feel free to ping me (@mbarzegary) if you have any questions/concerns.

@mbarzegary

@dekuenstle this is where the review takes place. Please keep an eye out for comments here from the reviewers, as well as any issues they open on your software repository. I recommend you aim to respond to these as soon as possible; you can address them straight away as they come in if you like, to ensure we do not lose track of the reviewers.

First of all, please fix the missing DOIs issue mentioned above.

@dekuenstle

@mbarzegary
Thank you for taking care of my paper as editor. I have added the missing DOIs and am looking forward to the reviews.

@haniyeka commented Dec 11, 2023

Review checklist for @haniyeka

Conflict of interest

  • I confirm that I have read the JOSS conflict of interest (COI) policy and that: I have no COIs with reviewing this work or that any perceived COIs have been waived by JOSS for the purpose of this review.

Code of Conduct

  • I confirm that I read and will adhere to the JOSS code of conduct.

General checks

  • Repository: Is the source code for this software available at https://github.com/cblearn/cblearn?
  • License: Does the repository contain a plain-text LICENSE or COPYING file with the contents of an OSI approved software license?
  • Contribution and authorship: Has the submitting author (@dekuenstle) made major contributions to the software? Does the full list of paper authors seem appropriate and complete?
  • Substantial scholarly effort: Does this submission meet the scope eligibility described in the JOSS guidelines?
  • Data sharing: If the paper contains original data, data are accessible to the reviewers. If the paper contains no original data, please check this item.
  • Reproducibility: If the paper contains original results, results are entirely reproducible by reviewers. If the paper contains no original results, please check this item.
  • Human and animal research: If the paper contains original data research on human subjects or animals, does it comply with JOSS's human participants research policy and/or animal research policy? If the paper contains no such data, please check this item.

Functionality

  • Installation: Does installation proceed as outlined in the documentation?
  • Functionality: Have the functional claims of the software been confirmed?
  • Performance: If there are any performance claims of the software, have they been confirmed? (If there are no claims, please check off this item.)

Documentation

  • A statement of need: Do the authors clearly state what problems the software is designed to solve and who the target audience is?
  • Installation instructions: Is there a clearly-stated list of dependencies? Ideally these should be handled with an automated package management solution.
  • Example usage: Do the authors include examples of how to use the software (ideally to solve real-world analysis problems)?
  • Functionality documentation: Is the core functionality of the software documented to a satisfactory level (e.g., API method documentation)?
  • Automated tests: Are there automated tests or manual steps described so that the functionality of the software can be verified?
  • Community guidelines: Are there clear guidelines for third parties wishing to 1) Contribute to the software 2) Report issues or problems with the software 3) Seek support

Software paper

  • Summary: Has a clear description of the high-level functionality and purpose of the software for a diverse, non-specialist audience been provided?
  • A statement of need: Does the paper have a section titled 'Statement of need' that clearly states what problems the software is designed to solve, who the target audience is, and its relation to other work?
  • State of the field: Do the authors describe how this software compares to other commonly-used packages?
  • Quality of writing: Is the paper well written (i.e., it does not require editing for structure, language, or writing quality)?
  • References: Is the list of references complete, and is everything cited appropriately that should be cited (e.g., papers, datasets, software)? Do references in the text use the proper citation syntax?

@sherbold commented Dec 12, 2023

Review checklist for @sherbold

Conflict of interest

  • I confirm that I have read the JOSS conflict of interest (COI) policy and that: I have no COIs with reviewing this work or that any perceived COIs have been waived by JOSS for the purpose of this review.

Code of Conduct

  • I confirm that I read and will adhere to the JOSS code of conduct.

General checks

  • Repository: Is the source code for this software available at https://github.com/cblearn/cblearn?
  • License: Does the repository contain a plain-text LICENSE or COPYING file with the contents of an OSI approved software license?
  • Contribution and authorship: Has the submitting author (@dekuenstle) made major contributions to the software? Does the full list of paper authors seem appropriate and complete?
  • Substantial scholarly effort: Does this submission meet the scope eligibility described in the JOSS guidelines?
  • Data sharing: If the paper contains original data, data are accessible to the reviewers. If the paper contains no original data, please check this item.
  • Reproducibility: If the paper contains original results, results are entirely reproducible by reviewers. If the paper contains no original results, please check this item.
  • Human and animal research: If the paper contains original data research on human subjects or animals, does it comply with JOSS's human participants research policy and/or animal research policy? If the paper contains no such data, please check this item.

Functionality

  • Installation: Does installation proceed as outlined in the documentation?
  • Functionality: Have the functional claims of the software been confirmed?
  • Performance: If there are any performance claims of the software, have they been confirmed? (If there are no claims, please check off this item.)

Documentation

  • A statement of need: Do the authors clearly state what problems the software is designed to solve and who the target audience is?
  • Installation instructions: Is there a clearly-stated list of dependencies? Ideally these should be handled with an automated package management solution.
  • Example usage: Do the authors include examples of how to use the software (ideally to solve real-world analysis problems)?
  • Functionality documentation: Is the core functionality of the software documented to a satisfactory level (e.g., API method documentation)?
  • Automated tests: Are there automated tests or manual steps described so that the functionality of the software can be verified?
  • Community guidelines: Are there clear guidelines for third parties wishing to 1) Contribute to the software 2) Report issues or problems with the software 3) Seek support

Software paper

  • Summary: Has a clear description of the high-level functionality and purpose of the software for a diverse, non-specialist audience been provided?
  • A statement of need: Does the paper have a section titled 'Statement of need' that clearly states what problems the software is designed to solve, who the target audience is, and its relation to other work?
  • State of the field: Do the authors describe how this software compares to other commonly-used packages?
  • Quality of writing: Is the paper well written (i.e., it does not require editing for structure, language, or writing quality)?
  • References: Is the list of references complete, and is everything cited appropriately that should be cited (e.g., papers, datasets, software)? Do references in the text use the proper citation syntax?

@mbarzegary

Hi @haniyeka and @stsievert
how is your review going?

@mbarzegary

Hi @sherbold,
Thank you for the review. I see you have opened a couple of issues on the software repo. It would be nice if you mentioned the review thread in those issues so that they appear here too.

@haniyeka commented Jan 12, 2024

I've had a close look at the cblearn package and how it fits into the Python ML ecosystem. Overall, the package is a valuable addition to the community, bringing comparison-based algorithms together in a useful toolkit. I appreciate the effort put into this and recommend its acceptance. However, I would like to highlight a few areas for improvement to maximize its potential:

  • Partial compatibility with scikit-learn estimators:
    When I checked the test.yml workflow, it seems that 90 out of 301 tests are being skipped because cblearn's ordinal embedding estimators are not fully compatible with scikit-learn estimators (see the sketch after this list). I'm not sure whether this can be addressed or not. The main issue stems from cblearn's approach of handling triplet comparisons as inputs, which differs from the input data expected by scikit-learn's estimators. This raises some concern about the generalizability of these estimators.
    test.log

  • Installation and documentation:
    When I followed the installation instructions, I couldn't run the examples because the h5py package was missing. I would suggest including an environment.yml/env.yml file in the cblearn package repository. This would improve dependency management and make the setup process smoother.

  • Certain examples don't work:
    I wasn't able to run the ordinal_embedding.ipynb and triplet_formats.ipynb examples, as mentioned in the issues (ordinal_embedding.ipynb doesn't work cblearn/cblearn#68 and triplet_formats.py example does not work cblearn/cblearn#65).

  • GPU computation feature documentation:
    Although the package uses Torch for GPU computation, I couldn't find any information in the documentation about how users can enable this feature. This would be a nice addition.
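
To make the first point concrete, here is a minimal sketch of how scikit-learn's estimator checks are typically invoked. check_estimator is scikit-learn's public API for this; the cblearn.embedding.SOE import path and its n_components parameter are assumptions based on the package documentation:

# Hedged sketch: running scikit-learn's estimator checks on a cblearn estimator.
from sklearn.utils.estimator_checks import check_estimator
from cblearn.embedding import SOE  # assumed: one of cblearn's ordinal embedders

# check_estimator generates artificial (X, y) feature/label data for many of
# its tests; comparison-based estimators expect triplet index arrays instead,
# which is why a subset of these checks is skipped in cblearn's CI.
check_estimator(SOE(n_components=2))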

@dekuenstle

@sherbold @haniyeka Thanks a lot for the reviews and the issues raised!
I will address them within the next week.

@stsievert commented Jan 15, 2024

@mbarzegary I'll have my review finished this week. @dekuenstle my apologies for the delay.

@stsievert commented Jan 16, 2024

Review checklist for @stsievert

Conflict of interest

  • I confirm that I have read the JOSS conflict of interest (COI) policy and that: I have no COIs with reviewing this work or that any perceived COIs have been waived by JOSS for the purpose of this review.

Code of Conduct

  • I confirm that I read and will adhere to the JOSS code of conduct.

General checks

  • Repository: Is the source code for this software available at https://github.com/cblearn/cblearn?
  • License: Does the repository contain a plain-text LICENSE or COPYING file with the contents of an OSI approved software license?
  • Contribution and authorship: Has the submitting author (@dekuenstle) made major contributions to the software? Does the full list of paper authors seem appropriate and complete?
  • Substantial scholarly effort: Does this submission meet the scope eligibility described in the JOSS guidelines?
  • Data sharing: If the paper contains original data, data are accessible to the reviewers. If the paper contains no original data, please check this item.
  • Reproducibility: If the paper contains original results, results are entirely reproducible by reviewers. If the paper contains no original results, please check this item.
  • Human and animal research: If the paper contains original data research on human subjects or animals, does it comply with JOSS's human participants research policy and/or animal research policy? If the paper contains no such data, please check this item.

Functionality

  • Installation: Does installation proceed as outlined in the documentation?
  • Functionality: Have the functional claims of the software been confirmed?
  • Performance: If there are any performance claims of the software, have they been confirmed? (If there are no claims, please check off this item.)

Documentation

  • A statement of need: Do the authors clearly state what problems the software is designed to solve and who the target audience is?
  • Installation instructions: Is there a clearly-stated list of dependencies? Ideally these should be handled with an automated package management solution.
  • Example usage: Do the authors include examples of how to use the software (ideally to solve real-world analysis problems)?
  • Functionality documentation: Is the core functionality of the software documented to a satisfactory level (e.g., API method documentation)?
  • Automated tests: Are there automated tests or manual steps described so that the functionality of the software can be verified?
  • Community guidelines: Are there clear guidelines for third parties wishing to 1) Contribute to the software 2) Report issues or problems with the software 3) Seek support

Software paper

  • Summary: Has a clear description of the high-level functionality and purpose of the software for a diverse, non-specialist audience been provided?
  • A statement of need: Does the paper have a section titled 'Statement of need' that clearly states what problems the software is designed to solve, who the target audience is, and its relation to other work?
  • State of the field: Do the authors describe how this software compares to other commonly-used packages?
  • Quality of writing: Is the paper well written (i.e., it does not require editing for structure, language, or writing quality)?
  • References: Is the list of references complete, and is everything cited appropriately that should be cited (e.g., papers, datasets, software)? Do references in the text use the proper citation syntax?

@stsievert

@editorialbot generate pdf

@editorialbot

👉📄 Download article proof 📄 View article proof on GitHub 📄 👈

@dekuenstle commented Jan 16, 2024

@sherbold @haniyeka
Thank you for your detailed feedback in this thread and in the issues.
I am happy that you consider my submission a valuable contribution to the community.
Your reviews have helped me to improve the documentation significantly.
Please find my point-by-point response to your issues below.

@stsievert, please take your time with the review, and don't feel pressured by this.
I'm getting ahead of myself and explaining the changes based on the previous reviews so that you don't have to deal with some issues again (e.g., the broken example). You can find the original submission in the joss branch. The changes are in a separate revision branch, forked from main.

  • Scikit-learn compatibility.

    It is true that cblearn skips many of sklearn's estimator tests.
    However, this is less due to an incompatibility of the estimators with the
    API than to the fact that sklearn's tests generate artificial data that is incompatible with comparison data.
    I extended both the User Guide and the Contributor Guide accordingly (a sketch of the array-based usage follows at the end of this comment):

    All estimators in this library are compatible with the scikit-learn API and can be used in scikit-learn pipelines if comparisons are represented in the array format. The scikit-learn compatibility is achieved by implementing the fit, predict, and score methods of the BaseEstimator class. [...]

    scikit-learn provides a test suite that should ensure the compatibility of estimators. We use this test suite to test our estimators, too, but have to skip some tests because they use artificial data that is incompatible with comparison data. Typically, cblearn estimators are compatible with scikit-learn estimators if comparisons are represented as numpy arrays. From an API perspective, comparison arrays look like discrete features and class labels; however, not all discrete features and class labels are valid comparisons. [...]

  • Extra dependencies (h5py, rpy2/wrapper, ...)

    r_wrapper installation instructions missing cblearn/cblearn#67

    The most important dependencies of cblearn are installed by pip. However, more specialized functions require very large packages (pytorch) or, depending on the platform, additional software outside of Python (e.g., an R interpreter).
    These extra dependencies must be installed explicitly so as not to jeopardize platform compatibility.
    I do not want to tie the user to, for example, conda, and therefore do not provide an environment.yml. I know that this is a tradeoff in user-friendliness, so I have extended the installation instructions instead.

  • Certain Examples don't work

    ordinal_embedding.ipynb doesn't work cblearn/cblearn#68
    triplet_formats.py example does not work cblearn/cblearn#65
    r_wrapper installation instructions missing cblearn/cblearn#67

    I apologize for the inconvenience and have fixed the problems.
    To prevent such errors from recurring, the examples are now executed when the documentation is built in the CD pipeline
    (diff).

  • Contributor install broken

    Contributor installation instructions broken cblearn/cblearn#66

    I fixed the typo.

  • GPU Computation Feature Documentation

    I added a section to the user guide that explains how and when to use the feature.

You can find the diff here:
cblearn/cblearn#69
And the updated documentation is here:
https://cblearn.readthedocs.io/en/revision/
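
To illustrate the array format described above, here is a minimal, hedged sketch of the triplet workflow. It follows the cblearn documentation, but the helper make_random_triplets, the SOE estimator, and their exact parameters are assumptions taken from those docs rather than verified against this release:

# Hedged sketch of the comparison-array workflow (names assumed from the
# cblearn docs, not guaranteed for v0.3.0).
import numpy as np
from cblearn.datasets import make_random_triplets
from cblearn.embedding import SOE

rng = np.random.default_rng(0)
points = rng.normal(size=(20, 2))  # ground-truth 2D coordinates

# Triplets are an (n_triplets, 3) integer array of object indices; each
# row (i, j, k) encodes "object i is more similar to j than to k".
triplets = make_random_triplets(points, size=500, result_format="list-order")

# Because the triplets are plain numpy arrays, the estimator can follow
# scikit-learn's fit/transform conventions and sit in sklearn pipelines.
embedding = SOE(n_components=2).fit_transform(triplets)
print(embedding.shape)  # (20, 2)

Opt-in extras (such as the torch backend for GPU use) are installed explicitly, as described in the extended installation instructions.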

@editorialbot

Done! archive is now 10.5281/zenodo.11410206

@dekuenstle

@mbarzegary done!

@mbarzegary

@editorialbot recommend-accept

@editorialbot

Attempting dry run of processing paper acceptance...

@editorialbot

Reference check summary (note 'MISSING' DOIs are suggestions that need verification):

OK DOIs

- 10.1038/s41598-024-54368-3 is OK
- 10.31234/osf.io/c42yr is OK
- 10.21105/joss.04517 is OK
- 10.48550/arXiv.1912.01666 is OK
- 10.1167/jov.22.14.3985 is OK
- 10.1167/jov.22.13.5 is OK
- 10.1167/jov.22.14.3232 is OK
- 10.1109/cvpr46437.2021.00355 is OK
- 10.1109/MLSP.2012.6349720 is OK
- 10.1167/3.8.5 is OK
- 10.1145/1559755.1559760 is OK
- 10.1167/17.1.37 is OK
- 10.1167/jov.20.4.19 is OK
- 10.1167/jov.20.9.14 is OK
- 10.1167/12.3.19 is OK
- 10.1145/3620665.3640366 is OK
- 10.1038/s41592-019-0686-2 is OK
- 10.1038/s41586-020-2649-2 is OK
- 10.1038/s41562-020-00951-3 is OK
- 10.3758/s13428-019-01285-3 is OK
- 10.1109/TVCG.2014.2346978 is OK
- 10.1145/3380741 is OK
- 10.48550/arXiv.1309.0238 is OK
- 10.1038/s41586-020-2649-2 is OK
- 10.1609/hcomp.v1i1.13079 is OK
- 10.1109/icassp.2018.8461868 is OK
- 10.21105/joss.04517 is OK
- 10.48550/arXiv.1511.02254 is OK
- 10.1167/jov.23.9.5388 is OK
- 10.48550/arXiv.2211.16459 is OK

MISSING DOIs

- No DOI given, and none found for title: metric-learn: Metric Learning Algorithms in Python
- No DOI given, and none found for title: NEXT: A System for Real-World Development, Evaluat...
- No DOI given, and none found for title: Finite Sample Prediction and Recovery Bounds for O...
- No DOI given, and none found for title: Generalized Non-metric Multidimensional Scaling
- No DOI given, and none found for title: Local ordinal embedding
- No DOI given, and none found for title: Adam: A Method for Stochastic Optimization
- 10.1609/hcomp.v1i1.13079 may be a valid DOI for title: The crowd-median algorithm
- No DOI given, and none found for title: Pytorch: An imperative style, high-performance dee...
- No DOI given, and none found for title: Adaptively learning the crowd kernel
- No DOI given, and none found for title: Comparison-Based Random Forests
- No DOI given, and none found for title: Scikit-learn: Machine Learning in Python
- No DOI given, and none found for title: Foundations of Comparison-Based Hierarchical Clust...
- No DOI given, and none found for title: Near-optimal comparison based clustering
- No DOI given, and none found for title: Multiview Triplet Embedding: Learning Attributes i...
- No DOI given, and none found for title: Learning combinatorial functions from pairwise com...
- No DOI given, and none found for title: Scaling up ordinal embedding: A landmark approach
- No DOI given, and none found for title: Landmark Ordinal Embedding
- No DOI given, and none found for title: NEXT: A System for Real-World Development, Evaluat...

INVALID DOIs

- None

@editorialbot

⚠️ Error preparing paper acceptance. The generated XML metadata file is invalid.

ID ref-harris_array_2020 already defined
ID ref-heikinheimo2013crowd already defined

@mbarzegary

@dekuenstle you have duplicate entries for two references (harris_array_2020 and heikinheimo2013crowd). Please remove the redundant ones so that we can proceed with the acceptance procedure. Also, for the latter, 10.1609/hcomp.v1i1.13079 seems to be a valid DOI; please check this too.

@dekuenstle

@mbarzegary I removed the duplicates. One of the heikinheimo2013crowd entries already contained the DOI.

@mbarzegary

@editorialbot recommend-accept

@editorialbot

Attempting dry run of processing paper acceptance...

@editorialbot

Reference check summary (note 'MISSING' DOIs are suggestions that need verification):

OK DOIs

- 10.1038/s41598-024-54368-3 is OK
- 10.31234/osf.io/c42yr is OK
- 10.21105/joss.04517 is OK
- 10.48550/arXiv.1912.01666 is OK
- 10.1167/jov.22.14.3985 is OK
- 10.1167/jov.22.13.5 is OK
- 10.1167/jov.22.14.3232 is OK
- 10.1109/cvpr46437.2021.00355 is OK
- 10.1109/MLSP.2012.6349720 is OK
- 10.1167/3.8.5 is OK
- 10.1145/1559755.1559760 is OK
- 10.1167/17.1.37 is OK
- 10.1167/jov.20.4.19 is OK
- 10.1167/jov.20.9.14 is OK
- 10.1167/12.3.19 is OK
- 10.1145/3620665.3640366 is OK
- 10.1038/s41592-019-0686-2 is OK
- 10.1038/s41586-020-2649-2 is OK
- 10.1038/s41562-020-00951-3 is OK
- 10.3758/s13428-019-01285-3 is OK
- 10.1109/TVCG.2014.2346978 is OK
- 10.1145/3380741 is OK
- 10.48550/arXiv.1309.0238 is OK
- 10.1609/hcomp.v1i1.13079 is OK
- 10.1109/icassp.2018.8461868 is OK
- 10.21105/joss.04517 is OK
- 10.48550/arXiv.1511.02254 is OK
- 10.1167/jov.23.9.5388 is OK
- 10.48550/arXiv.2211.16459 is OK

MISSING DOIs

- No DOI given, and none found for title: metric-learn: Metric Learning Algorithms in Python
- No DOI given, and none found for title: NEXT: A System for Real-World Development, Evaluat...
- No DOI given, and none found for title: Finite Sample Prediction and Recovery Bounds for O...
- No DOI given, and none found for title: Generalized Non-metric Multidimensional Scaling
- No DOI given, and none found for title: Local ordinal embedding
- No DOI given, and none found for title: Adam: A Method for Stochastic Optimization
- No DOI given, and none found for title: Pytorch: An imperative style, high-performance dee...
- No DOI given, and none found for title: Adaptively learning the crowd kernel
- No DOI given, and none found for title: Comparison-Based Random Forests
- No DOI given, and none found for title: Scikit-learn: Machine Learning in Python
- No DOI given, and none found for title: Foundations of Comparison-Based Hierarchical Clust...
- No DOI given, and none found for title: Near-optimal comparison based clustering
- No DOI given, and none found for title: Multiview Triplet Embedding: Learning Attributes i...
- No DOI given, and none found for title: Learning combinatorial functions from pairwise com...
- No DOI given, and none found for title: Scaling up ordinal embedding: A landmark approach
- No DOI given, and none found for title: Landmark Ordinal Embedding
- No DOI given, and none found for title: NEXT: A System for Real-World Development, Evaluat...

INVALID DOIs

- None

@editorialbot

👋 @openjournals/dsais-eics, this paper is ready to be accepted and published.

Check final proof 👉📄 Download article

If the paper PDF and the deposit XML files look good in openjournals/joss-papers#5491, then you can now move forward with accepting the submission by compiling again with the command @editorialbot accept

@editorialbot added the recommend-accept label on Jun 12, 2024
@crvernon commented Jun 12, 2024

🔍 checking out the following:

  • reviewer checklists are completed or addressed
  • version set
  • archive set
  • archive names (including order) and title in archive match those specified in the paper
  • archive uses the same license as the repo and is OSI approved as open source
  • archive DOI and version match or redirect to those set by editor in review thread
  • paper is error free - grammar and typos
  • paper is error free - test links in the paper and bib
  • paper is error free - refs preserve capitalization where necessary
  • paper is error free - no invalid refs without justification

@crvernon

@editorialbot generate pdf

@editorialbot

👉📄 Download article proof 📄 View article proof on GitHub 📄 👈

@crvernon

@editorialbot check references

@editorialbot

Reference check summary (note 'MISSING' DOIs are suggestions that need verification):

OK DOIs

- 10.1038/s41598-024-54368-3 is OK
- 10.31234/osf.io/c42yr is OK
- 10.21105/joss.04517 is OK
- 10.48550/arXiv.1912.01666 is OK
- 10.1167/jov.22.14.3985 is OK
- 10.1167/jov.22.13.5 is OK
- 10.1167/jov.22.14.3232 is OK
- 10.1109/cvpr46437.2021.00355 is OK
- 10.1109/MLSP.2012.6349720 is OK
- 10.1167/3.8.5 is OK
- 10.1145/1559755.1559760 is OK
- 10.1167/17.1.37 is OK
- 10.1167/jov.20.4.19 is OK
- 10.1167/jov.20.9.14 is OK
- 10.1167/12.3.19 is OK
- 10.1145/3620665.3640366 is OK
- 10.1038/s41592-019-0686-2 is OK
- 10.1038/s41586-020-2649-2 is OK
- 10.1038/s41562-020-00951-3 is OK
- 10.3758/s13428-019-01285-3 is OK
- 10.1109/TVCG.2014.2346978 is OK
- 10.1145/3380741 is OK
- 10.48550/arXiv.1309.0238 is OK
- 10.1609/hcomp.v1i1.13079 is OK
- 10.1109/icassp.2018.8461868 is OK
- 10.21105/joss.04517 is OK
- 10.48550/arXiv.1511.02254 is OK
- 10.1167/jov.23.9.5388 is OK
- 10.48550/arXiv.2211.16459 is OK

MISSING DOIs

- No DOI given, and none found for title: metric-learn: Metric Learning Algorithms in Python
- No DOI given, and none found for title: NEXT: A System for Real-World Development, Evaluat...
- No DOI given, and none found for title: Finite Sample Prediction and Recovery Bounds for O...
- No DOI given, and none found for title: Generalized Non-metric Multidimensional Scaling
- No DOI given, and none found for title: Local ordinal embedding
- No DOI given, and none found for title: Adam: A Method for Stochastic Optimization
- No DOI given, and none found for title: Pytorch: An imperative style, high-performance dee...
- No DOI given, and none found for title: Adaptively learning the crowd kernel
- No DOI given, and none found for title: Comparison-Based Random Forests
- No DOI given, and none found for title: Scikit-learn: Machine Learning in Python
- No DOI given, and none found for title: Foundations of Comparison-Based Hierarchical Clust...
- No DOI given, and none found for title: Near-optimal comparison based clustering
- No DOI given, and none found for title: Multiview Triplet Embedding: Learning Attributes i...
- No DOI given, and none found for title: Learning combinatorial functions from pairwise com...
- No DOI given, and none found for title: Scaling up ordinal embedding: A landmark approach
- No DOI given, and none found for title: Landmark Ordinal Embedding
- No DOI given, and none found for title: NEXT: A System for Real-World Development, Evaluat...

INVALID DOIs

- None

@crvernon

Nice work on this one @dekuenstle, very clean.

@crvernon

@editorialbot accept

@editorialbot

Doing it live! Attempting automated processing of paper acceptance...

@editorialbot

Ensure proper citation by uploading a plain text CITATION.cff file to the default branch of your repository.

If using GitHub, a Cite this repository menu will appear in the About section, containing both APA and BibTeX formats. When exported to Zotero using a browser plugin, Zotero will automatically create an entry using the information contained in the .cff file.

You can copy the contents for your CITATION.cff file here:

CITATION.cff

cff-version: "1.2.0"
authors:
- family-names: Künstle
  given-names: David-Elias
  orcid: "https://orcid.org/0000-0001-5507-3731"
- family-names: Luxburg
  given-names: Ulrike
  name-particle: von
contact:
- family-names: Künstle
  given-names: David-Elias
  orcid: "https://orcid.org/0000-0001-5507-3731"
doi: 10.5281/zenodo.11410206
message: If you use this software, please cite our article in the
  Journal of Open Source Software.
preferred-citation:
  authors:
  - family-names: Künstle
    given-names: David-Elias
    orcid: "https://orcid.org/0000-0001-5507-3731"
  - family-names: Luxburg
    given-names: Ulrike
    name-particle: von
  date-published: 2024-06-12
  doi: 10.21105/joss.06139
  issn: 2475-9066
  issue: 98
  journal: Journal of Open Source Software
  publisher:
    name: Open Journals
  start: 6139
  title: "cblearn: Comparison-based Machine Learning in Python"
  type: article
  url: "https://joss.theoj.org/papers/10.21105/joss.06139"
  volume: 9
title: "cblearn: Comparison-based Machine Learning in Python"

If the repository is not hosted on GitHub, a .cff file can still be uploaded to set your preferred citation. Users will be able to manually copy and paste the citation.

Find more information on .cff files here and here.

@editorialbot

🐘🐘🐘 👉 Toot for this paper 👈 🐘🐘🐘

@editorialbot

🚨🚨🚨 THIS IS NOT A DRILL, YOU HAVE JUST ACCEPTED A PAPER INTO JOSS! 🚨🚨🚨

Here's what you must now do:

  1. Check final PDF and Crossref metadata that was deposited 👉 Creating pull request for 10.21105.joss.06139 joss-papers#5492
  2. Wait five minutes, then verify that the paper DOI resolves https://doi.org/10.21105/joss.06139
  3. If everything looks good, then close this review issue.
  4. Party like you just published a paper! 🎉🌈🦄💃👻🤘

Any issues? Notify your editorial technical team...

@editorialbot added the accepted and published labels on Jun 12, 2024
@crvernon

🥳 Congratulations on your new publication @dekuenstle! Many thanks to @mbarzegary for editing and to @haniyeka, @sherbold, and @stsievert for your time, hard work, and expertise!! JOSS wouldn't be able to function or succeed without your efforts.

Please consider becoming a reviewer for JOSS if you are not already: https://reviewers.joss.theoj.org/join

@editorialbot

🎉🎉🎉 Congratulations on your paper acceptance! 🎉🎉🎉

If you would like to include a link to your paper from your README use the following code snippets:

Markdown:
[![DOI](https://joss.theoj.org/papers/10.21105/joss.06139/status.svg)](https://doi.org/10.21105/joss.06139)

HTML:
<a style="border-width:0" href="https://doi.org/10.21105/joss.06139">
  <img src="https://joss.theoj.org/papers/10.21105/joss.06139/status.svg" alt="DOI badge" >
</a>

reStructuredText:
.. image:: https://joss.theoj.org/papers/10.21105/joss.06139/status.svg
   :target: https://doi.org/10.21105/joss.06139

This is how it will look in your documentation:

[DOI badge]

We need your help!

The Journal of Open Source Software is a community-run journal and relies upon volunteer effort. If you'd like to support us, please consider doing either one (or both) of the following:

@dekuenstle

@crvernon Thanks a lot for finalizing & publishing!

I would also like to thank @mbarzegary, @haniyeka, @sherbold, and @stsievert from my side. Your review process has really improved the paper, code, and documentation significantly!
