
[REVIEW]: Clustergram: Visualization and diagnostics for cluster analysis #5240

Closed · editorialbot opened this issue on Mar 10, 2023 · 72 comments

Labels: accepted · Jupyter Notebook · published (Papers published in JOSS) · Python · recommend-accept (Papers recommended for acceptance in JOSS) · review · TeX · Track: 5 (DSAIS) Data Science, Artificial Intelligence, and Machine Learning

Comments

@editorialbot
Collaborator

editorialbot commented Mar 10, 2023

Submitting author: @martinfleis (Martin Fleischmann)
Repository: https://github.com/martinfleis/clustergram
Branch with paper.md (empty if default branch):
Version: v0.8.0
Editor: @csoneson
Reviewers: @csadorf, @gagolews
Archive: 10.5281/zenodo.8202396

Status


Status badge code:

HTML: <a href="https://joss.theoj.org/papers/f3bab5d7bbdf2ed70dbea435a616ad18"><img src="https://joss.theoj.org/papers/f3bab5d7bbdf2ed70dbea435a616ad18/status.svg"></a>
Markdown: [![status](https://joss.theoj.org/papers/f3bab5d7bbdf2ed70dbea435a616ad18/status.svg)](https://joss.theoj.org/papers/f3bab5d7bbdf2ed70dbea435a616ad18)

Reviewers and authors:

Please avoid lengthy details of difficulties in the review thread. Instead, please create a new issue in the target repository and link to those issues (especially acceptance-blockers) by leaving comments in the review thread below. (For completists: if the target issue tracker is also on GitHub, linking the review thread in the issue or vice versa will create corresponding breadcrumb trails in the link target.)

Reviewer instructions & questions

@csadorf & @gagolews, your review will be checklist based. Each of you will have a separate checklist that you should update when carrying out your review.
First of all you need to run this command in a separate comment to create the checklist:

@editorialbot generate my checklist

The reviewer guidelines are available here: https://joss.readthedocs.io/en/latest/reviewer_guidelines.html. Any questions/concerns please let @csoneson know.

Please start on your review when you are able, and be sure to complete your review in the next six weeks at the very latest.

Checklists

📝 Checklist for @csadorf

📝 Checklist for @gagolews

@editorialbot added the Jupyter Notebook, Python, review, TeX, and Track: 5 (DSAIS) Data Science, Artificial Intelligence, and Machine Learning labels on Mar 10, 2023
@editorialbot
Collaborator Author

Hello humans, I'm @editorialbot, a robot that can help you with some common editorial tasks.

For a list of things I can do to help you, just type:

@editorialbot commands

For example, to regenerate the paper pdf after making changes in the paper's md or bib files, type:

@editorialbot generate pdf

@editorialbot
Collaborator Author

Software report:

github.com/AlDanial/cloc v 1.88  T=0.22 s (149.4 files/s, 73588.6 lines/s)
-------------------------------------------------------------------------------
Language                     files          blank        comment           code
-------------------------------------------------------------------------------
SVG                              3              0             63          10499
Python                           4            351            441           1277
Markdown                         7            225              0            702
TeX                              1             13              0            200
Jupyter Notebook                 5              0           1930            179
YAML                             8             21              0            173
TOML                             1              7              0             49
CSS                              1             15              0             48
DOS Batch                        1              8              1             26
make                             1              4              7              9
reStructuredText                 1              4              4              2
-------------------------------------------------------------------------------
SUM:                            33            648           2446          13164
-------------------------------------------------------------------------------


gitinspector failed to run statistical information for the repository

@editorialbot
Collaborator Author

Wordcount for paper.md is 1361

@editorialbot
Collaborator Author

Reference check summary (note 'MISSING' DOIs are suggestions that need verification):

OK DOIs

- 10/ggj45f is OK
- 10.1016/j.habitatint.2022.102641 is OK
- 10.1038/s41597-022-01640-8 is OK
- 10.1109/MCSE.2007.55 is OK
- 10.1158/1538-7445.AM2022-5038 is OK
- 10.1016/j.dib.2022.108335 is OK
- 10/ghh97z is OK
- 10.1007/BF02915278 is OK
- 10.1016/j.compenvurbsys.2022.101802 is OK
- 10.1115/IPC2022-87145 is OK
- 10.1016/j.jag.2022.102911 is OK
- 10.5281/zenodo.3960218 is OK
- 10.1007/s12061-022-09490-y is OK

MISSING DOIs

- 10.3390/info11040193 may be a valid DOI for title: Machine Learning in Python: Main developments and technology trends in data science, machine learning, and artificial intelligence

INVALID DOIs

- None

@csoneson
Member

👋🏼 @martinfleis, @csadorf, @gagolews - this is the review thread for the submission. All of our communications will happen here from now on.

Please check the post at the top of the issue for instructions on how to generate your own review checklist. As you go over the submission, please check any items that you feel have been satisfied. There are also links to the JOSS reviewer guidelines.

The JOSS review is different from most other journals. Our goal is to work with the authors to help them meet our criteria instead of merely passing judgment on the submission. As such, the reviewers are encouraged to submit issues directly in the software repository. If you do so, please mention this thread so that a link is created (and I can keep an eye on what is happening). Please also feel free to comment and ask questions in this thread. It is often easier to post comments/questions/suggestions as you come across them instead of waiting until you've reviewed the entire package.

Please feel free to ping me (@csoneson) if you have any questions or concerns. Thanks!

@editorialbot
Collaborator Author

👉📄 Download article proof 📄 View article proof on GitHub 📄 👈

@gagolews

I'll complete the review in 2-3 weeks.

@gagolews

gagolews commented Mar 27, 2023

Review checklist for @gagolews

Conflict of interest

  • I confirm that I have read the JOSS conflict of interest (COI) policy and that: I have no COIs with reviewing this work or that any perceived COIs have been waived by JOSS for the purpose of this review.

Code of Conduct

General checks

  • Repository: Is the source code for this software available at https://github.com/martinfleis/clustergram?

  • License: Does the repository contain a plain-text LICENSE file with the contents of an OSI approved software license?

  • Contribution and authorship: Has the submitting author (@martinfleis) made major contributions to the software? Does the full list of paper authors seem appropriate and complete?

  • Substantial scholarly effort: Does this submission meet the scope eligibility described in the JOSS guidelines?

    I have some doubts about this, see below...

  • Data sharing: If the paper contains original data, data are accessible to the reviewers. If the paper contains no original data, please check this item.

  • Reproducibility: If the paper contains original results, results are entirely reproducible by reviewers. If the paper contains no original results, please check this item.

  • Human and animal research: If the paper contains original data or research on human subjects or animals, does it comply with JOSS's human participants research policy and/or animal research policy? If the paper contains no such data, please check this item.

Functionality

  • Installation: Does installation proceed as outlined in the documentation?
  • Functionality: Have the functional claims of the software been confirmed?
  • Performance: If there are any performance claims of the software, have they been confirmed? (If there are no claims, please check off this item.)

Documentation

  • A statement of need: Do the authors clearly state what problems the software is designed to solve and who the target audience is?
  • Installation instructions: Is there a clearly-stated list of dependencies? Ideally these should be handled with an automated package management solution.
  • Example usage: Do the authors include examples of how to use the software (ideally to solve real-world analysis problems).
  • Functionality documentation: Is the core functionality of the software documented to a satisfactory level (e.g., API method documentation)?
  • Automated tests: Are there automated tests or manual steps described so that the functionality of the software can be verified?
  • Community guidelines: Are there clear guidelines for third parties wishing to 1) Contribute to the software 2) Report issues or problems with the software 3) Seek support

Software paper

Please refer to my remarks below.

  • Summary: Has a clear description of the high-level functionality and purpose of the software for a diverse, non-specialist audience been provided? This could be written better; see my remarks below.
  • A statement of need: Does the paper have a section titled 'Statement of need' that clearly states what problems the software is designed to solve, who the target audience is, and its relation to other work? This could be written better; see my remarks below.
  • State of the field: Do the authors describe how this software compares to other commonly-used packages?
  • Quality of writing: Is the paper well written (i.e., it does not require editing for structure, language, or writing quality)?
  • References: Is the list of references complete, and is everything cited appropriately that should be cited (e.g., papers, datasets, software)? Do references in the text use the proper citation syntax?

Comments

An interesting contribution, but with a "but".

Clustergrams are rather simple – merely a dozen or so lines of plotting code (https://github.com/martinfleis/clustergram/blob/main/clustergram/clustergram.py) – therefore the scope of the package is quite limited... I am unsure if it is not a "missed opportunity" for something more substantial.

I wonder whether the Python community would benefit more from the author contributing this functionality to the seaborn package, for example?

Or from adding more diagnostic tools for clustering, so that the package's scope is not limited to a single visualisation type?

It seems that there is an implementation of clustergrams in the EcotoneFinder package for R (see https://cran.r-project.org/web/packages/EcotoneFinder/index.html), in the form of a single function within a much larger package; this should probably be mentioned as well.

In this journal's scope and submission requirements, I read that "Minor 'utility' packages, including 'thin' API clients, and single-function packages are not acceptable." – I am not completely convinced that this is exactly the case here...

Other remarks:

  • p. 1: "research often tries to..." - a personification.

  • p. 1: "the algorithm itself will not determine the optimal number of clusters" - internal cluster validity measures aim to do that (discuss briefly), but also see my critique in https://doi.org/10.1016/j.ins.2021.10.004.

  • p. 1: "silhouette analysis" – critique in the aforementioned paper.

  • p. 1: "the number of seed locations needs to be defied by a researcher and is usually unknown" – I am afraid it does not work precisely that way.

    "The" k-means algorithm is a heuristic to find a (local) minimum of a specific objective function (the within-cluster sum of squares, WCSS).
    By restarting it from multiple initial solutions, we increase the likelihood of pinpointing the true (global) minimum (although no guarantee exists).
    This is why we usually set the number of restarts to 10–100 and choose the one that corresponds to the smallest WCSS.

    I recommend that the "seed selection" problem not be discussed here.

  • p. 1: "In figures Figure 1 and Figure 2" (same on p. 2)

  • p. 2: Figure 1 - fractional number of clusters on the x-axis label looks odd.

  • p. 2: "mean of means" – componentwise arithmetic mean or centroids

  • p. 2: "that does not necessarily provide the best overview of the behavior" – ??

  • p. 2: "there is another option weighted the means" – ??

  • p. 3: Silhouette score, C-H index, DB-index – how about other metrics?

  • p. 4: "Since the first release of clustergram the package was used" – is
    such a self-promotion necessary in a research paper?
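
For illustration, a minimal sketch of the restart logic mentioned in the k-means remark above, using scikit-learn's real n_init parameter and inertia_ attribute on synthetic data:

from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

# Synthetic data with four natural groups.
X, _ = make_blobs(n_samples=300, centers=4, random_state=0)

# n_init=25 restarts k-means from 25 random initial solutions and keeps
# the run with the smallest within-cluster sum of squares (WCSS).
km = KMeans(n_clusters=4, n_init=25, random_state=0).fit(X)
print(km.inertia_)  # WCSS of the best of the 25 runs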

@martinfleis

Thank you for the comments @gagolews!

I am fully aware that the size of the submission is borderline and I was assuming it would undergo a scope query prior to the actual review. I am copying below the note I made in the submission regarding this.

It is not exceeding the scope limits by a lot, but based on my experience, this should pass the possible query-scope process if you decide it needs to go through it. It is being used in papers as-is, even by people I've never heard of before in fields far from mine, so I believe that also confirms the scientific benefits (all cited in the paper).

So I am happy if @csoneson requests a scope check from the editorial team here.

As said above, I believe this is borderline on the side of large enough, but I may be wrong.

Clustergrams are rather simple – merely a dozen or so lines of plotting code https://github.com/martinfleis/clustergram/blob/main/clustergram/clustergram.py – therefore the scope of the package is quite limited

I believe that the whole package should be considered when assessing the scope, not only the plotting part. It is not that straightforward to create the data that needs to be plotted from the various clustering engines. I think that this processing part is as valuable as the plotting segments (either static or interactive).

I wonder if the Python community would benefit more from the current author's contribution to the seaborn package, for example?

You'll probably agree that seaborn is not the ideal candidate. I'd be happy to contribute the code elsewhere but I haven't found a good home for it. Hence it lives alone. If someone has good suggestions, I am all ears. It would certainly be better than maintaining this as a small package on its own.

I am unsure if it is not a "missed opportunity" for something more substantial. ... Or adding more diagnostic tools for clustering so that its scope is not limited to a single visualisation type?

As much as I'd love to, the time I can spend on this is fairly limited, so if the current size of the package does not fulfil the scope limits, it will just not be published.

It seems that there is an implementation of the clustergrams in the EcotoneFinder

Thanks, I wasn't aware of it. It is a bit tough to assess, though; I haven't found any documentation or repository apart from what is on CRAN.

is such a self-promotion necessary in a research paper?

One of the points in the evaluation of substantive scholarly effort is "Whether the software has already been cited in academic papers." This shows evidence for that point.

@gagolews

@martinfleis I'm not going to be too stubborn – the package itself is quite nice. Let's see what others say about it...

@csadorf

csadorf commented Mar 31, 2023

I am sorry, but I was a bit swamped the past two weeks; I plan on doing the review in the middle of next week.

@csoneson
Member

👋🏻 @csadorf - just wanted to check whether you had time to take a first look at this submission. Thanks!

@csoneson
Member

Ping @csadorf

👋🏻 @csadorf - just wanted to check whether you had time to take a first look at this submission. Thanks!

@csadorf

csadorf commented May 13, 2023

@csoneson Very sorry about the delay – I'll have a look at it this weekend.

@csoneson
Member

@csadorf - no worries, thank you!

@csadorf

csadorf commented May 23, 2023

Review checklist for @csadorf

Conflict of interest

  • I confirm that I have read the JOSS conflict of interest (COI) policy and that: I have no COIs with reviewing this work or that any perceived COIs have been waived by JOSS for the purpose of this review.

Code of Conduct

General checks

  • Repository: Is the source code for this software available at https://github.com/martinfleis/clustergram?
  • License: Does the repository contain a plain-text LICENSE file with the contents of an OSI approved software license?
  • Contribution and authorship: Has the submitting author (@martinfleis) made major contributions to the software? Does the full list of paper authors seem appropriate and complete?
  • Substantial scholarly effort: Does this submission meet the scope eligibility described in the JOSS guidelines?
  • Data sharing: If the paper contains original data, data are accessible to the reviewers. If the paper contains no original data, please check this item.
  • Reproducibility: If the paper contains original results, results are entirely reproducible by reviewers. If the paper contains no original results, please check this item.
  • Human and animal research: If the paper contains original data or research on human subjects or animals, does it comply with JOSS's human participants research policy and/or animal research policy? If the paper contains no such data, please check this item.

Functionality

  • Installation: Does installation proceed as outlined in the documentation?
  • Functionality: Have the functional claims of the software been confirmed?
  • Performance: If there are any performance claims of the software, have they been confirmed? (If there are no claims, please check off this item.)

Documentation

  • A statement of need: Do the authors clearly state what problems the software is designed to solve and who the target audience is?
  • Installation instructions: Is there a clearly-stated list of dependencies? Ideally these should be handled with an automated package management solution.
  • Example usage: Do the authors include examples of how to use the software (ideally to solve real-world analysis problems).
  • Functionality documentation: Is the core functionality of the software documented to a satisfactory level (e.g., API method documentation)?
  • Automated tests: Are there automated tests or manual steps described so that the functionality of the software can be verified?
  • Community guidelines: Are there clear guidelines for third parties wishing to 1) Contribute to the software 2) Report issues or problems with the software 3) Seek support

Software paper

  • Summary: Has a clear description of the high-level functionality and purpose of the software for a diverse, non-specialist audience been provided?
  • A statement of need: Does the paper have a section titled 'Statement of need' that clearly states what problems the software is designed to solve, who the target audience is, and its relation to other work?
  • State of the field: Do the authors describe how this software compares to other commonly-used packages?
  • Quality of writing: Is the paper well written (i.e., it does not require editing for structure, language, or writing quality)?
  • References: Is the list of references complete, and is everything cited appropriately that should be cited (e.g., papers, datasets, software)? Do references in the text use the proper citation syntax?

@csoneson
Member

csoneson commented Jun 1, 2023

👋🏻 @csadorf - could you provide an update on the progress of your review? Thanks!

@csadorf

csadorf commented Jun 11, 2023

The paper is very well written and illustrated, and the references are almost complete. The code is professionally developed, very well documented, tested, and distributed. I think the paper reaches the level of scholarly effort required for a publication in JOSS, however – as acknowledged by the author themself – just barely, because the majority of the code consists of thin wrappers around packages that readily provide clustering algorithms (notably sklearn, scipy, and RAPIDS cuML), whereas the core function of the package is to generate plots (the clustergrams), which is achieved in a handful of functions and very few lines of code. It is the substantial amount of testing and documentation that really provides the net value of this package.

The author claims to embrace scikit-learn’s API style; however, that is unfortunately not entirely achieved in my view. While the main class does provide an estimator-like interface, accepting hyperparameters as constructor arguments and offering a fit() function, other parts of the scikit-learn API guide are ignored, such as avoiding the modification of hyperparameters within the constructor, storing fitted parameters with an underscore suffix, and allowing for easy composability. Other plotting classes within the scikit-learn package typically implement “from_estimator” and “from_predictions” class methods (e.g. ConfusionMatrixDisplay, PredictionErrorDisplay, etc.). In this way it is easy to use the class with any estimator that provides clustering data. The current implementation of the class provides from_data() and from_centers() methods, which achieve similar, but not directly compatible, behavior.
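
For reference, a minimal sketch of the scikit-learn "Display" pattern described above; ConfusionMatrixDisplay and both class methods are real scikit-learn API, while the analogous clustergram interface remains only a suggestion:

from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import ConfusionMatrixDisplay

X, y = make_classification(n_samples=200, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X, y)

# Build the plot directly from a fitted estimator...
ConfusionMatrixDisplay.from_estimator(clf, X, y)
# ...or from precomputed predictions, decoupling plotting from fitting.
ConfusionMatrixDisplay.from_predictions(y, clf.predict(X))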

Since the author is explicitly referencing scikit-learn’s API style, I would suggest citing Buitinck et al. (arXiv:1309.0238 [cs.LG]), as recommended by scikit-learn’s citation guide.

As a bit of a minor point – however, since plotting is the core function of this package, I think it warrants mentioning – I would recommend ensuring that the ticks on the x-axis of all clustergram diagrams are natural numbers, since there are no fractional numbers of clusters.
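
A minimal sketch of one way to enforce this, assuming matplotlib is the plotting backend (MaxNLocator(integer=True) is real matplotlib API; the plotted values are made up):

import matplotlib.pyplot as plt
from matplotlib.ticker import MaxNLocator

fig, ax = plt.subplots()
ax.plot(range(1, 9), [9.1, 7.4, 6.0, 5.2, 4.9, 4.7, 4.6, 4.5])
ax.set_xlabel("Number of clusters (k)")
# Restrict x-axis ticks to whole numbers – no fractional cluster counts.
ax.xaxis.set_major_locator(MaxNLocator(integer=True))
plt.show()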

There are no instructions on how to contribute to the package.

I would recommend that the author adopt some of the recommendations on making the class more modular and independent, consistently follow the scikit-learn API guidelines, and then consider pursuing a contribution to the scikit-learn package.

I recommend the paper for acceptance with minor revisions because, despite the scholarly effort being marginal, I believe it provides benefit to the community and would simplify referencing the software in future publications.

@martinfleis

The author claims to embrace scikit-learn’s API style, however that is unfortunately not entirely achieved in my view.

Thanks for the comment! I'll take a deeper dive to see if the API can follow the style more closely. I suppose the issue will be the difference between sklearn's estimators producing clustering labels for a fixed k while clustergram requires a range of results, but let me see what can be done here.
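
For context, a minimal sketch of the current range-based workflow (parameter names are taken from the clustergram documentation; treat them as illustrative):

import numpy as np
from clustergram import Clustergram

data = np.random.default_rng(0).normal(size=(100, 4))

# Unlike a scikit-learn estimator fixed at a single k, clustergram fits
# k-means once for every k in k_range and keeps the cluster centers.
cgram = Clustergram(k_range=range(1, 9), method="kmeans", n_init=10)
cgram.fit(data)
cgram.plot()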

is explicitly referencing scikit-learn’s API style, I would suggest to cite Buitinck et al.

Will do, thanks!

I would recommend that the ticks on the x-axis on all clustergram diagrams are ensured to be natural numbers

Good point. Will fix.

There are no instructions on how to contribute to the package.

A contributing guide is available in the documentation: https://clustergram.readthedocs.io/en/stable/contributing.html. I can eventually copy its content to CONTRIBUTING.md and store it at the root of the repo, but I didn't think the duplication was necessary here.

Thank you for your comments!

@martinfleis

@editorialbot generate pdf

@editorialbot
Collaborator Author

👉📄 Download article proof 📄 View article proof on GitHub 📄 👈

@editorialbot
Collaborator Author

Done! version is now v0.8.0

@csoneson
Member

@martinfleis I have gone through the paper and it looks good - one reference (Pedregosa et al) is missing a DOI but I don't think it actually has one. Before I recommend acceptance, could you please update the title and author list of the Zenodo archive to match those of the paper, and also specify the license in the Zenodo archive?

@martinfleis

@csoneson Thanks! I have updated Zenodo as requested. The DOI of the latest fixed version is 10.5281/zenodo.8202396.

one reference (Pedregosa et al) is missing a DOI but I don't think it actually has one

That was my understanding as well.

All tasks from the author checklist above (checks etc) have also been done.

@csoneson
Member

@editorialbot set 10.5281/zenodo.8202396 as archive

@editorialbot
Collaborator Author

Done! archive is now 10.5281/zenodo.8202396

@csoneson
Member

Looks good! I'm handing over to the associate EiC for the last steps.

@csoneson
Member

@editorialbot recommend-accept

@editorialbot
Collaborator Author

Attempting dry run of processing paper acceptance...

@editorialbot
Collaborator Author

Reference check summary (note 'MISSING' DOIs are suggestions that need verification):

OK DOIs

- 10/ggj45f is OK
- 10.1016/j.habitatint.2022.102641 is OK
- 10.1038/s41597-022-01640-8 is OK
- 10.1109/MCSE.2007.55 is OK
- 10.1158/1538-7445.AM2022-5038 is OK
- 10.1016/j.dib.2022.108335 is OK
- 10/ghh97z is OK
- 10.1007/BF02915278 is OK
- 10.1016/j.compenvurbsys.2022.101802 is OK
- 10.1115/IPC2022-87145 is OK
- 10.1016/j.jag.2022.102911 is OK
- 10.3390/info11040193 is OK
- 10.5281/zenodo.3960218 is OK
- 10.1007/s12061-022-09490-y is OK
- 10.1016/j.ins.2021.10.004 is OK

MISSING DOIs

- None

INVALID DOIs

- None

@editorialbot
Collaborator Author

⚠️ Error preparing paper acceptance. The generated XML metadata file is invalid.

Element doi: [facet 'pattern'] The value '10/ggj45f' is not accepted by the pattern '10\.[0-9]{4,9}/.{1,200}'.
Element doi: [facet 'pattern'] The value '10/ghh97z' is not accepted by the pattern '10\.[0-9]{4,9}/.{1,200}'.

@csoneson
Member

@martinfleis It seems the editorial bot is not happy with the two short DOIs (although the links in the bibliography go to the right place). There are long forms for both these DOIs - would you mind switching to those?

@csoneson
Member

@openjournals/dev - I'm not sure whether this is an expected error, or if these short DOIs should be accepted ☝🏻. The links in the bibliography are https://doi.org/ghh97z and https://doi.org/ggj45f, which do point to the right places.

@martinfleis

@editorialbot check references

@editorialbot
Collaborator Author

Reference check summary (note 'MISSING' DOIs are suggestions that need verification):

OK DOIs

- 10.1038/s41592-019-0686-2 is OK
- 10.1016/j.habitatint.2022.102641 is OK
- 10.1038/s41597-022-01640-8 is OK
- 10.1109/MCSE.2007.55 is OK
- 10.1158/1538-7445.AM2022-5038 is OK
- 10.1016/j.dib.2022.108335 is OK
- 10.1177/1536867X0200200405 is OK
- 10.1007/BF02915278 is OK
- 10.1016/j.compenvurbsys.2022.101802 is OK
- 10.1115/IPC2022-87145 is OK
- 10.1016/j.jag.2022.102911 is OK
- 10.3390/info11040193 is OK
- 10.5281/zenodo.3960218 is OK
- 10.1007/s12061-022-09490-y is OK
- 10.1016/j.ins.2021.10.004 is OK

MISSING DOIs

- None

INVALID DOIs

- None

@martinfleis

@csoneson should be fixed in the bib file now.

@csoneson
Member

@editorialbot generate pdf

@editorialbot
Collaborator Author

👉📄 Download article proof 📄 View article proof on GitHub 📄 👈

@csoneson
Member

@editorialbot recommend-accept

@editorialbot
Collaborator Author

Attempting dry run of processing paper acceptance...

@editorialbot
Collaborator Author

Reference check summary (note 'MISSING' DOIs are suggestions that need verification):

OK DOIs

- 10.1038/s41592-019-0686-2 is OK
- 10.1016/j.habitatint.2022.102641 is OK
- 10.1038/s41597-022-01640-8 is OK
- 10.1109/MCSE.2007.55 is OK
- 10.1158/1538-7445.AM2022-5038 is OK
- 10.1016/j.dib.2022.108335 is OK
- 10.1177/1536867X0200200405 is OK
- 10.1007/BF02915278 is OK
- 10.1016/j.compenvurbsys.2022.101802 is OK
- 10.1115/IPC2022-87145 is OK
- 10.1016/j.jag.2022.102911 is OK
- 10.3390/info11040193 is OK
- 10.5281/zenodo.3960218 is OK
- 10.1007/s12061-022-09490-y is OK
- 10.1016/j.ins.2021.10.004 is OK

MISSING DOIs

- None

INVALID DOIs

- None

@editorialbot
Collaborator Author

👋 @openjournals/dsais-eics, this paper is ready to be accepted and published.

Check final proof 👉📄 Download article

If the paper PDF and the deposit XML files look good in openjournals/joss-papers#4520, then you can now move forward with accepting the submission by compiling again with the command @editorialbot accept

@editorialbot added the recommend-accept (Papers recommended for acceptance in JOSS) label on Aug 30, 2023
@gkthiruvathukal

@editorialbot accept

@editorialbot
Collaborator Author

Doing it live! Attempting automated processing of paper acceptance...

@editorialbot
Collaborator Author

Ensure proper citation by uploading a plain text CITATION.cff file to the default branch of your repository.

If using GitHub, a Cite this repository menu will appear in the About section, containing both APA and BibTeX formats. When exported to Zotero using a browser plugin, Zotero will automatically create an entry using the information contained in the .cff file.

You can copy the contents for your CITATION.cff file here:

CITATION.cff

cff-version: "1.2.0"
authors:
- family-names: Fleischmann
  given-names: Martin
  orcid: "https://orcid.org/0000-0003-3319-3366"
doi: 10.5281/zenodo.8202396
message: If you use this software, please cite our article in the
  Journal of Open Source Software.
preferred-citation:
  authors:
  - family-names: Fleischmann
    given-names: Martin
    orcid: "https://orcid.org/0000-0003-3319-3366"
  date-published: 2023-09-02
  doi: 10.21105/joss.05240
  issn: 2475-9066
  issue: 89
  journal: Journal of Open Source Software
  publisher:
    name: Open Journals
  start: 5240
  title: "Clustergram: Visualization and diagnostics for cluster
    analysis"
  type: article
  url: "https://joss.theoj.org/papers/10.21105/joss.05240"
  volume: 8
title: "Clustergram: Visualization and diagnostics for cluster analysis"

If the repository is not hosted on GitHub, a .cff file can still be uploaded to set your preferred citation. Users will be able to manually copy and paste the citation.

Find more information on .cff files here and here.

@editorialbot
Collaborator Author

🐘🐘🐘 👉 Toot for this paper 👈 🐘🐘🐘

@editorialbot
Collaborator Author

🚨🚨🚨 THIS IS NOT A DRILL, YOU HAVE JUST ACCEPTED A PAPER INTO JOSS! 🚨🚨🚨

Here's what you must now do:

  1. Check final PDF and Crossref metadata that was deposited 👉 Creating pull request for 10.21105.joss.05240 joss-papers#4526
  2. Wait a couple of minutes, then verify that the paper DOI resolves https://doi.org/10.21105/joss.05240
  3. If everything looks good, then close this review issue.
  4. Party like you just published a paper! 🎉🌈🦄💃👻🤘

Any issues? Notify your editorial technical team...

@editorialbot added the accepted and published (Papers published in JOSS) labels on Sep 2, 2023