[REVIEW]: bayes-toolbox: A Python package for Bayesian statistics #5526

Closed
editorialbot opened this issue Jun 8, 2023 · 82 comments

Labels: accepted, published (Papers published in JOSS), Python, recommend-accept (Papers recommended for acceptance in JOSS), review, TeX, Track: 4 (SBCS) Social, Behavioral, and Cognitive Sciences

@editorialbot

editorialbot commented Jun 8, 2023

Submitting author: @hyosubkim (Hyosub E. Kim)
Repository: https://github.com/hyosubkim/bayes-toolbox
Branch with paper.md (empty if default branch):
Version: 0.1.1
Editor: @samhforbes
Reviewers: @alstat, @ChristopherLucas
Archive: 10.5281/zenodo.7849408

Status badge code:

HTML: <a href="https://joss.theoj.org/papers/1b7b8068a329b547e28d00da0ad790b2"><img src="https://joss.theoj.org/papers/1b7b8068a329b547e28d00da0ad790b2/status.svg"></a>
Markdown: [![status](https://joss.theoj.org/papers/1b7b8068a329b547e28d00da0ad790b2/status.svg)](https://joss.theoj.org/papers/1b7b8068a329b547e28d00da0ad790b2)

Reviewers and authors:

Please avoid lengthy details of difficulties in the review thread. Instead, please create a new issue in the target repository and link to those issues (especially acceptance-blockers) by leaving comments in the review thread below. (For completists: if the target issue tracker is also on GitHub, linking the review thread in the issue or vice versa will create corresponding breadcrumb trails in the link target.)

Reviewer instructions & questions

@alstat & @ChristopherLucas & @BrandonEdwards, your review will be checklist based. Each of you will have a separate checklist that you should update when carrying out your review.
First of all you need to run this command in a separate comment to create the checklist:

@editorialbot generate my checklist

The reviewer guidelines are available here: https://joss.readthedocs.io/en/latest/reviewer_guidelines.html. Any questions/concerns please let @samhforbes know.

Please start on your review when you are able, and be sure to complete it within the next six weeks at the very latest.

Checklists

📝 Checklist for @alstat

📝 Checklist for @ChristopherLucas

📝 Checklist for @BrandonEdwards

editorialbot added the Python, review, TeX, and Track: 4 (SBCS) Social, Behavioral, and Cognitive Sciences labels Jun 8, 2023
@editorialbot

Hello humans, I'm @editorialbot, a robot that can help you with some common editorial tasks.

For a list of things I can do to help you, just type:

@editorialbot commands

For example, to regenerate the paper pdf after making changes in the paper's md or bib files, type:

@editorialbot generate pdf

@editorialbot

Software report:

github.com/AlDanial/cloc v 1.88  T=0.12 s (209.4 files/s, 127733.5 lines/s)
-------------------------------------------------------------------------------
Language                     files          blank        comment           code
-------------------------------------------------------------------------------
Jupyter Notebook                 8              0          12891           1007
Python                           3            287            374            507
YAML                             3              4              4            311
Markdown                        10            114              0            265
TeX                              1              4              0             49
TOML                             1              3              0             38
-------------------------------------------------------------------------------
SUM:                            26            412          13269           2177
-------------------------------------------------------------------------------


gitinspector failed to run statistical information for the repository

@editorialbot

Reference check summary (note 'MISSING' DOIs are suggestions that need verification):

OK DOIs

- 10.31219/osf.io/ksfyr is OK
- 10.1146/annurev-psych-122216-011845 is OK
- 10.18637/jss.v035.i04 is OK
- 10.1016/c2012-0-00477-2 is OK
- 10.1080/00031305.2016.1154108 is OK

MISSING DOIs

- None

INVALID DOIs

- None

@editorialbot

Wordcount for paper.md is 453

@editorialbot

👉📄 Download article proof 📄 View article proof on GitHub 📄 👈

@samhforbes

OK @hyosubkim, @alstat, @ChristopherLucas, @BrandonEdwards this is the review thread for the paper. All of our communications will happen here from now on.

As a reviewer, the first step is to create a checklist for your review by entering

@editorialbot generate my checklist

at the top of a new comment in this thread.

These checklists contain the JOSS requirements. As you go over the submission, please check any items that you feel have been satisfied. The first comment in this thread also contains links to the JOSS reviewer guidelines.

The JOSS review is different from most other journals. Our goal is to work with the authors to help them meet our criteria instead of merely passing judgment on the submission. As such, the reviewers are encouraged to submit issues and pull requests on the software repository. When doing so, please link to this thread (and I can keep an eye on what is happening). Please also feel free to comment and ask questions on this thread. In my experience, it is better to post comments/questions/suggestions as you come across them instead of waiting until you've reviewed the entire package.

We aim for reviews to be completed within about 2-4 weeks. Please let me know if any of you require some more time. We can also use EditorialBot (our bot) to set automatic reminders if you know you'll be away for a period of time.

Please feel free to ping me if you have any questions/concerns.

@ChristopherLucas

ChristopherLucas commented Jun 8, 2023

Review checklist for @ChristopherLucas

Conflict of interest

  • I confirm that I have read the JOSS conflict of interest (COI) policy and that: I have no COIs with reviewing this work or that any perceived COIs have been waived by JOSS for the purpose of this review.

Code of Conduct

  • I confirm that I read and will adhere to the JOSS code of conduct.

General checks

  • Repository: Is the source code for this software available at https://github.com/hyosubkim/bayes-toolbox?
  • License: Does the repository contain a plain-text LICENSE file with the contents of an OSI approved software license?
  • Contribution and authorship: Has the submitting author (@hyosubkim) made major contributions to the software? Does the full list of paper authors seem appropriate and complete?
  • Substantial scholarly effort: Does this submission meet the scope eligibility described in the JOSS guidelines?
  • Data sharing: If the paper contains original data, data are accessible to the reviewers. If the paper contains no original data, please check this item.
  • Reproducibility: If the paper contains original results, results are entirely reproducible by reviewers. If the paper contains no original results, please check this item.
  • Human and animal research: If the paper contains original data research on human subjects or animals, does it comply with JOSS's human participants research policy and/or animal research policy? If the paper contains no such data, please check this item.

Functionality

  • Installation: Does installation proceed as outlined in the documentation?
  • Functionality: Have the functional claims of the software been confirmed?
  • Performance: If there are any performance claims of the software, have they been confirmed? (If there are no claims, please check off this item.)

Documentation

  • A statement of need: Do the authors clearly state what problems the software is designed to solve and who the target audience is?
  • Installation instructions: Is there a clearly-stated list of dependencies? Ideally these should be handled with an automated package management solution.
  • Example usage: Do the authors include examples of how to use the software (ideally to solve real-world analysis problems)?
  • Functionality documentation: Is the core functionality of the software documented to a satisfactory level (e.g., API method documentation)?
  • Automated tests: Are there automated tests or manual steps described so that the functionality of the software can be verified?
  • Community guidelines: Are there clear guidelines for third parties wishing to 1) Contribute to the software 2) Report issues or problems with the software 3) Seek support

Software paper

  • Summary: Has a clear description of the high-level functionality and purpose of the software for a diverse, non-specialist audience been provided?
  • A statement of need: Does the paper have a section titled 'Statement of need' that clearly states what problems the software is designed to solve, who the target audience is, and its relation to other work?
  • State of the field: Do the authors describe how this software compares to other commonly-used packages?
  • Quality of writing: Is the paper well written (i.e., it does not require editing for structure, language, or writing quality)?
  • References: Is the list of references complete, and is everything cited appropriately that should be cited (e.g., papers, datasets, software)? Do references in the text use the proper citation syntax?

@ChristopherLucas

ChristopherLucas commented Jun 8, 2023

@hyosubkim: I've only just started my review, but the Statement of Need, as written, seems like it could do a bit more to explain the package's contribution.

Looking at a few functions, many seem to wrap copied examples from the PyMC documentation (e.g., the function BEST() wraps the model from this notebook). The PyMC notebooks are fantastic pedagogical resources, and the functions defined in bayes-toolbox hard-code many assumptions that perhaps the user ought to have to specify (can a user define a prior for any of the models implemented in bayes-toolbox without editing the source code?).

I also don't agree that being a good Bayesian has anything to do with the replication crisis (which is part of the current justification in the statement of need). In fact, the current justification makes me want to write a comment justifying frequentist statistics, which really doesn't seem like the sort of response a Statement of Need ought to trigger.

The statement of need also mentions the potential pedagogical benefit of the package, but to be honest, at least with the current level of documentation and the degree to which models are hardcoded in bayes-toolbox, I'd probably point students to the PyMC docs that you're wrapping, rather than to this package.

I don't mean to sound adversarial - I'm open to being convinced otherwise - but this is my present assessment of the paper and the evaluation of the "substantial scholarly effort" criteria.

@hyosubkim

@ChristopherLucas : I appreciate your feedback and hope I can address some of your concerns here. First off, I definitely did NOT interpret your comments as being “adversarial”, but I do hope you’ve had a chance to go beyond the BEST function and notebook, and will examine the many other models provided as well as the documentation hosted here, as I think several of your comments are directly addressed there (e.g., choice of priors, value added by wrapping model functions, source material, etc.). I've also tried to address your initial comments below, with the hope that I can resolve any misunderstandings and we can collaboratively advance to the next stage of improving the repo and source code.

Getting to one of the major points of your critique, as I understand it, bayes-toolbox was never meant to supplant/replace what the PyMC developers have already provided. Rather, bayes-toolbox is intended to serve as a helpful adjunct to their materials as well as a useful library in its own right for any Python user wanting to execute Bayesian analogues of the most common frequentist tests with one-liners. Wrapping models inside functions, as bayes-toolbox has done, is meant to reduce friction for new users, and lower the bar to entry for those who want to explore Bayesian statistics with Python. Right now, Python users can choose between at least a couple of packages that allow for one-liners to be called in order to run classical/frequentist tests (e.g., Pingouin, SciPy). However, for Bayesian stats, there has only been Bambi, which is excellent, but it does require more advanced knowledge and familiarity with R-brms syntax. Therefore, the goal of bayes-toolbox is to fill an important gap in the Python/Bayesian community, by providing an easy-to-use module for less experienced users that makes it as simple to run Bayesian stats as it is to run frequentist stats. The PyMC developers also recognized this gap and expressed support for bayes-toolbox, which I assume is why I was invited to present on bayes-toolbox at the most recent PyMC Conference.
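
To make the one-liner claim concrete, here is a minimal sketch of the kind of call being described (the import path, function name, and arguments are illustrative assumptions based on the BEST() function discussed above, not a verbatim copy of the bayes-toolbox API):

import pandas as pd
import bayes_toolbox as bt  # hypothetical import path

# Two groups of observations in long format, as in the Kruschke BEST example.
df = pd.DataFrame({
    "score": [101, 100, 102, 104, 99, 101, 100, 103],
    "group": ["drug"] * 4 + ["placebo"] * 4,
})

# One line stands in for the many lines of explicit PyMC model-building code:
# the wrapper builds the model, samples, and returns the fitted results.
model, idata = bt.BEST(df["score"], df["group"])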

As far as wrapping PyMC examples, the main purpose of the BEST notebook provided in the “examples” directory is to illustrate the bayes-toolbox syntax (and outputs) and to show that what was previously many lines of code to run the BEST test is now reduced to a one-liner with bayes-toolbox. I explicitly state at the top of the notebook that it was adapted from the PyMC developers’ work. Indeed, much more of the original inspiration and source material for bayes-toolbox is from Jordi Warmenhoven’s Python port of the famous Kruschke textbook written for R users, both of whom I also acknowledge in several places.

Re: priors. Yes, the priors are hard-coded and require (for now) changing the source code if users want something different. This is acknowledged in the “Look before you leap” section of the documentation. The choice of priors is based on what was used in the Kruschke textbook. Thus, for new users or those with “prior paralysis”, bayes-toolbox implements the same diffuse, uninformative priors as Kruschke so as not to strongly influence posterior estimates, and to likely satisfy skeptical reviewers. I figured that users who know enough to change their priors in a principled manner will also find it easy to change the model scaffolding provided by bayes-toolbox or be better served by Bambi.
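
As an illustration of what hard-coded priors mean in practice, the two sketches below contrast a wrapper with fixed Kruschke-style diffuse priors against one that exposes the prior scale as an argument (plain PyMC code with illustrative names; not taken from the bayes-toolbox source):

import pymc as pm

def fit_mean_fixed_priors(y):
    """Wrapper style: diffuse, data-scaled priors are fixed inside the function."""
    with pm.Model() as model:
        # Hard-coded Kruschke-style diffuse priors -- changing them
        # requires editing this source code.
        mu = pm.Normal("mu", mu=y.mean(), sigma=y.std() * 100)
        sigma = pm.Uniform("sigma", y.std() / 1000, y.std() * 1000)
        pm.Normal("obs", mu=mu, sigma=sigma, observed=y)
        idata = pm.sample()
    return model, idata

def fit_mean_custom_priors(y, mu_prior_scale=100.0):
    """Alternative design: the prior scale is a parameter the user can override."""
    with pm.Model() as model:
        mu = pm.Normal("mu", mu=y.mean(), sigma=y.std() * mu_prior_scale)
        sigma = pm.Uniform("sigma", y.std() / 1000, y.std() * 1000)
        pm.Normal("obs", mu=mu, sigma=sigma, observed=y)
        idata = pm.sample()
    return model, idata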

Re: your comment about whether “being a good Bayesian has anything to do with the replication crisis”, I certainly did not intend to trigger those using frequentist stats (I’m one of them, as I’ve only recently begun migrating towards more Bayesian approaches!). My point was that binary judgments about statistical “significance” or the lack thereof are problematic (as statisticians of all stripes have noted, including the ASA), and that quantifying our uncertainty using Bayesian inference is one very natural way to address this. However, I will gladly edit the language if the other reviewers also had a similar interpretation as you.

In terms of pedagogy, perhaps the question is not whether you would direct your students to the PyMC developers OR bayes-toolbox, but rather if bayes-toolbox, in conjunction with other PyMC-developed materials, would serve them well should they start learning and utilizing Bayesian stats in their own work, especially if they are learning from the Kruschke text and want to use Python, as opposed to R.

Apologies for the long response, but you gave me much to think about and address. I hope I’ve convinced you that bayes-toolbox is not a simple repackaging of what the PyMC developers have already published and that it is a unique scholarly contribution, as it provides a much-needed, streamlined Pythonic implementation of Bayesian stats, not to mention some models that come from neither the PyMC developers' materials nor the Kruschke textbook (e.g., meta-analyses).

@alstat

alstat commented Jun 10, 2023

Review checklist for @alstat

Conflict of interest

  • I confirm that I have read the JOSS conflict of interest (COI) policy and that: I have no COIs with reviewing this work or that any perceived COIs have been waived by JOSS for the purpose of this review.

Code of Conduct

  • I confirm that I read and will adhere to the JOSS code of conduct.

General checks

  • Repository: Is the source code for this software available at https://github.com/hyosubkim/bayes-toolbox?
  • License: Does the repository contain a plain-text LICENSE file with the contents of an OSI approved software license?
  • Contribution and authorship: Has the submitting author (@hyosubkim) made major contributions to the software? Does the full list of paper authors seem appropriate and complete?
  • Substantial scholarly effort: Does this submission meet the scope eligibility described in the JOSS guidelines?
  • Data sharing: If the paper contains original data, data are accessible to the reviewers. If the paper contains no original data, please check this item.
  • Reproducibility: If the paper contains original results, results are entirely reproducible by reviewers. If the paper contains no original results, please check this item.
  • Human and animal research: If the paper contains original data research on human subjects or animals, does it comply with JOSS's human participants research policy and/or animal research policy? If the paper contains no such data, please check this item.

Functionality

  • Installation: Does installation proceed as outlined in the documentation?
  • Functionality: Have the functional claims of the software been confirmed?
  • Performance: If there are any performance claims of the software, have they been confirmed? (If there are no claims, please check off this item.)

Documentation

  • A statement of need: Do the authors clearly state what problems the software is designed to solve and who the target audience is?
  • Installation instructions: Is there a clearly-stated list of dependencies? Ideally these should be handled with an automated package management solution.
  • Example usage: Do the authors include examples of how to use the software (ideally to solve real-world analysis problems)?
  • Functionality documentation: Is the core functionality of the software documented to a satisfactory level (e.g., API method documentation)?
  • Automated tests: Are there automated tests or manual steps described so that the functionality of the software can be verified?
  • Community guidelines: Are there clear guidelines for third parties wishing to 1) Contribute to the software 2) Report issues or problems with the software 3) Seek support

Software paper

  • Summary: Has a clear description of the high-level functionality and purpose of the software for a diverse, non-specialist audience been provided?
  • A statement of need: Does the paper have a section titled 'Statement of need' that clearly states what problems the software is designed to solve, who the target audience is, and its relation to other work?
  • State of the field: Do the authors describe how this software compares to other commonly-used packages?
  • Quality of writing: Is the paper well written (i.e., it does not require editing for structure, language, or writing quality)?
  • References: Is the list of references complete, and is everything cited appropriately that should be cited (e.g., papers, datasets, software)? Do references in the text use the proper citation syntax?

@BrandonEdwards

BrandonEdwards commented Jun 12, 2023

Review checklist for @BrandonEdwards

Conflict of interest

  • I confirm that I have read the JOSS conflict of interest (COI) policy and that: I have no COIs with reviewing this work or that any perceived COIs have been waived by JOSS for the purpose of this review.

Code of Conduct

  • I confirm that I read and will adhere to the JOSS code of conduct.

General checks

  • Repository: Is the source code for this software available at https://github.com/hyosubkim/bayes-toolbox?
  • License: Does the repository contain a plain-text LICENSE file with the contents of an OSI approved software license?
  • Contribution and authorship: Has the submitting author (@hyosubkim) made major contributions to the software? Does the full list of paper authors seem appropriate and complete?
  • Substantial scholarly effort: Does this submission meet the scope eligibility described in the JOSS guidelines?
  • Data sharing: If the paper contains original data, data are accessible to the reviewers. If the paper contains no original data, please check this item.
  • Reproducibility: If the paper contains original results, results are entirely reproducible by reviewers. If the paper contains no original results, please check this item.
  • Human and animal research: If the paper contains original data research on human subjects or animals, does it comply with JOSS's human participants research policy and/or animal research policy? If the paper contains no such data, please check this item.

Functionality

  • Installation: Does installation proceed as outlined in the documentation?
  • Functionality: Have the functional claims of the software been confirmed?
  • Performance: If there are any performance claims of the software, have they been confirmed? (If there are no claims, please check off this item.)

Documentation

  • A statement of need: Do the authors clearly state what problems the software is designed to solve and who the target audience is?
  • Installation instructions: Is there a clearly-stated list of dependencies? Ideally these should be handled with an automated package management solution.
  • Example usage: Do the authors include examples of how to use the software (ideally to solve real-world analysis problems)?
  • Functionality documentation: Is the core functionality of the software documented to a satisfactory level (e.g., API method documentation)?
  • Automated tests: Are there automated tests or manual steps described so that the functionality of the software can be verified?
  • Community guidelines: Are there clear guidelines for third parties wishing to 1) Contribute to the software 2) Report issues or problems with the software 3) Seek support

Software paper

  • Summary: Has a clear description of the high-level functionality and purpose of the software for a diverse, non-specialist audience been provided?
  • A statement of need: Does the paper have a section titled 'Statement of need' that clearly states what problems the software is designed to solve, who the target audience is, and its relation to other work?
  • State of the field: Do the authors describe how this software compares to other commonly-used packages?
  • Quality of writing: Is the paper well written (i.e., it does not require editing for structure, language, or writing quality)?
  • References: Is the list of references complete, and is everything cited appropriately that should be cited (e.g., papers, datasets, software)? Do references in the text use the proper citation syntax?

@ChristopherLucas

@hyosubkim That response is incredibly helpful, thanks. I recommend putting a bit of that in the paper and perhaps somewhere in the docs, even, as it helps people know what they're looking at when they find the package.

@hyosubkim

Thanks, @ChristopherLucas ! Very glad to hear it. Will definitely incorporate your feedback.

@samhforbes

Hi @alstat, @ChristopherLucas, @BrandonEdwards, I hope all is progressing well. Please let me know if I can be of any help as you go about your reviews!

@samhforbes

Hi @alstat, @ChristopherLucas, @BrandonEdwards, I am just checking in to see how things are progressing. Please let me know if there are any delays you anticipate.

@ChristopherLucas

ChristopherLucas commented Aug 2, 2023

The examples are generally great, though I noticed that this example references a dataframe that is never loaded: https://hyosubkim.github.io/bayes-toolbox/getting-started/ (i.e., df is referenced but never defined).

@ChristopherLucas

ChristopherLucas commented Aug 2, 2023

I'm done with my review. I might be missing it, but I think the paper hasn't been revised yet, per @hyosubkim's thorough response to my question above. I'd also add some proper tests before finalizing this package (this seems incomplete), but other than that, it seems good to go IMO. Great work!

@hyosubkim

The examples are generally great, though I noticed that this example references a dataframe that is never loaded: https://hyosubkim.github.io/bayes-toolbox/getting-started/ (i.e., df is referenced but never defined).

@ChristopherLucas - Thanks for pointing this out. I grabbed some lines from the hierarchical regression example and added them to the "Example Syntax" section here to make it clear how to implement bayes-toolbox.
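
For context, the fix amounts to adding a short setup cell so that the dataframe exists before any model function references it; a minimal, hypothetical version (the file name and contents are placeholders, not the actual example data):

import pandas as pd

# Define `df` before any model call references it.
df = pd.read_csv("example_data.csv")  # placeholder file name
df.head()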

@hyosubkim

hyosubkim commented Aug 2, 2023

I'm done with my review. I might be missing it, but I think the paper hasn't been revised yet, per @hyosubkim's thorough response to my question above. I'd also add some proper tests before finalizing this package (this seems incomplete), but other than that, it seems good to go IMO. Great work!

Thanks, @ChristopherLucas ! Originally, I was going to wait for all three reviews to come in before editing, but I've now gone ahead and added many of your suggested edits into the online documentation. You'll see that the "Statement of Need" and "Education" portions, in particular, have more details and some verbatim quotes from my original response to you. I will now start making similar changes to the paper itself, assuming @samhforbes is in agreement with that plan.

Edited:
I've now edited the paper.md file on GitHub. To summarize, I've balanced my statement regarding the replication crisis by explicitly acknowledging some recommended frequentist approaches to improving the state of affairs. I've also added some statements to boost the motivation behind bayes-toolbox and to better contextualize it within the scientific/Bayesian Python ecosystem, specifically analogizing it to statsmodels, pingouin, etc., and drawing contrasts with Bambi.

A second update:
@ChristopherLucas - Thanks for the comment re: tests. I've added information regarding validating the functionality of bayes-toolbox in a new section of the documentation (see "Functionality and Testing" here). Basically, the models have all been tested against the data and known results from the Kruschke textbook and another Python port of the Kruschke text. Importantly, formal testing of intermediate computations occurs throughout in the form of inline assert statements. The size of the very small test suite you linked to is due to the relative lack of "pure" functions within bayes-toolbox to actually test with pytest (i.e., most of the functions are statistical models). After sifting through the source code again, I believe there is good coverage, but I'm certainly open to suggestions.
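
For readers unfamiliar with the pattern being described, an inline assertion on an intermediate computation might look like this sketch (illustrative only; not copied from the bayes-toolbox source):

import numpy as np

def standardize(x):
    """Z-score data before it feeds a model, and verify the result."""
    z = (np.asarray(x) - np.mean(x)) / np.std(x)
    # Inline checks: fail fast if the transformation misbehaved.
    assert np.isclose(np.mean(z), 0.0), "standardized data should have mean 0"
    assert np.isclose(np.std(z), 1.0), "standardized data should have SD 1"
    return z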

@hyosubkim

@samhforbes - I've attempted to address all of @ChristopherLucas 's comments. In summary, I've fixed the example he pointed out, revised the paper.md file on GitHub and online documentation to include the information requested (primarily from this reply #5526 (comment)), and clarified my implementation of automatic tests and validating functionality. I'll now wait to hear from you on how to proceed. Thanks!

@samhforbes

Thanks @ChristopherLucas please look and see if you are happy with these changes, and if so, update your reviewer checklist accordingly. Thanks for your comments!

@ChristopherLucas

I appreciate these revisions, @hyosubkim. Even in this revision, I still think the replication crisis discussion is distracting and unnecessary, and that it's difficult to develop your point in a satisfying way given the necessary brevity of the article. That said, I'm happy to sign off on this, especially if @samhforbes and the other reviewers are fine with this.

Great work, congrats on the project.

@hyosubkim

hyosubkim commented Aug 3, 2023

Really appreciate your feedback, @ChristopherLucas. I agree with your point about the discussion around the replication crisis. It does seem out of scope for such a short paper, and I have now removed that entire paragraph (see here). I think it reads better now and focuses more on the positives of the package. Thanks for helping to improve both the software and paper.

@hyosubkim

@samhforbes - Looks like @ChristopherLucas has signed off on his review. Not sure where @alstat and @BrandonEdwards are in the review process, but please let me know if there's anything else for me to do at this point or if I can further facilitate the review process.

@hyosubkim

@editorialbot generate pdf

@editorialbot

👉📄 Download article proof 📄 View article proof on GitHub 📄 👈

@hyosubkim

Hi @hyosubkim our old friend editorialbot is pulling out 6 references from your .bib file. Do you mean to include these in the paper, or are these left over?

Hi @samhforbes - those were holdovers, which I've now removed. editorialbot still seems happy. :)

@hyosubkim

@editorialbot generate pdf

@editorialbot

👉📄 Download article proof 📄 View article proof on GitHub 📄 👈

@hyosubkim

@editorialbot generate pdf

@editorialbot

👉📄 Download article proof 📄 View article proof on GitHub 📄 👈

@samhforbes

@editorialbot check references

@editorialbot

Reference check summary (note 'MISSING' DOIs are suggestions that need verification):

OK DOIs

- 10.18637/jss.v035.i04 is OK
- 10.1016/c2012-0-00477-2 is OK
- 10.1371/journal.pcbi.1005510 is OK

MISSING DOIs

- None

INVALID DOIs

- None

@samhforbes

@editorialbot recommend-accept

@editorialbot

Attempting dry run of processing paper acceptance...

@editorialbot

Reference check summary (note 'MISSING' DOIs are suggestions that need verification):

OK DOIs

- 10.18637/jss.v035.i04 is OK
- 10.1016/c2012-0-00477-2 is OK
- 10.1371/journal.pcbi.1005510 is OK

MISSING DOIs

- None

INVALID DOIs

- None

@editorialbot

👋 @openjournals/sbcs-eics, this paper is ready to be accepted and published.

Check final proof 👉📄 Download article

If the paper PDF and the deposit XML files look good in openjournals/joss-papers#4712, then you can now move forward with accepting the submission by compiling again with the command @editorialbot accept

editorialbot added the recommend-accept (Papers recommended for acceptance in JOSS) label Oct 20, 2023
@samhforbes

@hyosubkim I'm handing over to the EiC now, well done on a great package!

@hyosubkim

Thanks @samhforbes ! I appreciate all of your help on this. And thanks again @alstat and @ChristopherLucas for your helpful reviews!

@oliviaguest

@editorialbot recommend-accept

@editorialbot

Attempting dry run of processing paper acceptance...

@editorialbot

Reference check summary (note 'MISSING' DOIs are suggestions that need verification):

OK DOIs

- 10.18637/jss.v035.i04 is OK
- 10.1016/c2012-0-00477-2 is OK
- 10.1371/journal.pcbi.1005510 is OK

MISSING DOIs

- None

INVALID DOIs

- None

@editorialbot

👋 @openjournals/sbcs-eics, this paper is ready to be accepted and published.

Check final proof 👉📄 Download article

If the paper PDF and the deposit XML files look good in openjournals/joss-papers#4730, then you can now move forward with accepting the submission by compiling again with the command @editorialbot accept

@oliviaguest

@editorialbot accept

@editorialbot

Doing it live! Attempting automated processing of paper acceptance...

@editorialbot

Ensure proper citation by uploading a plain text CITATION.cff file to the default branch of your repository.

If using GitHub, a Cite this repository menu will appear in the About section, containing both APA and BibTeX formats. When exported to Zotero using a browser plugin, Zotero will automatically create an entry using the information contained in the .cff file.

You can copy the contents for your CITATION.cff file here:

CITATION.cff

cff-version: "1.2.0"
authors:
- family-names: Kim
  given-names: Hyosub E.
  orcid: "https://orcid.org/0000-0003-0109-593X"
doi: 10.5281/zenodo.7849408
message: If you use this software, please cite our article in the
  Journal of Open Source Software.
preferred-citation:
  authors:
  - family-names: Kim
    given-names: Hyosub E.
    orcid: "https://orcid.org/0000-0003-0109-593X"
  date-published: 2023-10-27
  doi: 10.21105/joss.05526
  issn: 2475-9066
  issue: 90
  journal: Journal of Open Source Software
  publisher:
    name: Open Journals
  start: 5526
  title: "bayes-toolbox: A Python package for Bayesian statistics"
  type: article
  url: "https://joss.theoj.org/papers/10.21105/joss.05526"
  volume: 8
title: "bayes-toolbox: A Python package for Bayesian statistics"

If the repository is not hosted on GitHub, a .cff file can still be uploaded to set your preferred citation. Users will be able to manually copy and paste the citation.

Find more information on .cff files here and here.

@editorialbot

🐘🐘🐘 👉 Toot for this paper 👈 🐘🐘🐘

@editorialbot

🚨🚨🚨 THIS IS NOT A DRILL, YOU HAVE JUST ACCEPTED A PAPER INTO JOSS! 🚨🚨🚨

Here's what you must now do:

  1. Check final PDF and Crossref metadata that was deposited 👉 Creating pull request for 10.21105.joss.05526 joss-papers#4731
  2. Wait five minutes, then verify that the paper DOI resolves https://doi.org/10.21105/joss.05526
  3. If everything looks good, then close this review issue.
  4. Party like you just published a paper! 🎉🌈🦄💃👻🤘

Any issues? Notify your editorial technical team...

editorialbot added the accepted and published (Papers published in JOSS) labels Oct 27, 2023
@oliviaguest

Huge thanks to the reviewers @alstat, @ChristopherLucas and editor @samhforbes! ✨ JOSS appreciates your work and effort. ✨ Also, big congratulations to @hyosubkim! 🥳 🍾

@editorialbot

🎉🎉🎉 Congratulations on your paper acceptance! 🎉🎉🎉

If you would like to include a link to your paper from your README, use the following code snippets:

Markdown:
[![DOI](https://joss.theoj.org/papers/10.21105/joss.05526/status.svg)](https://doi.org/10.21105/joss.05526)

HTML:
<a style="border-width:0" href="https://doi.org/10.21105/joss.05526">
  <img src="https://joss.theoj.org/papers/10.21105/joss.05526/status.svg" alt="DOI badge" >
</a>

reStructuredText:
.. image:: https://joss.theoj.org/papers/10.21105/joss.05526/status.svg
   :target: https://doi.org/10.21105/joss.05526


We need your help!

The Journal of Open Source Software is a community-run journal and relies upon volunteer effort. If you'd like to support us please consider doing either one (or both) of the following:
