
[REVIEW]: einprot: flexible, easy-to-use, reproducible workflows for statistical analysis of quantitative proteomics data #5750

Closed
editorialbot opened this issue Aug 9, 2023 · 53 comments
Labels
accepted published Papers published in JOSS R recommend-accept Papers recommended for acceptance in JOSS. review TeX Track: 2 (BCM) Biomedical Engineering, Biosciences, Chemistry, and Materials

Comments

@editorialbot
Collaborator

editorialbot commented Aug 9, 2023

Submitting author: @csoneson (Charlotte Soneson)
Repository: https://github.com/fmicompbio/einprot
Branch with paper.md (empty if default branch): joss
Version: v0.7.7
Editor: @fboehm
Reviewers: @AnthonyOfSeattle, @ByrumLab
Archive: 10.5281/zenodo.8298657

Status


Status badge code:

HTML: <a href="https://joss.theoj.org/papers/200654a73547392769c222793680a83a"><img src="https://joss.theoj.org/papers/200654a73547392769c222793680a83a/status.svg"></a>
Markdown: [![status](https://joss.theoj.org/papers/200654a73547392769c222793680a83a/status.svg)](https://joss.theoj.org/papers/200654a73547392769c222793680a83a)

Reviewers and authors:

Please avoid lengthy details of difficulties in the review thread. Instead, please create a new issue in the target repository and link to those issues (especially acceptance-blockers) by leaving comments in the review thread below. (For completists: if the target issue tracker is also on GitHub, linking the review thread in the issue or vice versa will create corresponding breadcrumb trails in the link target.)

Reviewer instructions & questions

@AnthonyOfSeattle & @ByrumLab, your review will be checklist based. Each of you will have a separate checklist that you should update when carrying out your review.
First of all you need to run this command in a separate comment to create the checklist:

@editorialbot generate my checklist

The reviewer guidelines are available here: https://joss.readthedocs.io/en/latest/reviewer_guidelines.html. Any questions/concerns please let @fboehm know.

Please start on your review when you are able, and be sure to complete your review in the next six weeks, at the very latest.

Checklists

📝 Checklist for @ByrumLab

📝 Checklist for @AnthonyOfSeattle

@editorialbot editorialbot added R review TeX Track: 2 (BCM) Biomedical Engineering, Biosciences, Chemistry, and Materials labels Aug 9, 2023
@editorialbot
Collaborator Author

Hello humans, I'm @editorialbot, a robot that can help you with some common editorial tasks.

For a list of things I can do to help you, just type:

@editorialbot commands

For example, to regenerate the paper pdf after making changes in the paper's md or bib files, type:

@editorialbot generate pdf

@editorialbot
Collaborator Author

Software report:

github.com/AlDanial/cloc v 1.88  T=0.13 s (868.7 files/s, 278485.0 lines/s)
-------------------------------------------------------------------------------
Language                     files          blank        comment           code
-------------------------------------------------------------------------------
R                               94           1573           5136          21405
Rmd                              5            666           1673           2448
XML                              2              0              0           1182
TeX                              3             60              0            764
Markdown                         4            157              0            346
YAML                             2             23              5            127
SQL                              1              0              0             19
-------------------------------------------------------------------------------
SUM:                           111           2479           6814          26291
-------------------------------------------------------------------------------


gitinspector failed to run statistical information for the repository

@editorialbot
Collaborator Author

Wordcount for paper.md is 1402

@editorialbot
Collaborator Author

Reference check summary (note 'MISSING' DOIs are suggestions that need verification):

OK DOIs

- 10.1038/nmeth.4256 is OK
- 10.1101/2023.06.26.546625 is OK
- 10.1093/bioadv/vbab041 is OK
- 10.1093/bioinformatics/btaa620 is OK
- 10.1021/acs.jproteome.0c00398 is OK
- 10.1021/acs.jproteome.9b00496 is OK
- 10.1371/journal.pcbi.1009148 is OK
- 10.1093/bioinformatics/btu305 is OK
- 10.1074/mcp.M113.031591 is OK
- 10.1074/mcp.M114.041012 is OK
- 10.1016/j.cell.2015.09.053 is OK
- 10.1083/jcb.200911091 is OK
- 10.1038/nprot.2009.36 is OK
- 10.1074/mcp.RA120.002105 is OK
- 10.3390/proteomes9010015 is OK
- 10.1038/nbt.1511 is OK
- 10.1186/s12864-022-09058-7 is OK
- 10.1101/416511 is OK
- 10.1038/nmeth.3901 is OK
- 10.1016/j.jprot.2020.103669 is OK
- 10.12688/f1000research.14966.1 is OK
- 10.1038/75556 is OK
- 10.1093/genetics/iyad031 is OK
- 10.1093/nar/gky973 is OK
- 10.1093/genetics/iyab222 is OK
- 10.1093/nar/gkn1005 is OK
- 10.1038/nmeth.3252 is OK
- 10.1038/s41592-019-0654-x is OK
- 10.1016/j.molcel.2023.06.001 is OK
- 10.1093/nar/gkac610 is OK
- 10.1021/pr300273g is OK
- 10.1038/nprot.2017.147 is OK
- 10.1021/acs.jproteome.2c00441 is OK
- 10.1016/j.cell.2021.01.004 is OK
- 10.1021/acs.jproteome.2c00390 is OK
- 10.1038/s41586-018-0153-8 is OK
- 10.1371/journal.pcbi.1010752 is OK
- 10.1021/acs.jproteome.2c00812 is OK

MISSING DOIs

- None

INVALID DOIs

- None

@editorialbot
Collaborator Author

👉📄 Download article proof 📄 View article proof on GitHub 📄 👈

@ByrumLab

ByrumLab commented Aug 9, 2023

Review checklist for @ByrumLab

Conflict of interest

  • I confirm that I have read the JOSS conflict of interest (COI) policy and that: I have no COIs with reviewing this work or that any perceived COIs have been waived by JOSS for the purpose of this review.

Code of Conduct

General checks

  • Repository: Is the source code for this software available at https://github.com/fmicompbio/einprot?
  • License: Does the repository contain a plain-text LICENSE file with the contents of an OSI approved software license?
  • Contribution and authorship: Has the submitting author (@csoneson) made major contributions to the software? Does the full list of paper authors seem appropriate and complete?
  • Substantial scholarly effort: Does this submission meet the scope eligibility described in the JOSS guidelines?
  • Data sharing: If the paper contains original data, data are accessible to the reviewers. If the paper contains no original data, please check this item.
  • Reproducibility: If the paper contains original results, results are entirely reproducible by reviewers. If the paper contains no original results, please check this item.
  • Human and animal research: If the paper contains original data or research on human subjects or animals, does it comply with JOSS's human participants research policy and/or animal research policy? If the paper contains no such data, please check this item.

Functionality

  • Installation: Does installation proceed as outlined in the documentation?
  • Functionality: Have the functional claims of the software been confirmed?
  • Performance: If there are any performance claims of the software, have they been confirmed? (If there are no claims, please check off this item.)

Documentation

  • A statement of need: Do the authors clearly state what problems the software is designed to solve and who the target audience is?
  • Installation instructions: Is there a clearly-stated list of dependencies? Ideally these should be handled with an automated package management solution.
  • Example usage: Do the authors include examples of how to use the software (ideally to solve real-world analysis problems).
  • Functionality documentation: Is the core functionality of the software documented to a satisfactory level (e.g., API method documentation)?
  • Automated tests: Are there automated tests or manual steps described so that the functionality of the software can be verified?
  • Community guidelines: Are there clear guidelines for third parties wishing to 1) Contribute to the software 2) Report issues or problems with the software 3) Seek support

Software paper

  • Summary: Has a clear description of the high-level functionality and purpose of the software for a diverse, non-specialist audience been provided?
  • A statement of need: Does the paper have a section titled 'Statement of need' that clearly states what problems the software is designed to solve, who the target audience is, and its relation to other work?
  • State of the field: Do the authors describe how this software compares to other commonly-used packages?
  • Quality of writing: Is the paper well written (i.e., it does not require editing for structure, language, or writing quality)?
  • References: Is the list of references complete, and is everything cited appropriately that should be cited (e.g., papers, datasets, software)? Do references in the text use the proper citation syntax?

@AnthonyOfSeattle

AnthonyOfSeattle commented Aug 14, 2023

Review checklist for @AnthonyOfSeattle

Conflict of interest

  • I confirm that I have read the JOSS conflict of interest (COI) policy and that: I have no COIs with reviewing this work or that any perceived COIs have been waived by JOSS for the purpose of this review.

Code of Conduct

General checks

  • Repository: Is the source code for this software available at https://github.com/fmicompbio/einprot?
  • License: Does the repository contain a plain-text LICENSE file with the contents of an OSI approved software license?
  • Contribution and authorship: Has the submitting author (@csoneson) made major contributions to the software? Does the full list of paper authors seem appropriate and complete?
  • Substantial scholarly effort: Does this submission meet the scope eligibility described in the JOSS guidelines?
  • Data sharing: If the paper contains original data, data are accessible to the reviewers. If the paper contains no original data, please check this item.
  • Reproducibility: If the paper contains original results, results are entirely reproducible by reviewers. If the paper contains no original results, please check this item.
  • Human and animal research: If the paper contains original data or research on human subjects or animals, does it comply with JOSS's human participants research policy and/or animal research policy? If the paper contains no such data, please check this item.

Functionality

  • Installation: Does installation proceed as outlined in the documentation?
  • Functionality: Have the functional claims of the software been confirmed?
  • Performance: If there are any performance claims of the software, have they been confirmed? (If there are no claims, please check off this item.)

Documentation

  • A statement of need: Do the authors clearly state what problems the software is designed to solve and who the target audience is?
  • Installation instructions: Is there a clearly-stated list of dependencies? Ideally these should be handled with an automated package management solution.
  • Example usage: Do the authors include examples of how to use the software (ideally to solve real-world analysis problems).
  • Functionality documentation: Is the core functionality of the software documented to a satisfactory level (e.g., API method documentation)?
  • Automated tests: Are there automated tests or manual steps described so that the functionality of the software can be verified?
  • Community guidelines: Are there clear guidelines for third parties wishing to 1) Contribute to the software 2) Report issues or problems with the software 3) Seek support

Software paper

  • Summary: Has a clear description of the high-level functionality and purpose of the software for a diverse, non-specialist audience been provided?
  • A statement of need: Does the paper have a section titled 'Statement of need' that clearly states what problems the software is designed to solve, who the target audience is, and its relation to other work?
  • State of the field: Do the authors describe how this software compares to other commonly-used packages?
  • Quality of writing: Is the paper well written (i.e., it does not require editing for structure, language, or writing quality)?
  • References: Is the list of references complete, and is everything cited appropriately that should be cited (e.g., papers, datasets, software)? Do references in the text use the proper citation syntax?

@ByrumLab

Hi @csoneson, thank you for the nice proteomics software package. The package is easy to install, and analysis pipelines are easy to customize, which will be valuable as new mass spec methods arise. The generation of full reports for each project enhances reproducibility for data sharing. I have a couple of minor comments that may help new users.

  1. I added a minor issue in the GitHub repo to add the other workflow examples link to the README: Readme examples of workflows (fmicompbio/einprot#12)

  2. A minor question, why is the object called a SingleCellExperiment? Single cell makes me think of single-cell RNAseq or the development of single-cell proteomics technologies.

@csoneson
Member

Thanks @ByrumLab!

  1. Thanks for this suggestion - I agree, and I have added the link to the README.
  2. The SingleCellExperiment class is a standard data container from the Bioconductor project - it was designed as an extension to the SummarizedExperiment container, with single-cell data in mind. However, despite the name, it is suitable for storing any rectangular data set (and the advantage compared to the SummarizedExperiment object is that in addition to abundance values and annotations for samples and features, it can also hold low-dimensional representations, in our case obtained by PCA). Rather than defining a new data structure, we decided to make use of this well-established container, since it has everything we need and in addition makes it possible to directly apply many functions from a variety of Bioconductor packages.
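For readers unfamiliar with the container, a minimal sketch of the idea (all object names and dimensions here are hypothetical, not einprot internals): a bulk proteomics abundance matrix fits into a SingleCellExperiment despite the name, and low-dimensional representations such as PCA coordinates travel with the data.

```r
# Sketch only: storing a (hypothetical) proteomics abundance matrix
# in a SingleCellExperiment, with a PCA representation attached.
library(SingleCellExperiment)

abund <- matrix(rnorm(60), nrow = 20, ncol = 3,
                dimnames = list(paste0("protein", 1:20),
                                paste0("sample", 1:3)))
sce <- SingleCellExperiment(
    assays  = list(log2_abundance = abund),
    colData = DataFrame(group = c("ctrl", "ctrl", "treated"))
)
# Low-dimensional representations live alongside the abundances
reducedDim(sce, "PCA") <- prcomp(t(abund))$x
sce
```

Because the object is a standard Bioconductor container, downstream functions from many Bioconductor packages can be applied to it directly.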

@ByrumLab

Thanks @csoneson for the explanation. I assumed this was the case, but I appreciate the confirmation.

@fboehm I have no further comments and I have completed the checklist.

@csoneson
Member

csoneson commented Aug 18, 2023

For completeness, I've added an FAQ section to the vignette, and included a note about the motivation for the use of the SingleCellExperiment - thanks @ByrumLab for raising this point!

@AnthonyOfSeattle

AnthonyOfSeattle commented Aug 25, 2023

Hi @csoneson, congrats on this well-built and complete R package. It is clear that this represents a significant effort born out of years of performing fundamental analyses in quantitative proteomics. I believe it will see use both in traditional proteomics labs, who may be looking to automate their initial analyses of data, as well as in labs taking their first steps into proteomics, who may benefit from the structured approach.

Looking into the code, I can see a GitHub action checking both PRs and merges, which is great. To confirm all dependencies are listed, I installed the package into a clean Docker image and ran tests. Everything worked just as expected. It was also good to see multiple types of user input checks at the start of functions, which appear to give users feedback about types and errors. The package can read the output from multiple types of analysis software into the same Bioconductor object, and reports appear logically laid out and uniform in structure. Overall, I like the consistency of the design of the API. For most of the questions I had about the package, I was able to find the answers in the vignette. It seems like there are a lot of customizations that individuals can specify, and you guide them through their options quite thoroughly.

I was particularly interested in diving into the statistical portion of the code. From what I can see, the code has out-of-the-box support for single-factor designs with a batch term. If a user has a multifactor design, say {male, female} × {control, treated}, is the proper way to handle this data to have a single group column with values chosen from {male_control, male_treated, female_control, female_treated}? I see the following quote in the section "The sample annotation table":

This data.frame must have at least two columns, named sample and group, but any additional columns are also supported and will be included in the final SingleCellExperiment object.

Are additional columns included in the statistical tests as well? Could a quick answer to that question be stated in the vignette, maybe with an explicit statement about the best way to handle multi-factor designs like this? I expect that some users will benefit from the protocol being explicitly laid out. In addition, I see two main packages for doing statistical tests, both of which seem like good additions. However, I don't see anything in the vignette about choosing between the options. I believe many individuals will come to this package as their first look into proteomics, and it would be good from a completeness standpoint to have 1-2 sentences to guide them to a choice. Otherwise they will use the default or guess.

Beyond a couple additions to the vignette, I have no further requests.

Addendum: After another quick read-through, I noticed the parameter singleFit again. It seems like this would be most appropriate to set to TRUE when there is reason to believe that there is a true expression difference between combined groups. Do you agree with that? I believe there should be guidance in the vignette on when to set that parameter, and not just a technical definition of what it does. I will also make a ticket to standardize the defaults for that parameter across functions.

@csoneson
Member

@AnthonyOfSeattle Thanks for your thorough review and constructive comments! I have made additions to the vignette to address them, summarized in this PR. Below are responses to your comments:

If a user has a multifactor design, say {male, female} × {control, treated}, is the proper way to handle this data to have a single group column with values chosen from {male_control, male_treated, female_control, female_treated}?

Yes, that would be one way (especially if the treatment effect may be gender-specific). Another option could be to consider gender as the batch effect (if treatment is the main factor of interest, and no interaction is expected).
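As a concrete illustration of the first option, a sample annotation table along these lines could be set up as follows (sample names are hypothetical; the required sample and group column names are those stated in the vignette):

```r
# Sketch: encoding a 2x2 {sex} x {treatment} design as a single
# combined group column in the sample annotation data.frame.
sampleAnnot <- data.frame(
    sample = paste0("s", 1:8),
    group  = rep(c("male_control", "male_treated",
                   "female_control", "female_treated"), each = 2),
    # extra columns are carried into the final SingleCellExperiment
    # but are not used by the statistical tests
    sex    = rep(c("male", "female"), each = 4)
)
```

With the combined labels, any pairwise contrast between the four cells of the design (e.g. male_treated vs male_control) can then be requested directly.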

I see the following quote in the section The sample annotation table:
This data.frame must have at least two columns, named sample and group, but any additional columns are also supported and will be included in the final SingleCellExperiment object.
Are additional columns being included in statistical test as well? Could a quick answer to that question be stated in the vignette, maybe with an explicit statement about the best way to handle multi-factor designs like this?

No, additional columns are not included in the statistical test, but they are propagated to the final SCE. I have added a sentence about this in the vignette, as well as some tips for multi-factor and more complex designs. For the current version of einprot, we made a conscious choice to make the setup of the statistical testing easy (specifically, to not require the explicit specification of a design matrix and contrasts), while attempting to still cover a reasonable range of use cases.

I see two main packages for doing statistical tests, both of which seem like good additions. However, I don't see anything in the vignette about choosing between the options.

I added a section about this in the FAQ part at the end of the vignette.

Addendum: After another quick read-through, I noticed the parameter singleFit again. It seems like this would be most appropriate to set to TRUE when there is reason to believe that there is a true expression difference between combined groups. Do you agree with that? I believe there should be guidance in the vignette on when to set that parameter, and not just a technical definition of what it does.

The main advantage of fitting a single model (singleFit = TRUE) is that a larger number of samples are used to estimate parameters, which usually gives more precise estimates. For this reason, this is typically the recommended approach with limma and other inference pipelines. However, it also involves making assumptions about similarities of variances between groups, and if there are large differences, either fitting separate models or using a weighting approach (which is possible within limma) may be more suitable. I have added a discussion about this in the vignette, together with some links to posts where this is discussed (as it's a general question for any data analyzed with limma).
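The trade-off can be illustrated with plain limma usage (this is generic limma code, not einprot internals, and the simulated data are purely illustrative): fitting one model across all groups pools information for the variance estimates, which is what singleFit = TRUE corresponds to conceptually.

```r
# Sketch: a single limma model over all samples (the singleFit = TRUE
# analogue). All samples contribute to the variance estimation, under
# the assumption that group variances are comparable.
library(limma)

set.seed(1)
group <- factor(rep(c("control", "treated"), each = 3))
y <- matrix(rnorm(50 * 6), nrow = 50,
            dimnames = list(paste0("protein", 1:50), paste0("s", 1:6)))

design <- model.matrix(~ 0 + group)
colnames(design) <- levels(group)
fit  <- lmFit(y, design)
cm   <- makeContrasts(treated - control, levels = design)
fit2 <- eBayes(contrasts.fit(fit, cm))
head(topTable(fit2))
```

When group variances differ markedly, alternatives within limma include per-sample weights (arrayWeights()) or fitting each comparison separately on the relevant subset of samples.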

I will also make a ticket to standardize the defaults for that param across functions.

Thank you - I have standardized the defaults across the different functions (it's now TRUE everywhere).

@AnthonyOfSeattle

AnthonyOfSeattle commented Aug 25, 2023

Hi @csoneson, thank you for your thorough answers to my questions. I have read through your PR and think your changes completely address my concerns.

@fboehm, I have completed my checklist and have no further comments for the authors.

@fboehm

fboehm commented Aug 28, 2023

Thank you so much, @AnthonyOfSeattle and @ByrumLab, for the thorough reviews!

@csoneson - the reviewers have recommended the submission for publication. There are a few more steps before we finalize the publication.

@fboehm

fboehm commented Aug 28, 2023

@editorialbot commands

@editorialbot
Collaborator Author

Hello @fboehm, here are the things you can ask me to do:


# List all available commands
@editorialbot commands

# Add to this issue's reviewers list
@editorialbot add @username as reviewer

# Remove from this issue's reviewers list
@editorialbot remove @username from reviewers

# Get a list of all editors' GitHub handles
@editorialbot list editors

# Assign a user as the editor of this submission
@editorialbot assign @username as editor

# Remove the editor assigned to this submission
@editorialbot remove editor

# Remind an author, a reviewer or the editor to return to a review after a 
# certain period of time (supported units days and weeks)
@editorialbot remind @reviewer in 2 weeks

# Check the references of the paper for missing DOIs
@editorialbot check references

# Perform checks on the repository
@editorialbot check repository

# Adds a checklist for the reviewer using this command
@editorialbot generate my checklist

# Set a value for version
@editorialbot set v1.0.0 as version

# Set a value for branch
@editorialbot set joss-paper as branch

# Set a value for repository
@editorialbot set https://github.com/organization/repo as repository

# Set a value for the archive DOI
@editorialbot set 10.5281/zenodo.6861996 as archive

# Mention the EiCs for the correct track
@editorialbot ping track-eic

# Generates the pdf paper
@editorialbot generate pdf

# Recommends the submission for acceptance
@editorialbot recommend-accept

# Generates a LaTeX preprint file
@editorialbot generate preprint

# Flag submission with questionable scope
@editorialbot query scope

# Get a link to the complete list of reviewers
@editorialbot list reviewers

# Creates a post-review checklist with editor and authors tasks
@editorialbot create post-review checklist

# Open the review issue
@editorialbot start review

@fboehm

fboehm commented Aug 28, 2023

Post-Review Checklist for Editor and Authors

Additional Author Tasks After Review is Complete

  • Double check authors and affiliations (including ORCIDs)
  • Make a release of the software with the latest changes from the review and post the version number here. This is the version that will be used in the JOSS paper.
  • Archive the release on Zenodo/figshare/etc and post the DOI here.
  • Make sure that the title and author list (including ORCIDs) in the archive match those in the JOSS paper.
  • Make sure that the license listed for the archive is the same as the software license.

Editor Tasks Prior to Acceptance

  • Read the text of the paper and offer comments/corrections (as either a list or a PR)
  • Check the references in the paper for corrections (e.g. capitalization)
  • Check that the archive title, author list, version tag, and the license are correct
  • Set archive DOI with @editorialbot set <DOI here> as archive
  • Set version with @editorialbot set <version here> as version
  • Double check rendering of paper with @editorialbot generate pdf
  • Specifically check the references with @editorialbot check references and ask author(s) to update as needed
  • Recommend acceptance with @editorialbot recommend-accept

@fboehm

fboehm commented Aug 28, 2023

@csoneson - I plan to read the manuscript and offer suggestions by the end of tomorrow. I'll comment in this thread once I've completed the proofreading.

@csoneson
Member

Thanks @fboehm - in the meantime I created the release and archived it on Zenodo:

  • Version tag: v0.7.7 (release)
  • Zenodo DOI: 10.5281/zenodo.8298657

@fboehm

fboehm commented Sep 1, 2023

Thank you, @csoneson ! Something came up for me, so it will be a few days before I can proofread the paper. I apologize for the delay. I'll comment here once I've proofread it.

@csoneson
Member

csoneson commented Sep 1, 2023

Thanks @fboehm, no worries!

@fboehm

fboehm commented Sep 8, 2023

@editorialbot generate pdf

@editorialbot
Collaborator Author

👉📄 Download article proof 📄 View article proof on GitHub 📄 👈


@csoneson
Member

csoneson commented Sep 8, 2023

Thanks @fboehm!

Peng et al reference has "High-Performance" with capital H and capital P. Is this as it should be? Or should these be lower case h and p?

You're right - I have made them lower-case.

Xie et al reference has "R markdown" - should this be one word?

I believe it should be two words (based on https://bookdown.org/yihui/rmarkdown/). However, the Markdown should also be capitalized (again, based on the same webpage).

I have fixed these two points 🙂

@fboehm

fboehm commented Sep 8, 2023

Thanks so much, @csoneson!

@fboehm

fboehm commented Sep 8, 2023

@editorialbot set 10.5281/zenodo.8298657 as archive

@editorialbot
Collaborator Author

Done! archive is now 10.5281/zenodo.8298657

@fboehm

fboehm commented Sep 8, 2023

@editorialbot set v0.7.7 as version

@editorialbot
Collaborator Author

Done! version is now v0.7.7

@fboehm

fboehm commented Sep 8, 2023

@editorialbot generate pdf

@editorialbot
Collaborator Author

👉📄 Download article proof 📄 View article proof on GitHub 📄 👈

@fboehm

fboehm commented Sep 8, 2023

@editorialbot check references

@editorialbot
Collaborator Author

Reference check summary (note 'MISSING' DOIs are suggestions that need verification):

(Output identical to the reference check above: all 38 DOIs OK, no missing or invalid DOIs.)

@fboehm

fboehm commented Sep 8, 2023

@editorialbot recommend-accept

@editorialbot
Collaborator Author

Attempting dry run of processing paper acceptance...

@editorialbot
Collaborator Author

Reference check summary (note 'MISSING' DOIs are suggestions that need verification):

(Output identical to the reference check above: all 38 DOIs OK, no missing or invalid DOIs.)

@editorialbot
Collaborator Author

👋 @openjournals/bcm-eics, this paper is ready to be accepted and published.

Check final proof 👉📄 Download article

If the paper PDF and the deposit XML files look good in openjournals/joss-papers#4537, then you can now move forward with accepting the submission by compiling again with the command @editorialbot accept

@editorialbot editorialbot added the recommend-accept Papers recommended for acceptance in JOSS. label Sep 8, 2023
@Kevin-Mattheus-Moerman
Member

@csoneson I am the AEiC on this track and am here to help process the last steps. I have checked this review, the paper, your repository, and also the archive. All seems in order, so I will now proceed to accept this work in JOSS.

@Kevin-Mattheus-Moerman
Member

@editorialbot accept

@editorialbot
Collaborator Author

Doing it live! Attempting automated processing of paper acceptance...

@editorialbot
Collaborator Author

Ensure proper citation by uploading a plain text CITATION.cff file to the default branch of your repository.

If using GitHub, a Cite this repository menu will appear in the About section, containing both APA and BibTeX formats. When exported to Zotero using a browser plugin, Zotero will automatically create an entry using the information contained in the .cff file.

You can copy the contents for your CITATION.cff file here:

CITATION.cff

cff-version: "1.2.0"
authors:
- family-names: Soneson
  given-names: Charlotte
  orcid: "https://orcid.org/0000-0003-3833-2169"
- family-names: Iesmantavicius
  given-names: Vytautas
  orcid: "https://orcid.org/0000-0002-2512-9957"
- family-names: Hess
  given-names: Daniel
  orcid: "https://orcid.org/0000-0002-1642-5404"
- family-names: Stadler
  given-names: Michael B
  orcid: "https://orcid.org/0000-0002-2269-4934"
- family-names: Seebacher
  given-names: Jan
  orcid: "https://orcid.org/0000-0002-7858-2720"
contact:
- family-names: Soneson
  given-names: Charlotte
  orcid: "https://orcid.org/0000-0003-3833-2169"
doi: 10.5281/zenodo.8298657
message: If you use this software, please cite our article in the
  Journal of Open Source Software.
preferred-citation:
  authors:
  - family-names: Soneson
    given-names: Charlotte
    orcid: "https://orcid.org/0000-0003-3833-2169"
  - family-names: Iesmantavicius
    given-names: Vytautas
    orcid: "https://orcid.org/0000-0002-2512-9957"
  - family-names: Hess
    given-names: Daniel
    orcid: "https://orcid.org/0000-0002-1642-5404"
  - family-names: Stadler
    given-names: Michael B
    orcid: "https://orcid.org/0000-0002-2269-4934"
  - family-names: Seebacher
    given-names: Jan
    orcid: "https://orcid.org/0000-0002-7858-2720"
  date-published: 2023-09-11
  doi: 10.21105/joss.05750
  issn: 2475-9066
  issue: 89
  journal: Journal of Open Source Software
  publisher:
    name: Open Journals
  start: 5750
  title: "einprot: flexible, easy-to-use, reproducible workflows for
    statistical analysis of quantitative proteomics data"
  type: article
  url: "https://joss.theoj.org/papers/10.21105/joss.05750"
  volume: 8
title: "einprot: flexible, easy-to-use, reproducible workflows for
  statistical analysis of quantitative proteomics data"

If the repository is not hosted on GitHub, a .cff file can still be uploaded to set your preferred citation. Users will be able to manually copy and paste the citation.

Find more information on .cff files here and here.
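Before committing the generated file, it can be worth a quick sanity check. As a minimal sketch (not a substitute for a full CFF validator such as the cffconvert tool), the following Python snippet verifies that every ORCID string in a CITATION.cff excerpt matches the expected pattern; the embedded excerpt and the regular expressions are illustrative assumptions, not part of the bot's output:

```python
import re

# Illustrative excerpt of a CITATION.cff file (not the full file above).
cff_text = """\
cff-version: "1.2.0"
authors:
- family-names: Soneson
  given-names: Charlotte
  orcid: "https://orcid.org/0000-0003-3833-2169"
doi: 10.5281/zenodo.8298657
"""

# An ORCID iD is four groups of four characters; the last character may be X.
ORCID_RE = re.compile(r"^https://orcid\.org/\d{4}-\d{4}-\d{4}-\d{3}[\dX]$")

# Pull out every quoted orcid value and flag any that do not match.
orcids = re.findall(r'orcid: "([^"]+)"', cff_text)
bad = [o for o in orcids if not ORCID_RE.match(o)]
print(f"{len(orcids)} ORCID(s) checked, {len(bad)} malformed")
```

For real-world use, parsing the file with a YAML library and validating against the CFF schema would be more robust than string matching.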

@editorialbot
Collaborator Author

🐘🐘🐘 👉 Toot for this paper 👈 🐘🐘🐘

@editorialbot
Collaborator Author

🚨🚨🚨 THIS IS NOT A DRILL, YOU HAVE JUST ACCEPTED A PAPER INTO JOSS! 🚨🚨🚨

Here's what you must now do:

  1. Check final PDF and Crossref metadata that was deposited 👉 Creating pull request for 10.21105.joss.05750 joss-papers#4545
  2. Wait a couple of minutes, then verify that the paper DOI resolves https://doi.org/10.21105/joss.05750
  3. If everything looks good, then close this review issue.
  4. Party like you just published a paper! 🎉🌈🦄💃👻🤘

Any issues? Notify your editorial technical team...

@editorialbot added the accepted and published (Papers published in JOSS) labels Sep 11, 2023
@Kevin-Mattheus-Moerman
Member

@csoneson congratulations on this JOSS publication!

Thanks for editing @fboehm !

And a special thanks to the reviewers: @AnthonyOfSeattle, @ByrumLab !!

@editorialbot
Collaborator Author

🎉🎉🎉 Congratulations on your paper acceptance! 🎉🎉🎉

If you would like to include a link to your paper from your README use the following code snippets:

Markdown:
[![DOI](https://joss.theoj.org/papers/10.21105/joss.05750/status.svg)](https://doi.org/10.21105/joss.05750)

HTML:
<a style="border-width:0" href="https://doi.org/10.21105/joss.05750">
  <img src="https://joss.theoj.org/papers/10.21105/joss.05750/status.svg" alt="DOI badge" >
</a>

reStructuredText:
.. image:: https://joss.theoj.org/papers/10.21105/joss.05750/status.svg
   :target: https://doi.org/10.21105/joss.05750

This is how it will look in your documentation:

[DOI badge]

We need your help!

The Journal of Open Source Software is a community-run journal and relies upon volunteer effort. If you'd like to support us please consider doing either one (or both) of the following:

@csoneson
Member

Thank you @Kevin-Mattheus-Moerman! I get a 404 on the DOI link (https://doi.org/10.21105/joss.05750), both directly and from the list of papers on the website. Should it resolve by itself, or do you think there's an issue? Thanks!

@Kevin-Mattheus-Moerman
Member

@csoneson I closed this issue after I noticed the DOI resolving. Sometimes one needs to refresh the page (and in some cases it takes longer for me on Firefox). If this remains an issue we can investigate a bit more. It does resolve still/again for me now.

@csoneson
Member

Interesting - yes, I guess a cache issue or something (I refreshed several times, and tried in different windows). In an incognito session, it resolves for me as well 🙂 Sorry for the noise, and thanks again!
