
[REVIEW]: rbmi: A R package for standard and reference-based multiple imputation methods #4251

Closed
editorialbot opened this issue Mar 18, 2022 · 73 comments
Labels
accepted · C++ · published (Papers published in JOSS) · R · recommend-accept (Papers recommended for acceptance in JOSS) · review · Stan

Comments

@editorialbot
Collaborator

editorialbot commented Mar 18, 2022

Submitting author: @nociale (Alessandro Noci)
Repository: https://github.com/insightsengineering/rbmi
Branch with paper.md (empty if default branch):
Version: v1.1.4
Editor: @fboehm
Reviewers: @DanielRivasMD, @JoranTiU
Archive: 10.5281/zenodo.6632154

Status

[status badge]

Status badge code:

HTML: <a href="https://joss.theoj.org/papers/c23a2dbed03cd0c8a1e790de9b078a7a"><img src="https://joss.theoj.org/papers/c23a2dbed03cd0c8a1e790de9b078a7a/status.svg"></a>
Markdown: [![status](https://joss.theoj.org/papers/c23a2dbed03cd0c8a1e790de9b078a7a/status.svg)](https://joss.theoj.org/papers/c23a2dbed03cd0c8a1e790de9b078a7a)

Reviewers and authors:

Please avoid lengthy details of difficulties in the review thread. Instead, please create a new issue in the target repository and link to those issues (especially acceptance-blockers) by leaving comments in the review thread below. (For completists: if the target issue tracker is also on GitHub, linking the review thread in the issue or vice versa will create corresponding breadcrumb trails in the link target.)

Reviewer instructions & questions

@DanielRivasMD & @JoranTiU, your review will be checklist-based. Each of you will have a separate checklist that you should update when carrying out your review.
First, you need to run this command in a separate comment to create the checklist:

@editorialbot generate my checklist

The reviewer guidelines are available here: https://joss.readthedocs.io/en/latest/reviewer_guidelines.html. If you have any questions or concerns, please let @fboehm know.

Please start on your review when you are able, and be sure to complete it within the next six weeks at the very latest.

Checklists

📝 Checklist for @JoranTiU

📝 Checklist for @DanielRivasMD

editorialbot added the C++, R, review, Stan, and waitlisted (Submissions in the JOSS backlog due to reduced service mode) labels on Mar 18, 2022
@editorialbot
Collaborator Author

Hello humans, I'm @editorialbot, a robot that can help you with some common editorial tasks.

For a list of things I can do to help you, just type:

@editorialbot commands

For example, to regenerate the paper pdf after making changes in the paper's md or bib files, type:

@editorialbot generate pdf

@editorialbot
Collaborator Author

Software report:

github.com/AlDanial/cloc v 1.88  T=0.20 s (472.0 files/s, 134722.0 lines/s)
-------------------------------------------------------------------------------
Language                     files          blank        comment           code
-------------------------------------------------------------------------------
R                               65           3679           3808          12491
HTML                             3            107              6           2847
TeX                              2            101              0            768
Markdown                         5            187              0            652
Rmd                              3            395            761            452
YAML                            12            105             15            446
SAS                              1             36             22             81
JSON                             1              0              0             60
Dockerfile                       1              8              0             47
Bourne Shell                     1              4              1             34
C/C++ Header                     1              0              1              0
-------------------------------------------------------------------------------
SUM:                            95           4622           4614          17878
-------------------------------------------------------------------------------


gitinspector failed to run statistical information for the repository

@editorialbot
Collaborator Author

Wordcount for paper.md is 940

@editorialbot
Collaborator Author

Reference check summary (note 'MISSING' DOIs are suggestions that need verification):

OK DOIs

- None

MISSING DOIs

- 10.1093/biomet/86.4.948 may be a valid DOI for title: Miscellanea. Small-sample degrees of freedom with multiple imputation
- 10.1177/0962280220932189 may be a valid DOI for title: Bootstrap inference for multiple imputation under uncongeniality and misspecification
- 10.1002/(sici)1097-0258(20000515)19:9<1141::aid-sim479>3.0.co;2-f may be a valid DOI for title: Bootstrap confidence intervals: when, which, what? A practical guide for medical statisticians
- 10.1080/10543406.2013.834911 may be a valid DOI for title: Analysis of longitudinal trials with protocol deviation: a framework for relevant, accessible assumptions, and inference via multiple imputation
- 10.1111/rssa.12423 may be a valid DOI for title: Information-anchored sensitivity analysis: Theory and application
- 10.1002/sim.8569 may be a valid DOI for title: Sensitivity analysis for clinical trials with missing continuous outcome data using controlled multiple imputation: a practical guide
- 10.1002/pst.2019 may be a valid DOI for title: The attributable estimand: a new approach to account for intercurrent events
- 10.1080/07474930008800459 may be a valid DOI for title: Bootstrap tests: How many bootstraps?
- 10.1080/19466315.2020.1736141 may be a valid DOI for title: The Use of a Variable Representing Compliance Improves Accuracy of Estimation of the Effect of Treatment Allocation Regardless of Discontinuation in Trials with Incomplete Follow-up
- 10.1111/j.1540-5907.2010.00447.x may be a valid DOI for title: What to do about missing values in time-series cross-section data
- 10.1214/aos/1043351257 may be a valid DOI for title: A unified jackknife theory for empirical best prediction with M-estimation
- 10.1080/10543406.2015.1094810 may be a valid DOI for title: On analysis of longitudinal clinical trials with missing data using reference-based imputation
- 10.1177/009286150804200402 may be a valid DOI for title: Recommendations for the primary analysis of continuous endpoints in longitudinal clinical trials
- 10.1177/2168479019836979 may be a valid DOI for title: Aligning estimators with estimands in clinical trials: putting the ICH E9 (R1) guidelines into practice
- 10.1093/biomet/58.3.545 may be a valid DOI for title: Recovery of inter-block information when block sizes are unequal
- 10.1080/19466315.2019.1689845 may be a valid DOI for title: Aligning Treatment Policy Estimands and Estimators—A Simulation Study in Alzheimer’s Disease
- 10.1080/10543406.2014.928306 may be a valid DOI for title: Comment on “Analysis of longitudinal trials with protocol deviations: A framework for relevant, accessible assumptions, and inference via multiple imputation,” by Carpenter, Roger, and Kenward
- 10.1080/10543401003777995 may be a valid DOI for title: MMRM versus MI in dealing with missing data—a comparison based on 25 NDA data sets
- 10.1177/0962280216683570 may be a valid DOI for title: Should multiple imputation be the method of choice for handling missing data in randomized trials?
- 10.1111/biom.12702 may be a valid DOI for title: On the multiple imputation variance estimator for control-based and delta-adjusted pattern mixture models
- 10.1214/20-sts793 may be a valid DOI for title: Maximum likelihood multiple imputation: Faster imputations and consistent standard errors without posterior draws
- 10.1093/biomet/85.4.935 may be a valid DOI for title: Large-sample theory for parametric multiple imputation procedures
- 10.1080/10543406.2019.1684308 may be a valid DOI for title: A causal modelling framework for reference-based imputation and tipping point analysis in clinical trials with quantitative outcome

INVALID DOIs

- None

@fboehm

fboehm commented Mar 18, 2022

@DanielRivasMD and @JoranTiU - Please find the instructions for getting started with the reviews above. The first task is to generate the checklists with the syntax shown there. Please let me know if you have any questions :)

@editorialbot
Collaborator Author

👉📄 Download article proof 📄 View article proof on GitHub 📄 👈

@fboehm

fboehm commented Mar 23, 2022

@DanielRivasMD and @JoranTiU - please feel free to generate your review checklists per the above syntax. Please let me know if you have any questions about this.
Thanks!!

@JoranTiU

JoranTiU commented Mar 24, 2022

Review checklist for @JoranTiU

Conflict of interest

  • I confirm that I have read the JOSS conflict of interest (COI) policy and that: I have no COIs with reviewing this work or that any perceived COIs have been waived by JOSS for the purpose of this review.

Code of Conduct

General checks

  • Repository: Is the source code for this software available at the https://github.com/insightsengineering/rbmi?
  • License: Does the repository contain a plain-text LICENSE file with the contents of an OSI approved software license?
  • Contribution and authorship: Has the submitting author (@nociale) made major contributions to the software? Does the full list of paper authors seem appropriate and complete?
  • Substantial scholarly effort: Does this submission meet the scope eligibility described in the JOSS guidelines?

Functionality

  • Installation: Does installation proceed as outlined in the documentation?
  • Functionality: Have the functional claims of the software been confirmed?
  • Performance: If there are any performance claims of the software, have they been confirmed? (If there are no claims, please check off this item.)

Documentation

  • A statement of need: Do the authors clearly state what problems the software is designed to solve and who the target audience is?
  • Installation instructions: Is there a clearly-stated list of dependencies? Ideally these should be handled with an automated package management solution.
  • Example usage: Do the authors include examples of how to use the software (ideally to solve real-world analysis problems)?
  • Functionality documentation: Is the core functionality of the software documented to a satisfactory level (e.g., API method documentation)?
  • Automated tests: Are there automated tests or manual steps described so that the functionality of the software can be verified?
  • Community guidelines: Are there clear guidelines for third parties wishing to 1) Contribute to the software 2) Report issues or problems with the software 3) Seek support

Software paper

  • Summary: Has a clear description of the high-level functionality and purpose of the software for a diverse, non-specialist audience been provided?
  • A statement of need: Does the paper have a section titled 'Statement of Need' that clearly states what problems the software is designed to solve and who the target audience is?
  • State of the field: Do the authors describe how this software compares to other commonly-used packages?
  • Quality of writing: Is the paper well written (i.e., it does not require editing for structure, language, or writing quality)?
  • References: Is the list of references complete, and is everything cited appropriately that should be cited (e.g., papers, datasets, software)? Do references in the text use the proper citation syntax?

@DanielRivasMD

@fboehm I have checked the points on the checklist above, and I imagine there is a way to generate this list and mark it.

@fboehm

fboehm commented Apr 2, 2022

@DanielRivasMD - please use the command

@editorialbot generate my checklist

Please let me know if you encounter difficulties in creating the checklist. Thanks again!

@fboehm

fboehm commented Apr 5, 2022

@DanielRivasMD & @JoranTiU - how is the review going? Is there anything that I can help with? Thanks again!

@DanielRivasMD

@editorialbot

@editorialbot
Collaborator Author

I'm sorry human, I don't understand that. You can see what commands I support by typing:

@editorialbot commands

@DanielRivasMD

@editorialbot commands

@editorialbot
Collaborator Author

Hello @DanielRivasMD, here are the things you can ask me to do:


# List all available commands
@editorialbot commands

# Get a list of all editors' GitHub handles
@editorialbot list editors

# Check the references of the paper for missing DOIs
@editorialbot check references

# Perform checks on the repository
@editorialbot check repository

# Add a checklist for the reviewer issuing this command
@editorialbot generate my checklist

# Set a value for branch
@editorialbot set joss-paper as branch

# Generates the pdf paper
@editorialbot generate pdf

# Get a link to the complete list of reviewers
@editorialbot list reviewers

@DanielRivasMD

DanielRivasMD commented Apr 10, 2022

Review checklist for @DanielRivasMD

Conflict of interest

  • I confirm that I have read the JOSS conflict of interest (COI) policy and that: I have no COIs with reviewing this work or that any perceived COIs have been waived by JOSS for the purpose of this review.

Code of Conduct

General checks

  • Repository: Is the source code for this software available at the https://github.com/insightsengineering/rbmi?
  • License: Does the repository contain a plain-text LICENSE file with the contents of an OSI approved software license?
  • Contribution and authorship: Has the submitting author (@nociale) made major contributions to the software? Does the full list of paper authors seem appropriate and complete?
  • Substantial scholarly effort: Does this submission meet the scope eligibility described in the JOSS guidelines?

Functionality

  • Installation: Does installation proceed as outlined in the documentation?
  • Functionality: Have the functional claims of the software been confirmed?
  • Performance: If there are any performance claims of the software, have they been confirmed? (If there are no claims, please check off this item.)

Documentation

  • A statement of need: Do the authors clearly state what problems the software is designed to solve and who the target audience is?
  • Installation instructions: Is there a clearly-stated list of dependencies? Ideally these should be handled with an automated package management solution.
  • Example usage: Do the authors include examples of how to use the software (ideally to solve real-world analysis problems)?
  • Functionality documentation: Is the core functionality of the software documented to a satisfactory level (e.g., API method documentation)?
  • Automated tests: Are there automated tests or manual steps described so that the functionality of the software can be verified?
  • Community guidelines: Are there clear guidelines for third parties wishing to 1) Contribute to the software 2) Report issues or problems with the software 3) Seek support

Software paper

  • Summary: Has a clear description of the high-level functionality and purpose of the software for a diverse, non-specialist audience been provided?
  • A statement of need: Does the paper have a section titled 'Statement of Need' that clearly states what problems the software is designed to solve and who the target audience is?
  • State of the field: Do the authors describe how this software compares to other commonly-used packages?
  • Quality of writing: Is the paper well written (i.e., it does not require editing for structure, language, or writing quality)?
  • References: Is the list of references complete, and is everything cited appropriately that should be cited (e.g., papers, datasets, software)? Do references in the text use the proper citation syntax?

@DanielRivasMD

@fboehm Hi, my apologies that it took so long to get back to this. I have now taken a deeper look into the source code (I had already read the paper and looked at the repo superficially) and tested the functionality. Overall, I think it is a pretty complete package, extensively tested, with good documentation and a use case attached as a dataset (indicated in the checklist that I hope you can see above ^). I have only three issues with this project, and I am not entirely certain whether they fall under the JOSS criteria:

  1. Community / contributor guidelines: there is only a link in the README for opening issues, but no guidance on format or further documentation.
  2. The paper does not present the state of the field or a comparison with other software.
  3. In order to use and test the package, its dependencies must be installed. I would have liked this to be documented at least, if not automated (see the installation sketch after this comment).

Please do not hesitate to come back to me if any point is unclear or if further issues need to be discussed.
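For point 3, something along the following lines could be added to the installation documentation. This is only a suggested sketch using standard R tooling, not the package's own documented procedure:

# Install the released version together with its declared dependencies
# (assumes a CRAN release is available)
install.packages("rbmi", dependencies = TRUE)

# Or install the development version from GitHub
# (assumes the 'remotes' package is already installed)
remotes::install_github("insightsengineering/rbmi", dependencies = TRUE)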

@fboehm

fboehm commented Apr 10, 2022

Thank you, @DanielRivasMD ! I think that you make some good points in the comment. @nociale, please address the points that @DanielRivasMD made above:

  1. Community / contributor guidelines: there is only a link in the README for opening issues, but no guidance on format or further documentation.
  2. The paper does not present the state of the field or a comparison with other software.
  3. In order to use and test the package, its dependencies must be installed. I would have liked this to be documented at least, if not automated.

Thank you!

@JoranTiU

@DanielRivasMD & @JoranTiU - how is the review going? Is there anything that I can help with? Thanks again!

It's going well :). Should finish this week :)

@JoranTiU

@fboehm Hi, I agree with @DanielRivasMD: it is a nice and pretty complete package with good documentation. The vignettes are also very helpful. In addition to the points raised by @DanielRivasMD (i.e., (i) there is only a link in the README for opening issues, but no guidance on format or further documentation; (ii) the paper does not present the state of the field or a comparison with other software; and (iii) dependencies must be installed in order to use and test the package), I had the following minor remarks:

  1. The examples provided with the functions can’t be directly run by the user (e.g., in the description of the “impute” function the example can’t be executed because the drawobj used in the example does not exist). This makes the examples less useful and also makes it harder to verify that the package is working properly (a sketch of what a self-contained example could look like follows this list).
  2. In the description of the “draws” function: “The imputation model is a mixed effects model repeated measures (MMRM) model.” Remove the first instance of the word model.
  3. In the description of the “draws” function: “It can be fit using frequentist maximum likelihood (ML) or restricted ML (REML)”. REML is also frequentist.
  4. In the quickstart vignette: “The analysis model is an ANCOVA model with the treatment group as the primary covariate and adjustment for the baseline HAMD17 score.” This suggests group is the covariate while it is the factor.
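To illustrate point 1, below is a minimal sketch of what a self-contained, runnable example for draws() and impute() could look like. The simulated data, argument names, and method choice are my own illustrative assumptions based on the package documentation rather than the authors' example code, so please check them against the current API:

library(rbmi)

# Simulate a small dataset with one row per subject per visit and some
# missing outcomes at the final visit (purely hypothetical data)
set.seed(101)
n_subjects <- 40
dat <- expand.grid(
  id    = factor(seq_len(n_subjects)),
  visit = factor(c("V1", "V2", "V3"))
)
dat$group    <- factor(ifelse(as.integer(dat$id) <= n_subjects / 2, "TRT", "PBO"))
dat$baseline <- rnorm(n_subjects)[as.integer(dat$id)]
dat$outcome  <- dat$baseline + as.integer(dat$visit) +
  0.5 * (dat$group == "TRT") + rnorm(nrow(dat))
dat$outcome[dat$visit == "V3" & as.integer(dat$id) %% 5 == 0] <- NA

# Tell rbmi which columns play which role in the imputation model
vars <- set_vars(
  subjid     = "id",
  visit      = "visit",
  group      = "group",
  outcome    = "outcome",
  covariates = c("baseline*visit", "group*visit")
)

# Step 1: fit the MMRM imputation model and obtain the parameter estimates
# (data_ice could also be supplied here to flag intercurrent events and
# per-patient imputation strategies such as "JR"; omitted for brevity)
drawObj <- draws(
  data   = dat,
  vars   = vars,
  method = method_condmean(type = "jackknife")
)

# Step 2: impute the missing outcomes, using the placebo arm as the
# reference group for both arms
imputeObj <- impute(
  drawObj,
  references = c("TRT" = "PBO", "PBO" = "PBO")
)

# analyse() and pool() would then complete the workflow described in the
# quickstart vignette.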

Overall though, as I said, I like the package :).

Best,
Joran

@fboehm

fboehm commented Apr 14, 2022

@nociale - The reviews for your package are very positive. Please make the changes suggested or discuss them here in the thread. Thanks again!

@fboehm

fboehm commented Apr 14, 2022

Thank you, @JoranTiU and @DanielRivasMD for your timely and thorough reviews. Once the suggestions are implemented, I'll ask you to verify that you're satisfied with the updates.

@JoranTiU

You're very welcome @fboehm! 😊.
And great! 😊

@editorialbot
Collaborator Author

Done! Archive is now 10.5281/zenodo.6632154

@fboehm

fboehm commented Jun 14, 2022

@editorialbot set v1.1.4 as version

@editorialbot
Collaborator Author

Done! version is now v1.1.4

@fboehm

fboehm commented Jun 14, 2022

@editorialbot recommend-accept

@editorialbot
Collaborator Author

Attempting dry run of processing paper acceptance...

@editorialbot
Collaborator Author

Reference check summary (note 'MISSING' DOIs are suggestions that need verification):

OK DOIs

- 10.1080/10543406.2013.834911 is OK
- 10.1002/sim.8569 is OK
- 10.1002/9781119013563 is OK
- 10.1214/20-STS793 is OK
- 10.1002/pst.2234 is OK

MISSING DOIs

- None

INVALID DOIs

- None

@editorialbot
Collaborator Author

👋 @openjournals/joss-eics, this paper is ready to be accepted and published.

Check final proof 👉 openjournals/joss-papers#3277

If the paper PDF and the deposit XML files look good in openjournals/joss-papers#3277, then you can now move forward with accepting the submission by compiling again with the command @editorialbot accept

editorialbot added the recommend-accept (Papers recommended for acceptance in JOSS) label on Jun 14, 2022
@arfon
Member

arfon commented Jun 14, 2022

@nociale – I made a couple of minor tweaks to the paper here, could you merge them please? insightsengineering/rbmi#367

@nociale

nociale commented Jun 15, 2022

@arfon - Sure, I have just merged.

arfon removed the waitlisted (Submissions in the JOSS backlog due to reduced service mode) label on Jun 15, 2022
@arfon
Member

arfon commented Jun 15, 2022

@editorialbot recommend-accept

@editorialbot
Collaborator Author

Attempting dry run of processing paper acceptance...

@editorialbot
Collaborator Author

Reference check summary (note 'MISSING' DOIs are suggestions that need verification):

OK DOIs

- 10.1080/10543406.2013.834911 is OK
- 10.1002/sim.8569 is OK
- 10.1002/9781119013563 is OK
- 10.1214/20-STS793 is OK
- 10.1002/pst.2234 is OK

MISSING DOIs

- None

INVALID DOIs

- None

@editorialbot
Collaborator Author

👋 @openjournals/joss-eics, this paper is ready to be accepted and published.

Check final proof 👉 openjournals/joss-papers#3282

If the paper PDF and the deposit XML files look good in openjournals/joss-papers#3282, then you can now move forward with accepting the submission by compiling again with the command @editorialbot accept

@arfon
Member

arfon commented Jun 15, 2022

@editorialbot accept

@editorialbot
Collaborator Author

Doing it live! Attempting automated processing of paper acceptance...

@editorialbot
Collaborator Author

🐦🐦🐦 👉 Tweet for this paper 👈 🐦🐦🐦

@editorialbot
Collaborator Author

🚨🚨🚨 THIS IS NOT A DRILL, YOU HAVE JUST ACCEPTED A PAPER INTO JOSS! 🚨🚨🚨

Here's what you must now do:

  1. Check final PDF and Crossref metadata that was deposited 👉 Creating pull request for 10.21105.joss.04251 joss-papers#3283
  2. Wait a couple of minutes, then verify that the paper DOI resolves https://doi.org/10.21105/joss.04251
  3. If everything looks good, then close this review issue.
  4. Party like you just published a paper! 🎉🌈🦄💃👻🤘

Any issues? Notify your editorial technical team...

editorialbot added the accepted and published (Papers published in JOSS) labels on Jun 15, 2022
@arfon
Member

arfon commented Jun 15, 2022

@DanielRivasMD, @JoranTiU – many thanks for your reviews here and to @fboehm for editing this submission! JOSS relies upon the volunteer effort of people like you and we simply wouldn't be able to do this without you ✨

@nociale – your paper is now accepted and published in JOSS ⚡🚀💥

arfon closed this as completed on Jun 15, 2022
@editorialbot
Collaborator Author

🎉🎉🎉 Congratulations on your paper acceptance! 🎉🎉🎉

If you would like to include a link to your paper from your README use the following code snippets:

Markdown:
[![DOI](https://joss.theoj.org/papers/10.21105/joss.04251/status.svg)](https://doi.org/10.21105/joss.04251)

HTML:
<a style="border-width:0" href="https://doi.org/10.21105/joss.04251">
  <img src="https://joss.theoj.org/papers/10.21105/joss.04251/status.svg" alt="DOI badge" >
</a>

reStructuredText:
.. image:: https://joss.theoj.org/papers/10.21105/joss.04251/status.svg
   :target: https://doi.org/10.21105/joss.04251

This is how it will look in your documentation:

[DOI badge]

We need your help!

The Journal of Open Source Software is a community-run journal and relies upon volunteer effort. If you'd like to support us, please consider doing either one (or both) of the following:

@nociale

nociale commented Jun 15, 2022

That is really great news! :)

@arfon, @fboehm Thanks a lot for helping with the review and publication process!
@DanielRivasMD @JoranTiU Again, thanks for the review and for the positive feedback!

@arfon, @fboehm May I ask you about one little detail: the web page title as it currently appears in a Google search contains a typo. It is "rbmi: AR package for standard and reference-based multiple ...", while it should be "rbmi: A R package for standard and reference-based multiple ...", as it correctly appears on the web page itself. Would it be possible to have this corrected? Thanks!

@fboehm

fboehm commented Jun 16, 2022

@nociale - Can you point me to where you're seeing the erroneous title? Is there a specific URL? Thanks again!

@nociale

nociale commented Jun 21, 2022

@fboehm - The error is in the title I see on Google when I try to search for the paper (e.g. here). Not a big problem of course!

@fboehm

fboehm commented Jun 22, 2022

Thanks for the clarification, @nociale! I'm not sure what to do about that. When I clicked on your link, the top hit had it spelled correctly, while the second had the error that you mentioned. @arfon - do you have suggestions about this?
