[REVIEW]: MCMCvis: Tools to Visualize, Manipulate, and Summarize MCMC Output #640

Closed
whedon opened this Issue Mar 23, 2018 · 16 comments


whedon commented Mar 23, 2018

Submitting author: @caseyyoungflesh (Casey Youngflesh)
Repository: https://github.com/caseyyoungflesh/MCMCvis
Version: v0.10.0
Editor: @arfon
Reviewer: @jkarreth
Archive: 10.5281/zenodo.1216120

Status

(status badge image)

Status badge code:

HTML: <a href="http://joss.theoj.org/papers/9d048f52995899530cd46a2f443f1612"><img src="http://joss.theoj.org/papers/9d048f52995899530cd46a2f443f1612/status.svg"></a>
Markdown: [![status](http://joss.theoj.org/papers/9d048f52995899530cd46a2f443f1612/status.svg)](http://joss.theoj.org/papers/9d048f52995899530cd46a2f443f1612)

Reviewers and authors:

Please avoid lengthy details of difficulties in the review thread. Instead, please create a new issue in the target repository and link to those issues (especially acceptance-blockers) in the review thread below. (For completists: if the target issue tracker is also on GitHub, linking the review thread in the issue or vice versa will create corresponding breadcrumb trails in the link target.)

Reviewer instructions & questions

@jkarreth, please carry out your review in this issue by updating the checklist below. If you cannot edit the checklist please:

  1. Make sure you're logged in to your GitHub account
  2. Be sure to accept the invite at this URL: https://github.com/openjournals/joss-reviews/invitations

The reviewer guidelines are available here: https://joss.theoj.org/about#reviewer_guidelines. If you have any questions or concerns, please let @arfon know.

Conflict of interest

Code of Conduct

General checks

  • Repository: Is the source code for this software available at the repository url?
  • License: Does the repository contain a plain-text LICENSE file with the contents of an OSI approved software license?
  • Version: Does the release version given match the GitHub release (v0.10.0)?
  • Authorship: Has the submitting author (@caseyyoungflesh) made major contributions to the software? Does the full list of paper authors seem appropriate and complete?

Functionality

  • Installation: Does installation proceed as outlined in the documentation?
  • Functionality: Have the functional claims of the software been confirmed?
  • Performance: If there are any performance claims of the software, have they been confirmed? (If there are no claims, please check off this item.)

Documentation

  • A statement of need: Do the authors clearly state what problems the software is designed to solve and who the target audience is?
  • Installation instructions: Is there a clearly-stated list of dependencies? Ideally these should be handled with an automated package management solution.
  • Example usage: Do the authors include examples of how to use the software (ideally to solve real-world analysis problems)?
  • Functionality documentation: Is the core functionality of the software documented to a satisfactory level (e.g., API method documentation)?
  • Automated tests: Are there automated tests or manual steps described so that the function of the software can be verified?
  • Community guidelines: Are there clear guidelines for third parties wishing to 1) contribute to the software, 2) report issues or problems with the software, and 3) seek support?

Software paper

  • Authors: Does the paper.md file include a list of authors with their affiliations?
  • A statement of need: Do the authors clearly state what problems the software is designed to solve and who the target audience is?
  • References: Do all archival references that should have a DOI list one (e.g., papers, datasets, software)?

whedon commented Mar 23, 2018

Hello human, I'm @whedon. I'm here to help you with some common editorial tasks. @jkarreth it looks like you're currently assigned as the reviewer for this paper 🎉.

⭐️ Important ⭐️

If you haven't already, you should seriously consider unsubscribing from GitHub notifications for this repository (https://github.com/openjournals/joss-reviews). As a reviewer, you're probably currently watching this repository, which means that, under GitHub's default behaviour, you will receive notifications (emails) for all reviews 😿

To fix this do the following two things:

  1. Set yourself as 'Not watching' at https://github.com/openjournals/joss-reviews:

(screenshot: repository watch settings)

  2. You may also like to change your default settings for watching repositories in your GitHub profile: https://github.com/settings/notifications

(screenshot: notification settings)

For a list of things I can do to help you, just type:

@whedon commands

whedon commented Mar 23, 2018

Attempting PDF compilation. Reticulating splines etc...

arfon commented Mar 23, 2018

@jkarreth - please carry out your review in this issue by updating the checklist above and giving feedback in this issue. The reviewer guidelines are available here: http://joss.theoj.org/about#reviewer_guidelines

Any questions/concerns please let me know.


jkarreth commented Mar 26, 2018

@caseyyoungflesh, this is a nifty package to speedily process MCMC output. I could absolutely see myself using it, having hacked together similar functions for my own purposes.

The package does well what it is intended to do and it covers a main step of the Bayesian workflow: process MCMC output.

Major comments

  • Please note issue #3: Functions don't seem to process stanfit objects automatically
  • It may be useful to distinguish {MCMCvis} from the {tidybayes} package, or, even better, join forces. The latter can perform some, but not all, of the functions of {MCMCvis}, but also provides a wider range of processing and plotting functions.

Minor comments

Two minor comments pertinent to the review criteria:

  • Example usage: it might be nice if the vignette demonstrated (at least briefly) how output from any of the samplers mentioned in the documentation (Stan, JAGS, ...) can directly be passed to the functions of this package.
  • Automated tests: there are no tests, but I can't think of a strong need. Perhaps a test comparing the output from MCMCsummary to using any base-R functions (mean() etc.) or MCMC diagnostics on MCMC output. For instance, one could test whether the relevant output of MCMCsummary is equivalent to calling coda::gelman.diag - but that function is already used in MCMCsummary itself, so the test may not really be necessary.
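As a sketch of the kind of test suggested above (hypothetical code, not from the package; the `draws` matrix stands in for real MCMC output, and the commented `fit` object is purely illustrative), the reference values could be computed with base R and then compared against the corresponding `MCMCsummary()` columns:

```r
# Sketch only: base-R reference summaries for simulated MCMC draws.
set.seed(42)
# 1000 iterations x 3 chains of draws for a single parameter, flattened
draws <- matrix(rnorm(3000, mean = 2, sd = 0.5), nrow = 1000, ncol = 3)

ref_mean <- mean(draws)                             # posterior mean
ref_sd   <- sd(draws)                               # posterior SD
ref_ci   <- quantile(draws, probs = c(0.025, 0.975))  # 95% credible interval

# In a testthat test, these reference values would then be compared against
# the matching columns of MCMCsummary() output for the same parameter, e.g.
# (hypothetical fitted-model object 'fit'):
# testthat::expect_equal(unname(MCMCsummary(fit)[1, "mean"]), ref_mean)
```

The point of such a test is less to re-derive `coda`'s diagnostics than to catch input-handling regressions (e.g. a sampler's output being silently mis-parsed), which the base-R reference values would expose immediately.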

caseyyoungflesh commented Mar 29, 2018

@jkarreth Many thanks. I'll respond here once these issues are addressed.

Responses to major comments

  • Thanks for pointing that bug out. I'll have it fixed shortly.
  • 'tidybayes' appears to be an excellent package that I was not aware of until fairly recently. Its strengths lie in its integration of Bayesian model output with ggplot. I think the area where MCMCvis excels beyond what tidybayes has to offer is summarizing and manipulating Bayesian model output. As you say, the two packages both make analyzing Bayesian model output easier, though the emphasis differs. I can see the additional plotting functionality of tidybayes to be extremely useful. Combining these two packages would certainly make for a useful 'one stop shop' for dealing with Bayesian model output. I think there may be an opportunity for merging these packages in the future and I appreciate the suggestion.

Responses to minor comments

  • While all functions have examples, I agree that creating several toy models using different packages to show that the package can handle different output would be helpful for users. I will update the vignette accordingly.
  • I think including some tests will make future development easier. I will include these with the other suggested improvements.

caseyyoungflesh commented Mar 31, 2018

@jkarreth

Package version 0.10.0 is now on GitHub.

  • This version fixes the stanfit bug identified above. stanfit objects are now processed correctly.
  • I have included toy models fit with JAGS and Stan in the vignette to demonstrate how output from different samplers can be input into the MCMCvis functions directly.
  • I have also included automated tests to make sure that different input types are properly processed (to avoid the previously noted stanfit issue).
  • This version also includes several other features, as noted in NEWS.md.

MCMCvis version 0.10.0 has passed all CRAN checks. A new version will be submitted to CRAN soon. The most recent GitHub version can be installed with:

devtools::install_github('caseyyoungflesh/MCMCvis', build_vignettes = TRUE)
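Once installed, a minimal usage sketch might look like the following (`MCMCsummary()` and `MCMCtrace()` are exported MCMCvis functions; the simulated `draws` matrix is purely illustrative, and the calls are guarded so the snippet degrades gracefully if the package is not installed):

```r
# Illustrative only: summarize simulated posterior draws with MCMCvis.
set.seed(1)
draws <- matrix(rnorm(2000), nrow = 1000, ncol = 2,
                dimnames = list(NULL, c("alpha", "beta")))

if (requireNamespace("MCMCvis", quietly = TRUE)) {
  MCMCvis::MCMCsummary(draws)             # numerical summaries per parameter
  MCMCvis::MCMCtrace(draws, pdf = FALSE)  # trace/density plots to the active device
}
```

In real use, `draws` would instead be a `stanfit`, `mcmc.list`, or similar object produced by one of the supported samplers, passed to the same functions directly.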


jkarreth commented Apr 5, 2018

Version 0.10.0 looks great to me - all issues that I mentioned above are addressed.

I can check off the remaining three boxes in this review once 0.10.0 is on CRAN.

Congratulations on a nice package!


caseyyoungflesh commented Apr 7, 2018

Thank you very much @jkarreth! Version 0.10.0 is now on CRAN (though all binaries may not be built yet).


jkarreth commented Apr 10, 2018

Looks great to me; all boxes are checked off and my review is complete. Congrats!

arfon added the accepted label Apr 10, 2018


arfon commented Apr 10, 2018

Excellent. Thanks @jkarreth.

@caseyyoungflesh - At this point could you make an archive of the reviewed software in Zenodo/figshare/other service and update this thread with the DOI of the archive? I can then move forward with accepting the submission.


caseyyoungflesh commented Apr 10, 2018

@arfon thanks!

DOI for all versions: 10.5281/zenodo.1216119

DOI for v0.10.0: 10.5281/zenodo.1216120


arfon commented Apr 10, 2018

@whedon set 10.5281/zenodo.1216120 as archive


whedon commented Apr 10, 2018

OK. 10.5281/zenodo.1216120 is the archive.


arfon commented Apr 10, 2018

@jkarreth - many thanks for your review here.

@caseyyoungflesh - your paper is now accepted into JOSS and your DOI is https://doi.org/10.21105/joss.00640 ⚡️ 🚀 💥

arfon closed this Apr 10, 2018


whedon commented Apr 10, 2018

🎉🎉🎉 Congratulations on your paper acceptance! 🎉🎉🎉

If you would like to include a link to your paper from your README use the following code snippet:

[![DOI](http://joss.theoj.org/papers/10.21105/joss.00640/status.svg)](https://doi.org/10.21105/joss.00640)

This is how it will look in your documentation:

(DOI badge image)

We need your help!

Journal of Open Source Software is a community-run journal and relies upon volunteer effort. If you'd like to support us, please consider doing either one (or both) of the following:
