[REVIEW]: treesiftr: An R package and server for viewing phylogenetic trees and data #35

Open
whedon opened this Issue Nov 4, 2018 · 14 comments

@whedon
Collaborator

whedon commented Nov 4, 2018

Submitting author: @wrightaprilm (April Wright)
Repository: https://github.com/wrightaprilm/treesiftr
Version: v1.0.0
Editor: @juanklopper
Reviewer: @ethanwhite, @rachelss
Archive: Pending

Status


Status badge code:

HTML: <a href="http://jose.theoj.org/papers/82b5b8c6223ec0735602bbf927045913"><img src="http://jose.theoj.org/papers/82b5b8c6223ec0735602bbf927045913/status.svg"></a>
Markdown: [![status](http://jose.theoj.org/papers/82b5b8c6223ec0735602bbf927045913/status.svg)](http://jose.theoj.org/papers/82b5b8c6223ec0735602bbf927045913)

Reviewers and authors:

Please avoid lengthy details of difficulties in the review thread. Instead, please create a new issue in the target repository and link to those issues (especially acceptance-blockers) in the review thread below. (For completists: if the target issue tracker is also on GitHub, linking the review thread in the issue or vice versa will create corresponding breadcrumb trails in the link target.)

Reviewer instructions & questions

@ethanwhite & @rachelss, please carry out your review in this issue by updating the checklist below. If you cannot edit the checklist please:

  1. Make sure you're logged in to your GitHub account
  2. Be sure to accept the invite at this URL: https://github.com/openjournals/jose-reviews/invitations

The reviewer guidelines are available here: https://jose.theoj.org/about#reviewer_guidelines. Any questions/concerns please let @juanklopper know.

Review checklist for @ethanwhite

Conflict of interest

Code of Conduct

General checks

  • Repository: Is the source for this learning module available at the repository url?
  • License: Does the repository contain a plain-text LICENSE file with the contents of a standard license? (OSI-approved for code, Creative Commons for content)
  • Version: Does the release version given match the repository release (v1.0.0)?
  • Authorship: Has the submitting author (@wrightaprilm) made visible contributions to the module? Does the full list of authors seem appropriate and complete?

Documentation

  • A statement of need: Do the authors clearly state the need for this module and who the target audience is?
  • Installation instructions: Is there a clearly stated list of dependencies?
  • Usage: Does the documentation explain how someone would adopt the module, and include examples of how to use it?
  • Community guidelines: Are there clear guidelines for third parties wishing to 1) Contribute to the module 2) Report issues or problems with the module 3) Seek support

Pedagogy / Instructional design (Work-in-progress: reviewers, please comment!)

  • Learning objectives: Does the module make the learning objectives plainly clear? (We don't require explicitly written learning objectives; only that they be evident from content and design.)
  • Content scope and length: Is the content substantial for learning a given topic? Is the length of the module appropriate?
  • Pedagogy: Does the module seem easy to follow? Does it observe guidance on cognitive load? (working memory limits of 7 +/- 2 chunks of information)
  • Content quality: Is the writing of good quality, concise, engaging? Are the code components well crafted? Does the module seem complete?
  • Instructional design: Is the instructional design deliberate and apparent? For example, exploit worked-example effects; effective multi-media use; low extraneous cognitive load.

JOSE paper

  • Authors: Does the paper.md file include a list of authors with their affiliations?
  • A statement of need: Does the paper clearly state the need for this module and who the target audience is?
  • Description: Does the paper describe the learning materials and sequence?
  • Does it describe how it has been used in the classroom or other settings, and how someone might adopt it?
  • Could someone else teach with this module, given the right expertise?
  • Does the paper tell the "story" of how the authors came to develop it, or what their expertise is?
  • References: Do all archival references that should have a DOI list one (e.g., papers, datasets, software)?

Review checklist for @rachelss

Conflict of interest

Code of Conduct

General checks

  • Repository: Is the source for this learning module available at the repository url?
  • License: Does the repository contain a plain-text LICENSE file with the contents of a standard license? (OSI-approved for code, Creative Commons for content)
  • Version: Does the release version given match the repository release (v1.0.0)?
  • Authorship: Has the submitting author (@wrightaprilm) made visible contributions to the module? Does the full list of authors seem appropriate and complete?

Documentation

  • A statement of need: Do the authors clearly state the need for this module and who the target audience is?
  • Installation instructions: Is there a clearly stated list of dependencies?
  • Usage: Does the documentation explain how someone would adopt the module, and include examples of how to use it?
  • Community guidelines: Are there clear guidelines for third parties wishing to 1) Contribute to the module 2) Report issues or problems with the module 3) Seek support

Pedagogy / Instructional design (Work-in-progress: reviewers, please comment!)

  • Learning objectives: Does the module make the learning objectives plainly clear? (We don't require explicitly written learning objectives; only that they be evident from content and design.)
  • Content scope and length: Is the content substantial for learning a given topic? Is the length of the module appropriate?
  • Pedagogy: Does the module seem easy to follow? Does it observe guidance on cognitive load? (working memory limits of 7 +/- 2 chunks of information)
  • Content quality: Is the writing of good quality, concise, engaging? Are the code components well crafted? Does the module seem complete?
  • Instructional design: Is the instructional design deliberate and apparent? For example, exploit worked-example effects; effective multi-media use; low extraneous cognitive load.

JOSE paper

  • Authors: Does the paper.md file include a list of authors with their affiliations?
  • A statement of need: Does the paper clearly state the need for this module and who the target audience is?
  • Description: Does the paper describe the learning materials and sequence?
  • Does it describe how it has been used in the classroom or other settings, and how someone might adopt it?
  • Could someone else teach with this module, given the right expertise?
  • Does the paper tell the "story" of how the authors came to develop it, or what their expertise is?
  • References: Do all archival references that should have a DOI list one (e.g., papers, datasets, software)?
@whedon

Collaborator

whedon commented Nov 4, 2018

Hello human, I'm @whedon, a robot that can help you with some common editorial tasks. @ethanwhite, it looks like you're currently assigned as the reviewer for this paper 🎉.

⭐️ Important ⭐️

If you haven't already, you should seriously consider unsubscribing from GitHub notifications for this (https://github.com/openjournals/jose-reviews) repository. As a reviewer, you're probably currently watching this repository, which means that, with GitHub's default behaviour, you will receive notifications (emails) for all reviews 😿

To fix this do the following two things:

  1. Set yourself as 'Not watching' at https://github.com/openjournals/jose-reviews:

watching

  2. You may also like to change your default settings for watching repositories in your GitHub profile: https://github.com/settings/notifications

notifications

For a list of things I can do to help you, just type:

@whedon commands
@whedon

Collaborator

whedon commented Nov 4, 2018

Attempting PDF compilation. Reticulating splines etc...
@whedon

Collaborator

whedon commented Nov 4, 2018

@labarba

Member

labarba commented Nov 4, 2018

@ethanwhite, @rachelss — Thank you for agreeing to review this submission for JOSE! We are happy to welcome you to our adventure in scholarly publishing of open-source educational software and learning resources.

JOSE has published seven papers so far 😃 — have a look: http://jose.theoj.org
Check our Reviewer Guidelines and let us know if you have any questions.

@juanklopper will be your handling editor: you're in good hands.

@juanklopper

juanklopper commented Nov 4, 2018

@ethanwhite, @rachelss a big thank you from me too. I look forward to your input with this submission. Let me know if there are any questions. If I can't help then @labarba will most definitely have the answers.

@ethanwhite

Collaborator

ethanwhite commented Nov 5, 2018

@juanklopper - @wrightaprilm and I are co-authors on a Data Carpentry lesson (http://doi.org/10.5281/zenodo.570050). There are a large number of authors and I don't believe that this interferes with my objectivity, but it was in the last 4 years. I am reporting this to you as requested in the CoI link for your input.

@labarba

Member

labarba commented Nov 5, 2018

Thanks for reporting this, @ethanwhite. I agree that this doesn't present a conflict.

@ethanwhite

Collaborator

ethanwhite commented Nov 5, 2018

OK, great. Thanks @labarba. I will proceed.

@ethanwhite

Collaborator

ethanwhite commented Nov 5, 2018

A couple of questions/comments for @wrightaprilm from the "General checks" section:

License: Does the repository contain a plain-text LICENSE file with the contents of a standard license? (OSI-approved for code, Creative Commons for content)

Following the R packaging standard the LICENSE file does not include the actual license, only the year and copyright holder. The license is given in the DESCRIPTION file. One common approach to dealing with the conflicts between R's packaging standard and the more general open source standard of having the full license in the LICENSE file is to have the full license in LICENSE.md (which is included in .Rbuildignore) and keep LICENSE in the standard R format. This is rOpenSci's general strategy. I'm not sure what JOSS or JOSE's general approach is to this issue.
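The rOpenSci-style layout described above might be sketched like this (file contents are illustrative, not taken from the treesiftr repository):

```text
# DESCRIPTION (excerpt) — the license is declared here, pointing at the R-style LICENSE file
License: MIT + file LICENSE

# LICENSE — R's two-line convention: year and copyright holder only
YEAR: 2018
COPYRIGHT HOLDER: April Wright

# LICENSE.md — full text of the MIT license, for readers of the repository

# .Rbuildignore — keep the full-text copy out of the built package
^LICENSE\.md$
```

Under this arrangement, R CMD check sees only the standard-format LICENSE, while anyone browsing the repository finds the complete license text in LICENSE.md.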

Version: Does the release version given match the repository release (v1.0.0)?

The repository is lacking a current release. If you're waiting to tag 1.0.0 until other comments have been satisfied that makes sense and I'll wait until it looks like things are wrapping up and then ping this issue again about tagging a release.

@wrightaprilm

wrightaprilm commented Nov 8, 2018

One common approach to dealing with the conflicts between R's packaging standard and the more general open source standard of having the full license in the LICENSE file is to have the full license in LICENSE.md (which is included in .Rbuildignore) and keep LICENSE in the standard R format. This is rOpenSci's general strategy. I'm not sure what JOSS or JOSE's general approach is to this issue.

This uncertainty is why the package is the way it is right now. I put the MIT license in the DESCRIPTION file, but wasn't sure where else that info needed to be, if anywhere.

The repository is lacking a current release. If you're waiting to tag 1.0.0 until other comments have been satisfied

Bingo!

@rachelss

Collaborator

rachelss commented Nov 15, 2018

There are a couple of typos in paper.md - specifically [application]((https://wrightaprilm.shinyapps.io/treesiftr_app/) has an extra parenthesis in two places; the phrase "30-day paleobiological data " needs to be fixed, "non-paramtric" should be "non-parametric", and "and use in " should be "which I use in". You might want to render with pandoc to check everything looks right prior to resubmission.

@juanklopper - I'd suggest that the submission requirements suggest submitters render paper.md with pandoc prior to submission in general as it's easier to proofread that way.

To install treesiftr I need ggtree. To install ggtree I need BiocManager. To install BiocManager I need R >= 3.5. Alternatively, I was able to install ggtree on my older version of R, with the older version of Bioconductor, using BiocInstaller::biocLite("ggtree"). I should probably keep everything up to date, and I realize listing dependencies is sufficient, but I bet I'm not the only one who doesn't update, so including these extra instructions would help out.
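For reference, the two installation routes mentioned above might be sketched like this (assuming R >= 3.5 for the first path; BiocManager replaced the biocLite() mechanism as of Bioconductor 3.8):

```r
# Current route: install BiocManager from CRAN if absent, then ggtree from Bioconductor
if (!requireNamespace("BiocManager", quietly = TRUE)) {
  install.packages("BiocManager")
}
BiocManager::install("ggtree")

# Legacy route for older R (< 3.5) with an older Bioconductor:
# source("https://bioconductor.org/biocLite.R")
# BiocInstaller::biocLite("ggtree")
```

Including a snippet like this in the README would save users on either R version from hunting down the Bioconductor install mechanics themselves.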

@labarba

Member

labarba commented Nov 15, 2018

@rachelss — Our dear bot whedon generates the PDF for us right here in this issue. Scroll up and you will see:

👉 Check article proof 📄 👈

@rachelss

Collaborator

rachelss commented Nov 15, 2018

@wrightaprilm I could use a bit more explanation in the Instructor's Guide and the Shiny app. Or maybe I missed a piece of documentation? I went straight to the Guide and the app.

After navigating to the app I am instructed to subset a phylogenetic matrix. Where did the matrix come from? Maybe you could explain the data a bit first. That would help me understand what it means when I'm setting start = 1. You might put similar text on the app itself for users.

The character matrix on the righthand side of the app is not intuitive. Some words in the app to explain what these blocks are would help. Also, just having the blocks on the right doesn't make it obvious that the root / internal nodes are one color or the other - can you put colors of ancestral states on the tree?

Is the ability to view the likelihood score useful? I can see the tree is less likely with more characters, but really you want to compare likelihoods for a single dataset, not across datasets, and students can't do that here. But maybe parsimony isn't sufficient to really explain phylogeny estimation - it would depend on the class.

How much can Shinyapps.io handle? How many students could be using the app simultaneously? How many students across the country could be using it in a class (ie if a bunch of classes start using this will the app no longer be available)? Just wondering if there will end up being limitations that will make this difficult to use widely. You might need to provide guidance for users to prevent the app from crashing.

@juanklopper - is this how we're supposed to write these reviews?

@wrightaprilm

wrightaprilm commented Nov 16, 2018

Thanks, @rachelss, for these comments. I'll start working on them.

Regarding:

I'd suggest that the submission requirements suggest submitters render paper.md with pandoc prior to submission in general as it's easier to proofread that way.

I agree that this would be a useful addition to the author guidelines. I rendered in RStudio, which has its own idiosyncrasies. With a research paper submission, we normally think about grabbing the LaTeX template and rendering as we go, so that suggestion would feel natural to people. Including a link to the template, or the preferred pandoc settings, is something folks should have no real trouble with.
