
Submission: colocr #243

Closed
11 of 19 tasks
MahShaaban opened this issue Aug 6, 2018 · 88 comments

@MahShaaban

MahShaaban commented Aug 6, 2018

Summary

  • What does this package do? (explain in 50 words or less):
    Conduct Co-localization Analysis of Fluorescence Microscopy Images

  • Paste the full DESCRIPTION file inside a code block below:

Package: colocr
Type: Package
Title: Conduct Co-localization Analysis of Fluorescence Microscopy Images
Version: 0.1.0
License: GPL-3
Authors@R: person("Mahmoud", "Ahmed",
    email = "mahmoud.s.fahmy@students.kasralainy.edu.eg",
    role = c("aut", "cre"))
URL: https://github.com/MahShaaban/colocr
BugReports: https://github.com/MahShaaban/colocr/issues
Description: Automate the co-localization analysis of fluoresence microscopy 
  images. Selecting regions of interest, extract pixel intensities from 
  the image channels and calculate different co-localization statistics.
Encoding: UTF-8
LazyData: true
Suggests: testthat,
    covr,
    knitr,
    rmarkdown,
    devtools
RoxygenNote: 6.0.1
Imports: imager,
  shiny,
  scales
VignetteBuilder: knitr
  • URL for the package (the development repository, not a stylized html page):
    https://github.com/MahShaaban/colocr

  • Please indicate which category or categories from our package fit policies this package falls under and why? (e.g., data retrieval, reproducibility. If you are unsure, we suggest you make a pre-submission inquiry.):

[e.g., "data extraction, because the package parses a scientific data file format"]
data extraction

  •   Who is the target audience and what are scientific applications of this package?  
    Biologists (no advanced R required)

  • Are there other R packages that accomplish the same thing? If so, how does
    yours differ or meet our criteria for best-in-category?
    The EBImage and the imager packages contain the algorithms for dealing with images. This package uses imager but is focused only on the functionality needed for conducting co-localization analysis of multichannel images. In addition, the package provides a shiny app to interactively perform the same analysis.

  •   If you made a pre-submission enquiry, please paste the link to the corresponding issue, forum post, or other discussion, or @tag the editor you contacted.
    A pre-submission inquiry was made: Pre-submission Inquiry: colocr #241

Requirements

Confirm each of the following by checking the box. This package:

  • does not violate the Terms of Service of any service it interacts with.
  • has a CRAN and OSI accepted license.
  • contains a README with instructions for installing the development version.
  • includes documentation with examples for all functions.
  • contains a vignette with examples of its essential functions and uses.
  • has a test suite.
  • has continuous integration, including reporting of test coverage, using services such as Travis CI, Coveralls and/or CodeCov.
  • I agree to abide by ROpenSci's Code of Conduct during the review process and in maintaining my package should it be accepted.

Publication options

  • Do you intend for this package to go on CRAN?
  • Do you wish to automatically submit to the Journal of Open Source Software? If so:
    • The package has an obvious research application according to JOSS's definition.
    • The package contains a paper.md matching JOSS's requirements with a high-level description in the package root or in inst/.
    • The package is deposited in a long-term repository with the DOI:
    • (Do not submit your package separately to JOSS)
  • Do you wish to submit an Applications Article about your package to Methods in Ecology and Evolution? If so:
    • The package is novel and will be of interest to the broad readership of the journal.
    • The manuscript describing the package is no longer than 3000 words.
    • You intend to archive the code for the package in a long-term repository which meets the requirements of the journal (see MEE's Policy on Publishing Code)
    • (Scope: Do consider MEE's Aims and Scope for your manuscript. We make no guarantee that your manuscript will be within MEE scope.)
    • (Although not required, we strongly recommend having a full manuscript prepared when you submit here.)
    • (Please do not submit your package separately to Methods in Ecology and Evolution)

Detail

  • Does R CMD check (or devtools::check()) succeed? Paste and describe any errors or warnings:

  • Does the package conform to rOpenSci packaging guidelines? Please describe any exceptions:

  • If this is a resubmission following rejection, please explain the change in circumstances:

  • If possible, please provide recommendations of reviewers - those with experience with similar packages and/or likely users of your package - and their GitHub user names:

@maelle
Member

maelle commented Aug 7, 2018

Thanks @MahShaaban! 😀

Before I proceed with regular checks: since the package has a Shiny app, could you have a look at our brand new specific guidance for Shiny apps, especially their testing?

https://ropensci.github.io/dev_guide/building.html#interactivegraphical-interfaces

https://ropensci.github.io/dev_guide/building.html#testing

You can ask any question here / on discuss.ropensci.org / Slack (have you gotten an invite to our Slack yet?)

@MahShaaban
Author

MahShaaban commented Aug 8, 2018

Hey, @maelle

I had a look at the links.

  • I am already using the shinytest in the app
  • Code auto-generation sounds like a great idea; do you recommend any resource to get started on that? For the time being, the app optionally returns the final output in the R session and/or saves it to a file. I will add all the intermediate values to that as well.

I don't think I've received any invite to Slack.

@maelle
Member

maelle commented Aug 8, 2018

👋 @MahShaaban

I am already using the shinytest in the app

Ah sorry! But I don't see where in the tests? In particular, "shinytest" isn't written in DESCRIPTION like "testthat" is?

Code auto-generation sounds like a great idea, do you recommend any resource to get started on that?

No, I'll research that and update this thread (and the guide, good point!)

I've now sent you an invite to the Slack.

@MahShaaban
Author

MahShaaban commented Aug 8, 2018

Thanks for the slack invite, @maelle

Regarding the shiny tests. The tests are in the app directory, inst/colocr_app/tests/. The reason is I wanted to test the app on Travis separately. If necessary, I will move the tests to run with the package testing.

@maelle
Member

maelle commented Aug 8, 2018

To be honest I am very new to the testing of Shiny apps inside packages, and once your package is onboarded it'll probably be an example linked from the gitbook. 😸 Why test the app separately? What does that mean exactly? Your Travis config file looks normal to me 🤔

@MahShaaban
Author

MahShaaban commented Aug 8, 2018

I am also new to shinytest.

I wanted to test the app separately because I intended to make it stand-alone, and I have a prototype of it on shinyapps.io, here.

I have the app as a separate repo on GitHub, here. This repo is added to the package repo as a submodule. The app repo contains its own .travis.yml and tests/. This way I can link the app as a regular repo on Travis and it runs the tests with every commit.

@maelle
Member

maelle commented Aug 8, 2018

Ah interesting! Could you ask in the forum/Slack what best practice is? One thing that I still wonder about is whether your package's README should include a badge of the app's build status.

I'll make other checks as soon as I can.

@MahShaaban
Author

MahShaaban commented Aug 8, 2018

Alright. Will do.
I think I can add the build status badge from Travis, if this is what you mean.

Thanks

@maelle
Member

maelle commented Aug 8, 2018

Do you want to look into code generation before your package undergoes review? I tagged you in a commit, see links to apps generating code here.

@maelle
Member

maelle commented Aug 8, 2018

goodpractice output

✖ use '<-' for assignment instead of '='. '<-' is the standard, and R
    users and developers are used to it and it is easier to read your
    code for them if you use '<-'.

    R\methods.R:367:12
    R\methods.R:368:12
    R\methods.R:369:5

✖ avoid long code lines, it is bad for readability. Also, many people
    prefer editor windows that are about 80 characters wide. Try to make
    your lines shorter than 80 characters.

    inst\extdata\manual_analysis.R:6:1
    R\methods.R:378:1
    R\methods.R:379:1
    R\methods.R:380:1
    R\methods.R:381:1

and devtools::spell_check

WORD          FOUND IN
caculated     coloc_test.Rd:23
determin      roi_select.Rd:42
differen      roi_show.Rd:19
fluoresence   description:1
indecies      intensity_get.Rd:15
indicies      roi_show.Rd:16
manders       coloc_test.Rd:12
pearsons      coloc_test.Rd:12
reslution     roi_show.Rd:24
Thes          roi_select.Rd:36

@MahShaaban
Author

MahShaaban commented Aug 9, 2018

I made the following changes

  • I added a table to the Table View in the app that includes the input parameters and can be used to reproduce the same output from the app. I will keep looking into code generation for future improvement.
  • Fixed the issues raised by goodpractice
  • Fixed the spelling errors from devtools::spell_check
  • Added the app Travis and shinyapp.io badges to README.md

@maelle
Member

maelle commented Aug 9, 2018

Cool, I'll now look for reviewers.

Could you please add this badge to the README of your package:

[![](https://badges.ropensci.org/243_status.svg)](https://github.com/ropensci/onboarding/issues/243)

@maelle
Member

maelle commented Aug 10, 2018

Editor checks:

  • Fit: The package meets criteria for fit and overlap
  • Automated tests: Package has a testing suite and is tested via Travis-CI or another CI service.
  • License: The package has a CRAN or OSI accepted license
  • Repository: The repository link resolves correctly
  • Archive (JOSS only, may be post-review): The repository DOI resolves correctly
  • Version (JOSS only, may be post-review): Does the release version given match the GitHub release (v1.0.0)?

Editor comments

Already tackled 😉


Reviewers: @haozhu233 @seaaan
Due date: 2018-09-03

@maelle
Member

maelle commented Aug 10, 2018

Thanks @haozhu233 and @seaaan for accepting to review this package! Your reviews are due on 2018-09-03.

Here are links to our reviewer guide and review template (as you're both experienced rOpenSci reviewers/authors comments on the guide are welcome in e.g. Slack but don't feel obliged to review both the package and the reviewing guide 😉).

@seaaan
Contributor

seaaan commented Aug 17, 2018

Package Review

Please check off boxes as applicable, and elaborate in comments below. Your review is not limited to these topics, as described in the reviewer guide

  • As the reviewer I confirm that there are no conflicts of interest for me to review this work (If you are unsure whether you are in conflict, please speak to your editor before starting your review).

Documentation

The package includes all the following forms of documentation:

  • A statement of need clearly stating problems the software is designed to solve and its target audience in README
  • Installation instructions: for the development version of package and any non-standard dependencies in README
  • Vignette(s) demonstrating major functionality that runs successfully locally
  • Function Documentation: for all exported functions in R help
  • Examples for all exported functions in R Help that run successfully locally
  • Community guidelines including contribution guidelines in the README or CONTRIBUTING, and DESCRIPTION with URL, BugReports and Maintainer (which may be autogenerated via Authors@R).

Functionality

  • Installation: Installation succeeds as documented.
  • Functionality: Any functional claims of the software have been confirmed.
  • Performance: Any performance claims of the software have been confirmed.
  • Automated tests: Unit tests cover essential functions of the package
    and a reasonable range of inputs and conditions. All tests pass on the local machine.
  • Packaging guidelines: The package conforms to the rOpenSci packaging guidelines

Final approval (post-review)

  • The author has responded to my review and made changes to my satisfaction. I recommend approving this package.

Estimated hours spent reviewing:
3


Review Comments

I tested on two computers: Windows 10, running R 3.5.1 and Ubuntu 16.04 running R 3.4.4.

colocr provides an R-based workflow for a type of microscopy analysis. Specifically, images are taken of two different colors, typically with each color representing a different molecule of interest. With colocr, users can measure how much the two colors correlate in their intensity in different parts of the image. The steps are: select (and verify) regions of interest in the image, get (and view) the intensity of each pixel in each channel, and calculate two measures of correlation to assess co-localization of the two channels.

The method of colocalization analysis is not new; the purpose of colocr is to make it easier to perform in R (while also providing a GUI option) for more reproducible results. The package is pretty simple and easy to use for this specific task, as opposed to more complex packages like imager.

Below I have written comments based on the checkbox sections listed above. An overview of the most significant issues that I discovered:

  • I had difficulty building the package and documentation, but when installed via install_github, things mostly worked fine. A number of automated checks gave warnings or errors when built locally. (This bullet point is the main reason that I didn't check some of the boxes above.)

  • The vignette was lacking some detail and context.

  • The package could benefit from functions to process multiple images at once.

More minor issues are named below. (Note that I have pointed out typos using this format: "the typo" -> "the correction".)

The package is pretty nicely done and most of my comments are relatively minor. Good job!

Documentation

Statement of need

Good overview!

"at loss" -> "at a loss"
"straight forward" -> "straightforward"

(Note that those two corrections should also be made in the vignette.)

Installation

I don't understand what the "no submodules!" and "@development" mean in the installation guidelines. Can you explain (on the README) what that means and why?

It took me a while to get the package installed on Ubuntu. Specifically, imager wouldn't install because its dependencies Cairo and tiff wouldn't install until I installed some external-to-R dependencies. I think that if I had followed the installation instructions here it would have gone smoothly. It would be helpful for you to include a link to how to install imager on ubuntu in the installation instructions on github.

Vignette

Unsuccessful building of vignette

The vignette did not build successfully on either the Windows or the Ubuntu computer. This also prevented devtools::check() from succeeding. Specifically, I would get this error:

pandoc.exe: Could not fetch
: openBinaryFile: does not exist (No such file or directory)
Error: processing vignette 'using_colocr.Rmd' failed with diagnostics:
pandoc document conversion failed with error 67
Execution halted

I figured out (via rstudio/rmarkdown#228) that changing the header of the vignette to include the following lines would fix it:

output:
  rmarkdown::html_vignette:
    self_contained: no

This fixed the issue on both the Windows and Ubuntu computers.

The vignette installed fine when installing with devtools::install_github(.., build_vignettes = TRUE).

Vignette accessibility

I suggest adding a link to an HTML version of the vignette on your github README. As it is now, I have to install the package before I can read the vignette, which is a barrier to me installing it because I can't see very well what it does. Either knit the vignette to a github markdown document and link to that or, once it's on CRAN, link to the web version of the vignette that CRAN hosts.

Vignette content

appropriatness -> appropriateness

acheived -> achieved

The reset of the anlysis -> the result (?) of the analysis

Pearson’s Coefficeint Correlation -> Pearson correlation coefficient

co-distrubuted -> co-distributed

straight forward -> straightforward

three steps only -> only three steps

functionionality -> functionality

encampasing -> encompassing

resuliontion -> resolution

Arguable -> arguably

seleciton -> selection

sampe -> sample

Although -> this sentence should be combined with the next one, otherwise "Although" doesn't make sense

rational -> rationale

separatly -> separately

through and formal -> thorough and formal

correltion -> correlation

Note that the vignette installation instructions differ from the README instructions.

Maybe hide the startup messages that appear after library(imager)

In the "Getting started" section, give some background on what the image is, what the "appropriate parameters" are and how you determine them and evaluate their appropriateness. What are the cells in the image and what are they stained for? What is the goal of the ROIs? At the example setting, the five cells in the upper middle all appear enclosed in one region together; is that the goal? I would have thought to make a separate region for each cell.

Why do you use imager::highlight() instead of roi_show()?

Some of the background I am asking for in the "Getting started" section is provided in the "Detailed Example" section. I might just skip the "Getting started" section altogether and add a little more background to the "Detailed Example" section.

Why is the "Manders Overlap Coefficient" abbreviated with "SCC"? And why does the link go to a page about the Spearman's rank correlation coefficient?

Is the link to the shiny app running on shinyapps.io the same as the app that comes with the package?

"calling four different functions in order" but there are five listed

What are you looking for in the intensity_show plots?

Does the labeling of individual regions of interest just label the n largest contiguous regions? This is not totally clear from the vignette.

In the colocalization benchmark source, it would be helpful to know what results are expected from the particular images chosen. Does colocr achieve the expected results?

Function documentation

?roi_select: It would be nice to have a more thorough explanation of the parameters to roi_select than the help page provides, or at least a link to one. There's something funny in the details section where it says fill\codeclean; it looks like there's a stray slash on line 21 of the .R file.

?roi_show: in the details section, it says "a a low resolution"

?colocr::colocr-package and/or ?colocr::colocr should link to the main functions in the package, otherwise they're not very useful.

Examples

example("intensity_get") isn't very informative. I suggest printing out the head of each element of pix_int and explaining what they are. Also it says "call coloc_test" which is a mistake

example("intensity_show") runs slowly on my computer, can you subsample the image or something?

Community guidelines

There are no guidelines for how to contribute to the package.

FUNCTIONALITY

Installation

Installation from github mostly works fine. I had some issues as described in the documentation described above.

Functionality

The functionality described in the vignette and documentation is present and works just fine for the most part!

run_app() gives warnings and errors immediately on loading (argument is of length 0). It works once you load an image, but it's a bit disconcerting to see error messages.

One issue I thought about is what if you want to correlate the intensity of more than two channels?

Another issue I thought about is whether there could be a higher-level function to apply thresholds and get co-localization statistics for a bunch of images at once. I imagine relatively few cases where the user would want to repeat the code over and over for every image. Maybe a function that takes a vector of images and vectors for each argument to roi_select and then gives back the results of coloc_test for each image? You could also make functions for highlight and roi_show and intensity_show that could create some kind of report with each of the images.
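A batch helper along those lines could be sketched roughly as follows (purely illustrative; coloc_batch and its arguments are hypothetical, and the $p/$r element names follow the coloc_test output mentioned elsewhere in this review):

```r
# Hypothetical sketch (not colocr code): run the colocr workflow over
# a vector of image paths, recycling the threshold argument.
coloc_batch <- function(images, thresholds = 90) {
  thresholds <- rep_len(thresholds, length(images))
  results <- Map(function(path, thr) {
    img <- imager::load.image(path)
    px <- roi_select(img, threshold = thr)
    pix_int <- intensity_get(img, px)
    stats <- coloc_test(pix_int)
    # $p and $r are the element names reported in this review
    data.frame(image = path, p = stats$p, r = stats$r)
  }, images, thresholds)
  do.call(rbind, results)
}

# coloc_batch(c("cells_1.png", "cells_2.png"), thresholds = c(90, 85))
```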

I don't really understand how roi_select works with regard to channel 1 vs channel 2. Could you explain that somewhere, perhaps in the vignette? I.e. can you select an ROI based just on channel 1 and apply that to channel 1 and channel 2?

Maybe import and re-export the functions from imager that are needed for the vignette (and thus likely to be used by the user), so the user doesn't have to library(imager)
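One common roxygen2 pattern for this could look like the sketch below (which imager functions to re-export, and the file name, are up to the author):

```r
# Sketch: re-export the imager helpers used in the vignette so users
# don't need library(imager). R/reexports.R is just a naming convention.

#' @importFrom imager load.image
#' @export
imager::load.image

#' @importFrom imager highlight
#' @export
imager::highlight
```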

It would be nice to have more descriptive names for the results of coloc_test than $'p' and $r.

Suggested API

Currently, the basic workflow is to do:

px <- roi_select(img, threshold = 90)
pix_int <- intensity_get(img, px)
coloc_test(pix_int)

It would be nice if you could do something like this:

img %>% 
  roi_select(threshold = 90) %>% 
  intensity_get() %>% 
  coloc_test()

The benefits I see are that it's pipeable and that you don't have to separately keep track of px and pix_int in addition to img. Another benefit is that all the functions could return the same type of object. If you accept this option, you could drop the intensity_get() function all together by combining it with the roi_select function. You could make the functions pipeable and all return the same type by having each of the functions return a cimg with attributes containing px and pix_int that are used by the appropriate function or by making a new class as a subclass of cimg.

This is not a requirement by any means, just something that I think would be nice!
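The attribute-carrying variant could be sketched roughly like this (the names with a 2 suffix are hypothetical; the real design is the author's choice):

```r
# Hypothetical sketch of the pipeable design: each step returns the
# cimg itself, with intermediate results stored as attributes.
roi_select2 <- function(img, threshold = 50) {
  attr(img, "px") <- roi_select(img, threshold = threshold)
  img
}

intensity_get2 <- function(img) {
  attr(img, "pix_int") <- intensity_get(img, attr(img, "px"))
  img
}

coloc_test2 <- function(img) {
  coloc_test(attr(img, "pix_int"))
}

# img %>%
#   roi_select2(threshold = 90) %>%
#   intensity_get2() %>%
#   coloc_test2()
```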

Performance

roi_select: with threshold = 100, you get a weird error when running highlight. With threshold > 100, you get a weird error when running roi_select, same with thresholds well below 0. I suggest enforcing 0 < threshold < 100 (i.e. throw an error if the provided value is outside that range). The errors are hard to interpret so it's not obvious that they're happening because of the value of threshold. Currently it just checks if it's numeric or not. Same for the other parameters as appropriate.

The app runs kind of slowly for me. It's not unusable by any means, but if there's some way you could speed it up (eg subsample the image to set the threshold and other parameters initially and then have a button to push to do the full analysis), that would be nice.

Automated checks

goodpractice::goodpractice()

goodpractice::goodpractice() gives me the following message on both Windows and Ubuntu:

  ✖ write unit tests for all functions, and all package code in general. 87% of code lines are
    covered by test cases.

    R/methods.R:52:NA
    R/methods.R:53:NA
    R/methods.R:54:NA
    R/methods.R:62:NA
    R/methods.R:138:NA
    ... and 12 more lines

Ubuntu-only errors:

  ✖ fix this R CMD check WARNING: LaTeX errors when creating PDF version. This typically
    indicates Rd problems.
  ✖ fix this R CMD check ERROR: Re-running with no redirection of stdout/stderr. Hmm ... looks
    like a package You may want to clean up by 'rm -rf /tmp/RtmpKNtM4d/Rd2pdf2e0446da511d'

Windows-only errors:

Warning messages:
1: In MYPREPS[[prep]](state, quiet = quiet) :
  Prep step for cyclomatic complexity failed.
2: In MYPREPS[[prep]](state, quiet = quiet) :
  Prep step for rcmdcheck failed.
devtools::check()

Passes completely on Ubuntu once vignette issue is resolved (see Vignettes section).

It results in an error on Windows in the "Checking examples" section when it runs the example for .manders, I think because .manders isn't exported.

Build process

Works fine on Ubuntu once vignette issue is resolved (see Vignettes section)

I get these warnings on Windows:

Rd warning: H:/R/colocr/man/roi_select.Rd:16: file link 'shrink' in package 'imager' does not exist and so has been treated as a topic
Rd warning: H:/R/colocr/man/roi_select.Rd:20: file link 'fill' in package 'imager' does not exist and so has been treated as a topic
Rd warning: H:/R/colocr/man/roi_select.Rd:38: file link 'shrink' in package 'imager' does not exist and so has been treated as a topic
Rd warning: H:/R/colocr/man/roi_select.Rd:39: file link 'fill' in package 'imager' does not exist and so has been treated as a topic
Tests

Tests pass.

Documentation

I get these warnings on Ubuntu:

Skipping invalid path:  .labels_add.Rd 
Skipping invalid path:  .pearson.Rd 
Skipping invalid path:  .manders.Rd 

Packaging guide

Section 1.3.2 suggests that a Shiny app should come with a mechanism to reproduce the choices made in the GUI, such as auto-generation of code. Maybe you can add a tab that generates the code that is run based on the user's selections?

Section 1.7 suggests creating a documentation website.

@maelle
Member

maelle commented Aug 18, 2018

Thanks for your thorough review @seaaan! 😀

@MahShaaban
Author

MahShaaban commented Aug 18, 2018

Thanks, @seaaan for your review.
I will follow with a reply to these issues.

I really like your suggested API. I made quick changes in a new branch called pipeable to try this approach. If it turns out this works better, I will rewrite the tests, the app and the vignette.

@maelle
Member

maelle commented Aug 26, 2018

@haozhu233 just a reminder that your review is due on 2018-09-03. Thank you! 😺

@haozhu233

haozhu233 commented Aug 31, 2018

Package Review

Please check off boxes as applicable, and elaborate in comments below. Your review is not limited to these topics, as described in the reviewer guide

  • As the reviewer I confirm that there are no conflicts of interest for me to review this work (If you are unsure whether you are in conflict, please speak to your editor before starting your review).

Documentation

The package includes all the following forms of documentation:

  • A statement of need clearly stating problems the software is designed to solve and its target audience in README
  • Installation instructions: for the development version of package and any non-standard dependencies in README
  • Vignette(s) demonstrating major functionality that runs successfully locally
  • Function Documentation: for all exported functions in R help
  • Examples for all exported functions in R Help that run successfully locally
  • Community guidelines including contribution guidelines in the README or CONTRIBUTING, and DESCRIPTION with URL, BugReports and Maintainer (which may be autogenerated via Authors@R).

Functionality

  • Installation: Installation succeeds as documented.
  • Functionality: Any functional claims of the software have been confirmed.
  • Performance: Any performance claims of the software have been confirmed.
  • Automated tests: Unit tests cover essential functions of the package
    and a reasonable range of inputs and conditions. All tests pass on the local machine.
  • Packaging guidelines: The package conforms to the rOpenSci packaging guidelines

Final approval (post-review)

  • The author has responded to my review and made changes to my satisfaction. I recommend approving this package.

Estimated hours spent reviewing:
3

Review Comments

I reviewed the development branch of this package as it has a stand-alone folder for the shiny app. I noticed that there are some recent changes in the master branch, but they are mostly about badges. I ran all the checks on OSX 10.13.

This package provides a straightforward way of conducting co-localization calculation, a specific type of image analysis for microscopy images. There are only 5 main functions in this package, but they are quite useful and easy to use. The author also provided a GUI in a shiny app which makes it even more user-friendly. The package vignette was also well written and easy to follow. Although I personally don't have any experience with analyzing microscopy images, I was still able to follow the tutorial and achieve what was supposed to be done. I can imagine this package can be quite useful for this particular use case.

Here I'm listing out some of the most significant issues I found:

  • Right now the source code for the shiny app sits in the inst/colocr_app folder. This approach works, but if the app is considered one of the core functionalities of this package, I would recommend putting this code in the R folder and recoding it in a way that fits an R package.
  • There are some issues in the shiny app itself. I will explain them in detail below.
  • The package vignette has numerous issues. Most of these issues have been mentioned by @seaaan and I just want to echo them as well. I couldn't do any better.
  • @seaaan's comment on the lack of functionality to work on multiple images at once also applies to the shiny app. It would be nice to let users upload a zip file and then process the images one by one. This feature is kind of important as the tabular panel sort of indicates that users are supposed to run the analysis on many images and then gather the results together. Too much mouse-clicking will quickly make users hate using the app.

There are also a few minor issues that might be improved. I'm going to mention them below as well.

R codes

  1. The default value for n in roi_select was defined in .labels_add, which is not an exported function. It confuses RStudio and produces a code diagnostic warning that "the value of n is not defined".

  2. In coloc_test, it would be nice to standardize the type options with the terms used in the vignette. For example, "pearsons" can become "PCC" or "pcc", or vice versa. At the same time, the option "all" should be "both".

  3. The purpose of using S3 method here seems to be just sending out a warning message if class(img) != "cimg". It seems a little unnecessary. This logic can simply be replaced by a if statement which can even be modularized to a small utility function.

  4. It might be a good idea to split the contents of methods.R into multiple R files, like roi.R & intensity.R, based on their purposes.

  5. The function name run_app is too general. Consider naming it differently, for example colocr_app.
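For point 3, the class check could be factored into a small utility, e.g. (hypothetical name; whether to warn or stop is the author's call):

```r
# Hypothetical utility: validate the input class once, reuse in every
# user-facing function instead of S3 dispatch.
stop_if_not_cimg <- function(img) {
  if (!inherits(img, "cimg")) {
    stop("`img` must be a cimg object, e.g. from imager::load.image().",
         call. = FALSE)
  }
  invisible(img)
}
```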

Shiny App

  1. When there is no image input, there are some error messages on the screen. To solve this problem, it's recommended to put req(input$image1) at the very beginning of every output object. For example,

output$image_plot <- renderPlot({
  req(input$image1)
  par(mfrow = c(2, 2), mar = rep(1, 4))
  roi_show(img = img1(),
           px = px(),
           labels = labs.px())
})
  2. There is no need to define img1, labs.px, px and corr in reactiveValues. You can just start with something like this:

values <- reactiveValues()

# load images
img1 <- reactive({
  load.image(input$image1$datapath) # note that I removed the value assignment
})
  3. For your question here (https://github.com/MahShaaban/colocr/blob/636435f57d034e02a9bc05cc7bd5d646e5dcb970/inst/colocr_app/app.R#L149), you can just say values$df <- data.frame() and that's enough.

For these three issues above, check out this modified version: https://gist.github.com/haozhu233/7a02bb4a03ccf4b5ef63acf40c53bbc7

  4. As I said above, it would be nice if users could upload a zipped file with many pictures. You can add two buttons for going back and forward and a simple counter to make it easy to use.

  5. It would be nice to add download-to-CSV buttons to the two tables on the "Tabular View" tab.

  6. The correlation text could be reformatted to be more readable.

Others

  1. I also feel that a pkgdown site might be helpful, according to the package guidelines.
  2. I also really like @seaaan's suggestion of using a pipeable approach and am very excited to see it getting implemented.

@maelle
Member

maelle commented Aug 31, 2018

Thanks a lot for your review @haozhu233! 😀

@MahShaaban now both reviews are in.

@MahShaaban
Author

MahShaaban commented Sep 22, 2018

I wrote two tests to check whether the output of the app is reproduced using the same input parameters. One test used the colocr functions and the other did not. The tests passed locally and on Travis, while both failed on AppVeyor. Given that these are the only tests that actually check the numeric outputs (the co-localization stats), I think that neither the app nor the colocr functions are the root of the problem.

The tests are in a file called tests/test-reproduce_app.R in this last commit

PS. @seaaan noticed earlier that the part of the vignette where I check the reproduction of the app output from the same input returns FALSE. The code chunk is check_equal at line 313 of the vignette. Was this happening on a Windows machine?

@maelle
Member

maelle commented Sep 22, 2018

Interesting. Can you add a more minimal example using imager and data that's not in colocr, so that I can run it and we can post it in the imager repo?

@MahShaaban
Author

MahShaaban commented Sep 24, 2018

I am not sure how to make a minimal example in this case. So far, I've been checking the final outputs of either colocr or imager, and the tests pass locally and on Travis but not on AppVeyor. I am using multiple imager functions, and the difference could be due to any of them. So,

  • I am not sure how to identify the particular function that produces the difference.
  • imager builds on Travis, though only up to an earlier commit, but it doesn't have AppVeyor integration.
  • I made the same test in colocr/test-reproduce_app.R run independently of colocr, in case that helps. Basically, I replaced the system.file() calls with download.file() calls that fetch the images from GitHub before reading them.

My current thought is to build imager on Travis and AppVeyor and run this test. Meanwhile, I am trying to figure out a way to identify the function or functions causing the issue. I am not sure this is the smartest way to do it, but I think I can save all intermediary objects from a test run in an R object and compare them to a test run on Windows/AppVeyor.

@maelle
Member

maelle commented Sep 24, 2018

@MahShaaban I don't see the test script with download.file()? Could you please share a gist without testthat? I'll then run it. I was thinking that seeing imager:: prefixes would help, and at each point where you use an imager function, if possible, use () to show the output; that way it'll be easier to compare.

@MahShaaban
Author

MahShaaban commented Sep 24, 2018

Sorry, I forgot to link to the test script in the imager fork.
Here is a gist of the script. I updated the second revision to remove the testthat calls.

@MahShaaban
Author

MahShaaban commented Sep 28, 2018

I traced the difference between the calculated correlations on Ubuntu and Windows to the very first step: loading the images. I think this (dahtah/imager#41) is related, although neither the maintainer nor the user has followed up on the issue yet!

I found that the pixel values of the images in the colocr package are loaded differently on the two platforms. This is at least in part due to jpeg::readJPEG(), which is used in imager::load.image() and colocr::image_load(). I noticed the same when I tried other images. Here is one from the jpeg package itself.

On Ubuntu

> version
               _                           
platform       x86_64-pc-linux-gnu         
arch           x86_64                      
os             linux-gnu                   
system         x86_64, linux-gnu           
status                                     
major          3                           
minor          5.1                         
year           2018                        
month          07                          
day            02                          
svn rev        74947                       
language       R                           
version.string R version 3.5.1 (2018-07-02)
nickname       Feather Spray
> packageVersion('jpeg')
[1] ‘0.1.8’               
> fl <- system.file('img', 'Rlogo.jpg', package = 'jpeg')
> img <- jpeg::readJPEG(fl)
> mean(img)
[1] 0.7046421

On Windows

> version
               _                           
platform       x86_64-w64-mingw32          
arch           x86_64                      
os             mingw32                     
system         x86_64, mingw32             
status                                     
major          3                           
minor          5.1                         
year           2018                        
month          07                          
day            02                          
svn rev        74947                       
language       R                           
version.string R version 3.5.1 (2018-07-02)
nickname       Feather Spray
> packageVersion('jpeg')
[1] ‘0.1.8’               
> fl <- system.file('img', 'Rlogo.jpg', package = 'jpeg')
> img <- jpeg::readJPEG(fl)
> mean(img)
[1] 0.7047047

Notice the difference starting at the 4th decimal place. I am showing the mean here, but I visually inspected the values themselves and they do differ. Could that be due to instability/inaccuracy in the very small decimal places?

I couldn't go beyond that, as jpeg::readJPEG itself is a call to compiled C code (a wrapper around the libjpeg library, as far as I can tell).

> jpeg::readJPEG
function (source, native = FALSE) 
.Call("read_jpeg", if (is.raw(source)) source else path.expand(source), 
    native, PACKAGE = "jpeg")
<bytecode: 0x2d11910>
<environment: namespace:jpeg>
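An aside on testing around such platform differences: since the values diverge only at the 4th decimal place, cross-platform tests comparing image-derived statistics are more robust with all.equal() and a tolerance than with exact equality. A small sketch using the two means shown above:

```r
# Means of the same JPEG image decoded on two platforms (values from above)
ubuntu_mean  <- 0.7046421
windows_mean <- 0.7047047

# Exact comparison fails on tiny decoder differences
identical(ubuntu_mean, windows_mean)
#> FALSE

# Equal within a tolerance that absorbs those differences
isTRUE(all.equal(ubuntu_mean, windows_mean, tolerance = 1e-3))
#> TRUE
```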

@MahShaaban
Author

MahShaaban commented Sep 28, 2018

I used image_read from the magick package instead of load.image from imager, and this seems to solve the Ubuntu/Windows difference. Here: ropensci/colocr#3

@maelle
Member

maelle commented Sep 28, 2018

Cool! In that case why not switch the whole package to magick? 😉

@jeroen
Member

jeroen commented Sep 28, 2018

I would second maelle's suggestion to try and switch to magick. It is a much more comprehensive and reliable image toolkit. Is there particular functionality in imager that you are missing from magick?

@MahShaaban
Author

MahShaaban commented Sep 29, 2018

Thanks, @maelle @jeroen for the suggestion. I certainly don't mind looking into that.

Although the package currently relies heavily on imager, I don't mind switching to magick. I went through the magick vignette, and I think the classes and the basic image transformations I'd need are already there. However, I don't see equivalents/alternatives to the morphological operations in imager, namely shrink(), grow(), fill() and clean(). Or am I missing something in magick that could replace these?

Here is the relevant part from the NAMESPACE

importFrom(imager,clean)
importFrom(imager,fill)
importFrom(imager,grow)
importFrom(imager,shrink)
importFrom(imager,threshold)

@jeroen
Member

jeroen commented Sep 29, 2018

Thanks, see also this issue: ropensci/magick#136

In the latest dev version of magick you can find the morphology methods with morphology_types():

> morphology_types()
 [1] "Undefined"         "Correlate"         "Convolve"          "Dilate"           
 [5] "Erode"             "Close"             "Open"              "DilateIntensity"  
 [9] "ErodeIntensity"    "CloseIntensity"    "OpenIntensity"     "DilateI"          
[13] "ErodeI"            "CloseI"            "OpenI"             "Smooth"           
[17] "EdgeOut"           "EdgeIn"            "Edge"              "TopHat"           
[21] "BottomHat"         "Hmt"               "HitNMiss"          "HitAndMiss"       
[25] "Thinning"          "Thicken"           "Distance"          "IterativeDistance"
[29] "Voronoi"       

I think the main features you use are:

  • shrink: magick::image_morphology(method = 'Erode', ...)
  • grow: magick::image_morphology(method = 'Dilate', ...)

I'm not sure what exactly imager::clean does under the hood, but the ImageMagick morphology manual explains several morphology methods that can be used for cleaning. We also have a function magick::image_despeckle().

For thresholding you can try image_threshold() or you can try some of the morphology methods.
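Putting those mappings together, a rough sketch of the imager-to-magick translation (the kernel choices and threshold values here are illustrative, not taken from colocr):

```r
library(magick)

img <- image_read("rose:")  # ImageMagick's built-in sample image

# imager::shrink() -> erosion
eroded  <- image_morphology(img, method = "Erode", kernel = "Diamond")

# imager::grow() -> dilation
dilated <- image_morphology(img, method = "Dilate", kernel = "Diamond")

# rough stand-in for imager::clean(): remove small speckles
cleaned <- image_despeckle(img)

# imager::threshold() -> image_threshold()
bw <- image_threshold(img, type = "black", threshold = "50%")
```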

@MahShaaban
Author

MahShaaban commented Sep 30, 2018

Sounds great. I will be looking into that. Thanks @jeroen

@maelle
Member

maelle commented Oct 2, 2018

Approved! Thanks @MahShaaban for submitting and @seaaan @haozhu233 for your reviews! 😸

To-dos:

  • Open an issue planning to switch everything to magick over time (switching dependency to magick colocr#4). @jeroen you can chime in there, thanks again for your comments here.
  • Transfer the repo to rOpenSci's "ropensci" GitHub organization under "Settings" in your repo. I have invited you to a team that should allow you to do so. You'll be made admin once you do.
  • Add the rOpenSci footer to the bottom of your README
    " [![ropensci_footer](https://ropensci.org/public_images/ropensci_footer.png)](https://ropensci.org)"
  • Fix any links in badges for CI and coverage to point to the ropensci URL. We no longer transfer Appveyor projects to ropensci Appveyor account so after transfer of your repo to rOpenSci's "ropensci" GitHub organization the badge should be [![AppVeyor Build Status](https://ci.appveyor.com/api/projects/status/github/ropensci/pkgname?branch=master&svg=true)](https://ci.appveyor.com/project/individualaccount/pkgname).
  • We're starting to roll out software metadata files to all ropensci packages via the Codemeta initiative, see https://github.com/ropensci/codemetar/#codemetar for how to include it in your package, after installing the package - should be easy as running codemetar::write_codemeta() in the root of your package.

Should you want to acknowledge your reviewers in your package DESCRIPTION, you can do so by making them "rev"-type contributors in the Authors@R field (with their consent). More info on this here.

Welcome aboard! We'd also love a blog post about your package, either a short-form intro to it (https://ropensci.org/tech-notes/) or a long-form post with more narrative about its development (https://ropensci.org/blog/). If you are interested, @stefaniebutland will be in touch about content and timing.

We've started putting together a gitbook with our best practices and tips; this chapter starts the third section, which is about guidance for after onboarding. Please tell us what could be improved; the corresponding repo is here.

@MahShaaban
Author

MahShaaban commented Oct 2, 2018

Thank you, everyone.
I transferred the repo, fixed the ci links and will look into the other suggestions.

I'd like to acknowledge your contributions @maelle, @haozhu233, @seaaan, and @jeroen if you don't mind.

@maelle
Member

maelle commented Oct 2, 2018

Awesome!

Don't acknowledge my contributions, as mentioned here "Please do not list editors as contributors. Your participation in and contribution to rOpenSci is thanks enough!" 😉 (we mean it!)

@stefaniebutland
Member

stefaniebutland commented Oct 10, 2018

Hello @MahShaaban. Are you interested in writing a post about your package for the rOpenSci blog, either a short-form intro to it (https://ropensci.org/tech-notes/) or long-form post with more narrative about its development (https://ropensci.org/blog/)?

This link will give you many examples of blog posts by authors of onboarded packages so you can get an idea of the style and length you prefer: https://ropensci.org/tags/onboarding/.

Here are some technical and editorial guidelines for contributing a post: https://github.com/ropensci/roweb2#contributing-a-blog-post.

Please let me know what you think.

@MahShaaban
Author

MahShaaban commented Oct 13, 2018

Thanks, @stefaniebutland for this opportunity. I'd like to write a blog post about colocr. I will read the guides first and get back to you to discuss it.

@stefaniebutland
Member

stefaniebutland commented Oct 23, 2018

@MahShaaban What do you think about setting a deadline to submit a draft post? I'm happy to answer any questions you might have.

@MahShaaban
Author

MahShaaban commented Oct 24, 2018

Hey @stefaniebutland. I certainly don't mind that. I read the guides you referred to earlier, and I think I will go with a short intro post. The idea is to adapt parts of the vignette that explain the goal of the package and how it works, with examples. If this is okay, I will start right away.

@stefaniebutland
Member

stefaniebutland commented Oct 24, 2018

If you're referring to a tech note (https://ropensci.org/technotes/), they don't require scheduling on a certain day of the week so please submit your draft when ready and I'll review it soon after.

adapt parts of the vignette that explain the goal of the package and how it works, with examples.

Sounds good. Make sure it's different enough from the vignette. Good if you can lay out one cool example of what you can do with the package, rather than giving several examples.

@MahShaaban
Author

MahShaaban commented Oct 25, 2018

Okay. Thanks @stefaniebutland.

@stefaniebutland
Member

stefaniebutland commented Oct 25, 2018

@MahShaaban, @maelle just reminded me that this is your second package onboarding! I'm quite curious about package authors' motivations for submitting multiple packages, e.g. are there diminishing returns on the author's effort in subsequent submissions?

I know you indicated you prefer to write a tech note about colocr, but if it interests you and you see value in it for yourself, I'd love to read a blog post that features colocr as you described, but also reflects on your experiences and motivation for onboarding multiple packages.

Zero obligation to do more than you suggested! 😄

@MahShaaban
Author

MahShaaban commented Oct 26, 2018

The truth is, I intended to write a blog post about this recent submission. The same happened the first time I submitted a package to rOpenSci. The reason I shied away from it is that I don't see how a detailed description of the package and its features could be different from the vignette! I am definitely willing to be educated on this. There might be different ways of writing, or different aspects of the package to focus on, when writing a blog post vs. a package vignette.
I think being familiar with the submission and review process helped a lot the second time around, so the second submission was easier in that sense. In both cases, I had a very positive experience, and I think the reviews and suggestions I received improved the packages.

@stefaniebutland
Member

stefaniebutland commented Oct 30, 2018

Sorry @MahShaaban, I think I misunderstood when I thought you wanted to write a tech note. Yes, your idea for a blog post, "to adapt parts of the vignette that explain the goal of the package and how it works, with examples", sounds good.

I don't see how a detailed description of the package and the features could be different from the vignette!

I think the blog post differs from the vignette in that the post should tell a bit of a story. Unlike a vignette, it's an opportunity to give your personal perspective on the package, like something you learned, or some really strong motivation for creating it. Was it your first Shiny app? Do you have any general tips for packages with Shiny apps? This might make the post interesting for people outside your field. (Thanks to @maelle for suggesting this to me when I asked her for advice.) Do you know of other users of your package? And how do they use it? Any of those things could go in the post.

One of the big benefits of contributing a blog post is that it can get more eyes on your work. Once published, we tweet from rOpenSci to >20,000 followers and it gets picked up by R-weekly live and R-bloggers.

With that, would you like to set a deadline for submitting a draft via pull request? Technical and editorial guidelines: https://github.com/ropensci/roweb2#contributing-a-blog-post.

@MahShaaban
Author

MahShaaban commented Nov 2, 2018

Hey @stefaniebutland, I just submitted a PR with a first draft of the blog post, ropensci-archive/roweb2#329. Please, let me know what you think.

@maelle maelle closed this as completed Nov 9, 2018