
Submission: tacmagic - PET Analysis in R #280

Closed
14 of 25 tasks
eebrown opened this issue Jan 25, 2019 · 37 comments
Comments


eebrown commented Jan 25, 2019

Submitting Author: Eric Brown (@eebrown)
Repository: https://github.com/eebrown/PET
Version submitted: 0.1.9
Editor: Melina Vidoni (@melvidoni)
Reviewer 1: Jon Clayden (@jonclayden)
Reviewer 2: Brandon Hurr (@bhive01)
Archive: TBD
Version accepted: TBD


  • Paste the full DESCRIPTION file inside a code block below:
Package: tacmagic
Type: Package
Title: tacmagic: PET Analysis in R
Version: 0.1.9
Authors@R: c(person("Eric", "Brown", 
                    role = c("aut", "cre"),
                    email = "eb@ericebrown.com",
                    comment = c(ORCID = "0000-0002-1575-2606")),
             person("Ariel", "Graff-Guerrero",
                    role = c("dgs")))
Description: To facilitate analysis of positron emission tomography (PET) time
    activity curve (TAC) data, and to encourage open science and replicability,
    this package supports data loading and analysis of multiple TAC file 
    formats. Functions are available to analyze loaded tac data for individual 
    participants or in batches. Major functionality includes weighted TAC 
    merging by region of interest (ROI), calculating models including SUVR and
    DVR, basic plotting functions and calculation of cut-off values. Please see
    the walkthrough vignette for a detailed overview of tacmagic functions.
Depends: R (>= 3.4)
License: GPL-3
URL: https://github.com/eebrown/PET
BugReports: https://github.com/eebrown/PET/issues
Encoding: UTF-8
LazyData: true
RoxygenNote: 6.1.1
Imports: 
    graphics, 
    grDevices, 
    pracma, 
    utils,
    R.matlab
Suggests:
    testthat,
    knitr,
    rmarkdown,
    covr
VignetteBuilder: knitr

Scope

  • Please indicate which category or categories from our package fit policies this package falls under: (Please check an appropriate box below. If you are unsure, we suggest you make a pre-submission inquiry.):

    • data retrieval
    • data extraction
    • database access
    • data munging
    • data deposition
    • reproducibility
    • geospatial data
    • text analysis
  • Explain how and why the package falls under these categories (briefly, 1-2 sentences):

Please see the pre-submission inquiry (#277), where this package was deemed in scope.

This package supports loading positron emission tomography (PET) time-activity curves (TACs) from several major formats, to allow a common subsequent analysis pipeline within R ("data extraction"). There are functions to support merging of regional time-activity curves, an often important analysis step ("data munging"). Additional analysis is available in the package, which, by virtue of being open source, promotes open science and reproducibility.

  • Who is the target audience and what are scientific applications of this package?

Scientists working with PET data needing to analyze TAC data in R can take advantage of the loading (including batch loading) functions and basic plotting. Those who need basic non-invasive models can take advantage of the implemented models. We will invite contributors to extend the package in its support for file formats and models.

There do not appear to be other packages that offer the loading and merging functions. There are packages that implement some PET models, but there is not full overlap.

  • If you made a pre-submission enquiry, please paste the link to the corresponding issue, forum post, or other discussion, or @tag the editor you contacted.

Issue #277 - @sckott

Technical checks

Confirm each of the following by checking the box. This package:

Publication options

JOSS Options
  • The package has an obvious research application according to JOSS's definition.
    • The package contains a paper.md matching JOSS's requirements with a high-level description in the package root or in inst/.
    • The package is deposited in a long-term repository with the DOI:
    • (Do not submit your package separately to JOSS)
MEE Options
  • The package is novel and will be of interest to the broad readership of the journal.
  • The manuscript describing the package is no longer than 3000 words.
  • You intend to archive the code for the package in a long-term repository which meets the requirements of the journal (see MEE's Policy on Publishing Code)
  • (Scope: Do consider MEE's Aims and Scope for your manuscript. We make no guarantee that your manuscript will be within MEE scope.)
  • (Although not required, we strongly recommend having a full manuscript prepared when you submit here.)
  • (Please do not submit your package separately to Methods in Ecology and Evolution)

Code of conduct


sckott commented Jan 25, 2019

thanks for your submission @eebrown - @melvidoni has been assigned as editor


melvidoni commented Jan 28, 2019

Editor checks:

  • Fit: The package meets criteria for fit and overlap
  • Automated tests: Package has a testing suite and is tested via Travis-CI or another CI service.
  • License: The package has a CRAN or OSI accepted license
  • Repository: The repository link resolves correctly
  • Archive (JOSS only, may be post-review): The repository DOI resolves correctly
  • Version (JOSS only, may be post-review): Does the release version given match the GitHub release (v1.0.0)?

Editor comments

Thank you for your submission @eebrown! The output from goodpractice is perfect. I will start looking for reviewers.

Preparing: covr
Preparing: cyclocomp
Preparing: description
Preparing: lintr
Preparing: namespace
Preparing: rcmdcheck

♥ Mhm! Wonderful package! Keep up the majestic work!

Please add the badge to your repo. Edit the code below with your issue ID:

[![](https://badges.ropensci.org/<issue_id>_status.svg)](https://github.com/ropensci/software-review/issues/<issue_id>)

Reviewers: Jon Clayden (@jonclayden) & Brandon Hurr (@bhive01)
Due date: February 22nd


eebrown commented Jan 28, 2019

Thank you very much @melvidoni. I've added the badge.


melvidoni commented Jan 28, 2019

Reviewers have been assigned: Jon Clayden (@jonclayden) & Brandon Hurr (@bhive01)
The due date for the reviews is February 22nd.


bhive01 commented Feb 11, 2019

@eebrown, Is there a way to get more example files than the built in ones? I don't have a PET device to pull from. I just want to throw as much at it as I can.


eebrown commented Feb 11, 2019

Hi @bhive01, thanks a lot for agreeing to review this package! You've probably seen the data in inst/extdata, with 3 participants in .tac/.voistat format, 1 in magia's .mat format and sample data in DFT. Off hand I don't have any more publicly available analyzed data for testing but I may be able to generate some more for you.


eebrown commented Feb 12, 2019

Hi again @bhive01. I've found some .tac and .dft sample files sample_tac_files.zip from the open source PET analysis C library from the Turku PET centre (www.turkupetcentre.net). I have had to make some minor changes to the tac loading functions to support some idiosyncrasies with the files, so I have updated the devel branch with that. I am not sure what the best practice is for updating the master branch while under review, but I can merge in these changes if that's best (https://github.com/eebrown/PET/compare/devel)

Please let me know if there is something else you were hoping to test more.


melvidoni commented Feb 12, 2019

@eebrown If the changes are not too fundamental (only for the examples), you can then merge the changes once you have received the reviews.


bhive01 commented Feb 12, 2019

Package Review

Please check off boxes as applicable, and elaborate in comments below. Your review is not limited to these topics, as described in the reviewer guide

  • As the reviewer I confirm that there are no conflicts of interest for me to review this work (If you are unsure whether you are in conflict, please speak to your editor before starting your review).

Documentation

The package includes all the following forms of documentation:

  • A statement of need clearly stating problems the software is designed to solve and its target audience in README
  • Installation instructions: for the development version of package and any non-standard dependencies in README
  • Vignette(s) demonstrating major functionality that runs successfully locally
  • Function Documentation: for all exported functions in R help
  • Examples for all exported functions in R Help that run successfully locally
  • Community guidelines including contribution guidelines in the README or CONTRIBUTING, and DESCRIPTION with URL, BugReports and Maintainer (which may be autogenerated via Authors@R).
For packages co-submitting to JOSS

The package contains a paper.md matching JOSS's requirements with:

  • A short summary describing the high-level functionality of the software
  • Authors: A list of authors with their affiliations
  • A statement of need clearly stating problems the software is designed to solve and its target audience.
  • References: with DOIs for all those that have one (e.g. papers, datasets, software).

Functionality

  • Installation: Installation succeeds as documented.
  • Functionality: Any functional claims of the software have been confirmed.
  • Performance: Any performance claims of the software have been confirmed.
  • Automated tests: Unit tests cover essential functions of the package
    and a reasonable range of inputs and conditions. All tests pass on the local machine.
  • Packaging guidelines: The package conforms to the rOpenSci packaging guidelines

Final approval (post-review)

  • The author has responded to my review and made changes to my satisfaction. I recommend approving this package.

Estimated hours spent reviewing: 7


Review Comments

Firstly, while I am very much an image analyst, "I don't do humans" (Ace Ventura 1994), so I'm a bit out of my element with PET image model outputs. For that purpose, and as far as I can tell, this package works as intended and provides meaningful data processing and plots for examining PET data output in the various formats built-in (PMOD TAC, Voistat, DFT, Magia/Matlab).
Given the sample data, exported functions with examples all work as described.

Below are the outputs from goodpractice, devtools::check, and devtools::test. Overall there are a few issues found by goodpractice::gp around unit testing, but coverage is pretty good.

devtools::check("~/PET")
Updating tacmagic documentation
Writing NAMESPACE
Loading tacmagic
Writing NAMESPACE
── Building ────────────────────────────────────────────────────────────────────────────────────────────────────────────── tacmagic ──
Setting env vars:
● CFLAGS    : -Wall -pedantic
● CXXFLAGS  : -Wall -pedantic
● CXX11FLAGS: -Wall -pedantic
──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
─  building ‘tacmagic_0.1.9.tar.gz’
✔  checking for file ‘tacmagic/DESCRIPTION’
─  checking extension type ... Package
─  this is package ‘tacmagic’ version ‘0.1.9’
─  package encoding: UTF-8
✔  checking package namespace information
✔  checking package dependencies (905ms)
✔  checking for hidden files and directories
✔  checking for portable file names
✔  checking for sufficient/correct file permissions
✔  checking serialization versions
✔  checking top-level files
✔  checking for left-over files
✔  checking foreign function calls (339ms)
✔  checking contents of ‘data’ directory
✔  checking installed files from ‘inst/doc’
── R CMD check results ─────────────────────────────────────────────────────────────────────────────────────────── tacmagic 0.1.9 ────
Duration: 21.9s

0 errors ✔ | 0 warnings ✔ | 0 notes ✔
──────────────────────────────────────────────────────────── tacmagic 0.1.9 ────
devtools::test("~/PET")
Loading tacmagic
Testing tacmagic
✔ | 24       | TAC data loading and weighted averages [0.6 s]

══ Results ═══════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════
Duration: 3.9 s

OK:       104
Failed:   0
Warnings: 0
Skipped:  0
library(goodpractice)
gp("~/PET")
Preparing: covr
Preparing: cyclocomp
   checking vignette meta-information ...
* installing *source* package ‘tacmagic’ ...
** R
** data
*** moving datasets to lazyload DB
** inst
** byte-compile and prepare package for lazy loading
** help
*** installing help indices
** building package indices
** installing vignettes
** testing if installed package can be loaded
* DONE (tacmagic)
Preparing: description
Preparing: lintr
Preparing: namespace
Preparing: rcmdcheck
── GP tacmagic ──────────────────────────────────────────────────────────────────────────────────────────────────────────

It is good practice to

  ✖ write unit tests for all functions, and all package code in general. 93% of code lines are covered by test cases.

    R/batches.R:65:NA
    R/loading_utilities.R:80:NA
    R/loading_utilities.R:150:NA
    R/loading_utilities.R:151:NA
    R/loading_utilities.R:178:NA
    ... and 29 more lines

  ✖ fix this R CMD check WARNING: LaTeX errors when creating PDF version. This typically indicates Rd problems.
  ✖ fix this R CMD check ERROR: Re-running with no redirection of stdout/stderr. Hmm ... looks like a package You may want to clean up by 'rm -Rf
    /var/folders/p4/q75sn_4526x2l0cj6jc6c4mm0000gn/T//Rtmp3exf2k/Rd2pdf5f260a515b8'
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────

Function Naming

You have functions named after authors of papers (Logan and Aiz). I guess this is a good idea because it tells you where they came from, but perhaps they could have more generic names so in the future you could use a "method" argument if there are other methods added in the future. This suggestion comes from a place of naiveté so cum grano salis.

ls(getNamespace("tacmagic"), all.names = TRUE)
 [1] ".__NAMESPACE__."       ".__S3MethodsTable__."  ".packageName"          "batch_load"           
 [5] "batch_tm"              "batch_voistat"         "checkACQtimes"         "compare_tac_form"     
 [9] "cutoff_aiz"            "DVR_all_ref_Logan"     "DVR_ref_Logan"         "find_t_star"          
[13] "load_acq"              "load_header_DFT"       "load_ROIs_DFT"         "load_tac"             
[17] "load_tac_DFT"          "load_tac_magia"        "load_tac_PMOD"         "load_tac_voistat"     
[21] "load_tacs"             "load_voistat"          "load_vol"              "load_vol_DFT"         
[25] "model_batch"           "model_definitions"     "new_table"             "plot_ref_Logan"       
[29] "plot_tac"              "pos_anyroi"            "ref_Logan_lm"          "ref_Logan_xy"         
[33] "references"            "roi_ham_full"          "roi_ham_pib"           "roi_ham_stand"        
[37] "save_tac"              "suvr"                  "suvr_auc"              "tac_roi"              
[41] "upper_inner_fence"     "validate_tac"          "validateTACvoistat"    "vAUC"                 
[45] "vintegrate"            "volumesFromBPndPaste"  "volumesFromVoistatTAC"

There are some inconsistencies in the naming convention used for functions, with some use of camelCase. https://ropensci.github.io/dev_guide/building.html#function-and-argument-naming suggests using snake_case, which is what is mostly used. I would update the following:

  • checkACQtimes
  • validateTACvoistat
  • vintegrate
  • volumesFromBPndPaste
  • volumesFromVoistatTAC
  • vAUC
Where the naming is applied it’s clear from the name what the function is responsible for.
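
The "method" argument idea mentioned above could look something like this (all names here are hypothetical, and the stand-in function is for illustration only):

```r
# Sketch of a single user-facing wrapper with a `method` argument that
# dispatches to method-specific implementations such as the existing
# author-named reference Logan code.
calc_dvr <- function(tac, method = c("ref_logan"), ...) {
  method <- match.arg(method)
  switch(method,
         ref_logan = dvr_ref_logan(tac, ...))
}

# Stand-in for the existing implementation, for illustration only.
dvr_ref_logan <- function(tac, ...) "DVR via reference Logan"
```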

loading.R and loading_utilities.R

Load functions serve to validate inputs and contort output files into a TAC-like format. TAC format is a very wide data.frame. Does not match "tidy" principles, but that's ok.

Might want to include appropriate units for importing from a matlab file in the example. This is done in the vignette.

f_magia <- system.file("extdata", "AD06_tac_magia.mat", package="tacmagic")
load_tac(f_magia, format = "magia", time_unit = "seconds", activity_unit = "kBq/cc")

DFT is misspelled on line 27 as DFF in the example in loading.R

How fragile are these naming conventions?

I asked the author for more test cases, and some of these were tested. Changes are already being made.

Would it make sense to just read in the volume information as part of the main function? (At least for voistat, because it's included already in the file; also applicable for DFT, AFAICT.) Not for BPnd (binding potential) though, because it's a separate exported CSV file.

I wonder if, instead of inputting the format, you could use the input extension as the default. That would maybe make things simpler for the end user. PMOD tac is probably a good default, as that seems like the most common format (hard to tell from Google really).
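
This suggestion could be sketched roughly as follows (a hypothetical helper, not part of tacmagic; the extension-to-format mapping is an assumption based on the formats named above):

```r
# Hypothetical sketch: infer a default `format` from the file extension,
# falling back to PMOD .tac when the extension is unrecognised.
guess_format <- function(filename) {
  ext <- tolower(tools::file_ext(filename))
  switch(ext,
         tac     = "PMOD",
         voistat = "voistat",
         dft     = "DFT",
         mat     = "magia",
         "PMOD")  # fall back to PMOD as the assumed most common format
}

guess_format("participant01.tac")  # "PMOD"
guess_format("participant01.mat")  # "magia"
```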

compare_tac_form() looks like it's in the wrong place in loading_utilities.R
it's not used in any of the loading functions, but is used only by the plot_tac() in tac.R
Seems like it should be moved there to me.

tac.R

functions are all about combining ROIs using the tissue definitions.
Uses weighted means to summarize based upon the tissue volume as contribution

I notice there are a few instances where you are applying attributes to a munged file. Perhaps apply_attributes() could be a function that does this. This could sit in the utilities.R file.
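
A minimal sketch of what such a helper might look like (the function name, attribute names, and placement are all suggestions, not the package's actual API):

```r
# Hypothetical helper: copy tac-specific attributes from an original
# object onto a derived (munged) one in a single call.
apply_attributes <- function(new, original,
                             which = c("time_unit", "activity_unit")) {
  for (a in which) attr(new, a) <- attr(original, a)
  new
}

orig <- data.frame(start = c(0, 5), frontal = c(0.2, 0.9))
attr(orig, "time_unit") <- "minutes"
merged <- apply_attributes(data.frame(mean = rowMeans(orig)), orig)
attr(merged, "time_unit")  # "minutes"
```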

tac.R Plotting

I feel like the plotting functions should be separated from the TAC ROI combination function. Put compare_tac_form() and plot_tac() into a plot.R file (or whatever you want to call it), and plot_ref_Logan() from logan.R too.

Lines 121-122: you're using rainbow() (restricting it to red through green). My concern is that rainbow() produces colour ramps that are not good for colour blindness.

I plotted everything once and it got very busy.
plot_tac(AD06, ROIs=names(AD06), time="minutes", title="PIB time activity curves for AD06")
I wonder if a warning is worth including if above a certain number of ROIs you can't really tell the difference between similar colors. Not critical.
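
Such a warning could be as small as this (a hypothetical sketch; the threshold of 8 is arbitrary):

```r
# Hypothetical sketch: warn when more ROIs are requested than a palette
# can keep visually distinct.
warn_if_busy <- function(ROIs, max_distinct = 8) {
  if (length(ROIs) > max_distinct) {
    warning(length(ROIs), " ROIs requested; similar colours may be hard to ",
            "distinguish above ", max_distinct, ".", call. = FALSE)
  }
  invisible(ROIs)
}
```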

suvr.R

Two functions that compute standardized uptake value ratios (and their AUC)
They are well-described and have examples. The examples work. I do not fully understand the meaning of their outputs, but things work as expected. I notice there are a few #TODOs around testing inputs are numeric. These functions could definitely use some love in this area.

logan.R

Functions work as described. Plot function could be removed and put into a plotting file?

cutoff.R

DVR (distribution volume ratio) is not defined in the function descriptions (it is in the vignette).
PiB (Pittsburgh Compound B?) is not defined in the function descriptions or the vignette.
Usage is covered in the walkthrough, but there are no examples in the help pages.

batches.R and batch_models.R

These files describe methods of importing and analyzing many TAC models in series (automated). It would be useful to have the example from the help file in the vignette too. The generic example in the vignette is not runnable.

ROI_definitions.R

This file looks good and is well described; everything returns a list of ROIs that is used by tac_roi().

saving.R

The NA values are being stored as NaN on save. I assume this is the tac default?

It seems if you load volumes that these are stuck in the voistat or DFT files.

tacmagic.R

This file allows you to do ?tacmagic and see the exported functions of the package (I love this).

references.R

References look in order

data.R

Shows how the fake data for testing the batch processing was made. Clear.


eebrown commented Feb 13, 2019

Dear @bhive01, thank you very much for your thoughtful thorough review and very helpful suggestions. I look forward to incorporating them into the package and will get back to you here.


jonclayden commented Feb 13, 2019

@eebrown I'm tied up with teaching this week, so I will be a little less efficient than @bhive01, but you'll have my review next week before the deadline. All the best.


eebrown commented Feb 13, 2019

Thanks for checking in @jonclayden--and thanks for fitting this review into your schedule. I look forward to it! In the meantime I will be addressing the suggestions by @bhive01 but will keep them on the devel branch.


bhive01 commented Feb 13, 2019

Just want to say thanks for writing the package @eebrown. I think it does what it says on the tin pretty well. I also want to say that you should take many of the suggestions as just that. If they make sense apply them when you can, if they don't, don't worry about it.


eebrown commented Feb 15, 2019

Response to @bhive01

Thank you again for taking the time to thoroughly review the package. I have implemented many of your suggestions on the devel branch as indicated below. I noticed you skipped the JOSS section -- I am not sure if this is done later in the review, but I would like to submit to JOSS after the reviews.

Firstly, while I am very much an image analyst, "I don't do humans" (Ace Ventura 1994), so I'm a bit out of my element with PET image model outputs. For that purpose, and as far as I can tell, this package works as intended and provides meaningful data processing and plots for examining PET data output in the various formats built-in (PMOD TAC, Voistat, DFT, Magia/Matlab).
Given the sample data, exported functions with examples all work as described.

Below are the outputs from goodpractice, devtools::check, and devtools::test. Overall there are a few issues found by goodpractice::gp around unit testing, but coverage is pretty good.

[...]

fix this R CMD check WARNING: LaTeX errors when creating PDF version. This typically indicates Rd problems.
fix this R CMD check ERROR: Re-running with no redirection of stdout/stderr. Hmm ... looks like a package You may want to clean up by 'rm -Rf
/var/folders/p4/q75sn_4526x2l0cj6jc6c4mm0000gn/T//Rtmp3exf2k/Rd2pdf5f260a515b8'
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────

I am not sure what to make of the WARNING and ERROR you see with goodpractice, as I don't get those errors on my Ubuntu or macOS machine.

Function Naming

You have functions named after authors of papers (Logan and Aiz). I guess this is a good idea because it tells you where they came from, but perhaps they could have more generic names so in the future you could use a "method" argument if there are other methods added in the future. This suggestion comes from a place of naiveté so cum grano salis.

ls(getNamespace("tacmagic"), all.names = TRUE)
 [1] ".__NAMESPACE__."       ".__S3MethodsTable__."  ".packageName"          "batch_load"           
 [5] "batch_tm"              "batch_voistat"         "checkACQtimes"         "compare_tac_form"     
 [9] "cutoff_aiz"            "DVR_all_ref_Logan"     "DVR_ref_Logan"         "find_t_star"          
[13] "load_acq"              "load_header_DFT"       "load_ROIs_DFT"         "load_tac"             
[17] "load_tac_DFT"          "load_tac_magia"        "load_tac_PMOD"         "load_tac_voistat"     
[21] "load_tacs"             "load_voistat"          "load_vol"              "load_vol_DFT"         
[25] "model_batch"           "model_definitions"     "new_table"             "plot_ref_Logan"       
[29] "plot_tac"              "pos_anyroi"            "ref_Logan_lm"          "ref_Logan_xy"         
[33] "references"            "roi_ham_full"          "roi_ham_pib"           "roi_ham_stand"        
[37] "save_tac"              "suvr"                  "suvr_auc"              "tac_roi"              
[41] "upper_inner_fence"     "validate_tac"          "validateTACvoistat"    "vAUC"                 
[45] "vintegrate"            "volumesFromBPndPaste"  "volumesFromVoistatTAC"

There are some inconsistencies in the naming convention used for functions, with some use of camelCase. https://ropensci.github.io/dev_guide/building.html#function-and-argument-naming suggests using snake_case, which is what is mostly used. I would update the following:

  • checkACQtimes
  • validateTACvoistat
  • vintegrate
  • volumesFromBPndPaste
  • volumesFromVoistatTAC
  • vAUC

Where the naming is applied it’s clear from the name what the function is responsible for.

Thanks for pointing this out. I've now renamed the above functions to be consistent with the naming guidelines and the rest of the package's functions. As for the author-named functions, I think I will leave as-is currently, as I could always add a more generic function after (but harder to go other way around, and it's not yet clear what parameters the generic function would need).

Commit f1ac29e.

loading.R and loading_utilities.R

Load functions serve to validate inputs and contort output files into a TAC-like format. TAC format is a very wide data.frame. Does not match "tidy" principles, but that's ok.

Might want to include appropriate units for importing from a matlab file in the example. This is done in the vignette.

f_magia <- system.file("extdata", "AD06_tac_magia.mat", package="tacmagic")
load_tac(f_magia, format = "magia", time_unit = "seconds", activity_unit = "kBq/cc")

DFT is misspelled on line 27 as DFF in the example in loading.R

Thanks. Fixed in d49a41a.

How fragile are these naming conventions?

I asked the author for more test cases, and some of these were tested. Changes are already being made.

Would it make sense to just read in the volume information as part of the main function? (At least for voistat, because it's included already in the file; also applicable for DFT, AFAICT.) Not for BPnd (binding potential) though, because it's a separate exported CSV file.

I wonder if, instead of inputting the format, you could use the input extension as the default. That would maybe make things simpler for the end user. PMOD tac is probably a good default, as that seems like the most common format (hard to tell from Google really).

I like the idea of making use of the file extensions in data loading. I think it is a balance of convenience vs. code safety, and for now it makes sense to err on the side of code safety. So instead of assigning the format from the file extension, I have added the file extension as a check that warns the user if there is a mismatch between the specified file type and the file extension. Commit b03af39.
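
The shape of that check might look something like this (an illustrative sketch only; the actual implementation is in commit b03af39, and the names and mapping here are assumptions):

```r
# Hypothetical sketch: warn, rather than fail, when the user-specified
# format does not match the file extension.
check_extension <- function(filename, format) {
  expected <- switch(format,
                     PMOD    = "tac",
                     voistat = "voistat",
                     DFT     = "dft",
                     magia   = "mat",
                     NA_character_)
  ext <- tolower(tools::file_ext(filename))
  if (!is.na(expected) && !identical(ext, expected)) {
    warning("File extension '.", ext, "' does not match the specified ",
            "format '", format, "'.", call. = FALSE)
  }
  invisible(TRUE)
}
```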

compare_tac_form() looks like it's in the wrong place in loading_utilities.R
it's not used in any of the loading functions, but is used only by the plot_tac() in tac.R
Seems like it should be moved there to me.

Agreed. Commit c9ded13.

tac.R

functions are all about combining ROIs using the tissue definitions.
Uses weighted means to summarize based upon the tissue volume as contribution

I notice there are a few instances where you are applying attributes to a munged file. Perhaps apply_attributes() could be a function that does this. This could sit in the utilities.R file.

I made a function called copy_tac_attributes() to get the specific attributes from an original tac object and assign them to the copy. This will make things more convenient if more attributes are needed in the future. Commit 434b37c.

tac.R Plotting

I feel like the plotting functions should be separated from the TAC ROI combination function. Put compare_tac_form() and plot_tac() into a plot.R file (or whatever you want to call it), and plot_ref_Logan() from logan.R too.

Lines 121-122: you're using rainbow() (restricting it to red through green). My concern is that rainbow() produces colour ramps that are not good for colour blindness.

I plotted everything once and it got very busy.
plot_tac(AD06, ROIs=names(AD06), time="minutes", title="PIB time activity curves for AD06")
I wonder if a warning is worth including if above a certain number of ROIs you can't really tell the difference between similar colors. Not critical.

I have moved the plotting functions into plot.R as suggested. As for a warning about plotting too many ROIs, I think it may not be necessary, as it would be self-evident. I see the plotting function currently as a quick way to visualize tacs when performing analyses, rather than for generating publication-ready plots, but this is obviously an area that could be developed. I think at that point it will make sense to overhaul the visuals, including a better colour scheme, among other optimizations. Commit 7d2b4b1.

suvr.R

Two functions that compute standardized uptake value ratios (and their AUC)
They are well-described and have examples. The examples work. I do not fully understand the meaning of their outputs, but things work as expected. I notice there are a few #TODOs around testing inputs are numeric. These functions could definitely use some love in this area.

I created a new function validate_suvr_params() in suvr.R that handles the checks that were needed and is now used in both suvr() and suvr_auc(). Commit 6806094.
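
For the record, such a shared validator might be structured like this (a hypothetical body; the real validate_suvr_params() is in commit 6806094 and its argument names and checks may differ):

```r
# Hypothetical sketch of shared input checks used by both suvr() and
# suvr_auc(). Argument names here are illustrative assumptions.
validate_suvr_params <- function(SUVR_def, tac) {
  if (!is.numeric(SUVR_def)) stop("SUVR_def must be numeric")
  if (!is.data.frame(tac)) stop("tac must be a data.frame")
  invisible(TRUE)
}
```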

logan.R

Functions work as described. Plot function could be removed and put into a plotting file?

Done as previously noted.

cutoff.R

DVR (distribution volume ratio) is not defined in the function descriptions (it is in the vignette).
PiB (Pittsburgh Compound B?) is not defined in the function descriptions or the vignette.
Usage is covered in the walkthrough, but there are no examples in the help pages.

Fixed. Commit efe3250.

batches.R and batch_models.R

These files describe methods of importing and analyzing many TAC models in series (automated). It would be useful to have the example from the help file in the vignette too. The generic example in the vignette is not runnable.

I have now added a runnable example to the vignette. I was avoiding it because the batch functions rely on real-world file names and directory structures, but I have hopefully made it clear how it would differ in practice. In the process I also added a new function, split_pvc(), that is useful when PMOD tacs are loaded but combining ROIs is not desired: it handles PMOD tac files that have both PVC-corrected and uncorrected ROIs when only one copy is needed. It is in the tac.R file and documented. Commit 629712a. Rebuilt vignette in commit b9ce86d.
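
The idea behind split_pvc() could be sketched as below (illustration only: the real function is in tac.R, commit 629712a; the "_C" column-suffix convention for PVC-corrected ROIs is an assumption made here for the example):

```r
# Hypothetical sketch: keep either the PVC-corrected columns (assumed
# "_C" suffix) or the uncorrected ones from a combined tac data.frame.
split_pvc_sketch <- function(tac, PVC = TRUE) {
  corrected <- grepl("_C$", names(tac))
  if (PVC) tac[, corrected, drop = FALSE] else tac[, !corrected, drop = FALSE]
}

tac <- data.frame(frontal = c(1, 2), frontal_C = c(3, 4))
names(split_pvc_sketch(tac, PVC = TRUE))   # "frontal_C"
names(split_pvc_sketch(tac, PVC = FALSE))  # "frontal"
```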

ROI_definitions.R

This file looks good and is well described; everything returns a list of ROIs that is used by tac_roi().

saving.R

The NA values are being stored as NaN on save. I assume this is the tac default?

Yes this is for consistency with PMOD files.

It seems if you load volumes that these are stuck in the voistat or DFT files.

Yes the PMOD tac files don't have volume information and saving is currently built around PMOD .tac files.

tacmagic.R

This file allows you to do ?tacmagic and see the exported functions of the package (I love this).

Thanks :)

references.R

References look in order

data.R

Shows how the fake data for testing the batch processing was made. Clear.

Please let me know if you have any questions/concerns about the changes.
Best,
Eric

@jonclayden

jonclayden commented Feb 18, 2019

Package Review

Please check off boxes as applicable, and elaborate in comments below. Your review is not limited to these topics, as described in the reviewer guide

  • As the reviewer I confirm that there are no conflicts of interest for me to review this work (If you are unsure whether you are in conflict, please speak to your editor before starting your review).

Documentation

The package includes all the following forms of documentation:

  • A statement of need clearly stating problems the software is designed to solve and its target audience in README
  • Installation instructions: for the development version of package and any non-standard dependencies in README
  • Vignette(s) demonstrating major functionality that runs successfully locally
  • Function Documentation: for all exported functions in R help
  • Examples for all exported functions in R Help that run successfully locally
  • Community guidelines including contribution guidelines in the README or CONTRIBUTING, and DESCRIPTION with URL, BugReports and Maintainer (which may be autogenerated via Authors@R).
For packages co-submitting to JOSS

The package contains a paper.md matching JOSS's requirements with:

  • A short summary describing the high-level functionality of the software
  • Authors: A list of authors with their affiliations
  • A statement of need clearly stating problems the software is designed to solve and its target audience.
  • References: with DOIs for all those that have one (e.g. papers, datasets, software).

Functionality

  • Installation: Installation succeeds as documented.
  • Functionality: Any functional claims of the software have been confirmed.
  • Performance: Any performance claims of the software have been confirmed.
  • Automated tests: Unit tests cover essential functions of the package and a reasonable range of inputs and conditions. All tests pass on the local machine.
  • Packaging guidelines: The package conforms to the rOpenSci packaging guidelines

Final approval (post-review)

  • The author has responded to my review and made changes to my satisfaction. I recommend approving this package.

Estimated hours spent reviewing: 5


Review Comments

Thanks for the opportunity to review this well-constructed and well-documented package for analysing time–activity curve (TAC) data derived from PET. The package appears to follow the requirements of the rOpenSci packaging guidelines, R CMD check passes and the package tests run fine on my system, and test coverage is very good at 93%. I have a few comments about the vignette and the code interface, which are laid out below. I hope they're helpful.

(NB: I have not read the other review, in the interests of objectivity, so feel free to disregard any comments that have already been addressed!)

  1. I work with MRI, not PET, so please forgive any misunderstanding on my part, but from reading the vignette it seems like TAC analysis is only part of a full PET analysis pipeline. A certain amount of external software is mentioned, which presumably carries out preprocessing, segmentation, coregistration with MRI and so on. I think it would provide useful context and clarify the package's role if you could add a diagram or textual outline of the full pipeline early in the vignette, with a clear indication of which parts are handled by tacmagic. I felt like I had a picture of this in my head by the end of the document, but it would have been helpful to have it made explicit early on.
  2. The core data structure in the package is a data frame with certain attributes and obligatory columns. I wonder whether it would be worth giving this an (informal, "S3") class, say "tac", for clarity and to give you the chance to exploit generic functions like plot() for more concise usage. You could also consider a custom print() method which summarises the data, to help the user with some sanity-checking, e.g. number of time periods, period length, number of ROIs, etc. — but that's entirely up to you.
  3. I would suggest giving the user a bit more control over standard plot characteristics, particularly colours. The rainbow palette may be hard to interpret for a user with colour vision deficiency (colour-blindness). You could approach this by adding an ellipsis (...) argument to plot_tac() and plot_ref_Logan(), and passing it on to one of the base graphics primitive functions you use.
  4. Consider also using an ellipsis argument to batch_tm(), rather than piling up arguments from suvr() and DVR_all_ref_Logan().
  5. The function names DVR_ref_Logan and DVR_all_ref_Logan seem very verbose compared to others in the package, and the need for two separate functions isn't clear to me. Couldn't a putative dvr() function take a target argument but return all (like suvr()) if no specific targets are specified?
  6. Having model-fitting parameters among the arguments to plot_ref_Logan() feels like an unnecessary convolution of its role. Perhaps you could arrange for the SUVR and DVR functions to return a data type that can be plotted without repeating the fit?
  7. I would suggest adding reference information to the function documentation using the @references tag, and removing the references() function.
  8. In the vignette the syntax help(load_tac()) is used, but this doesn't work. The function shouldn't be called when passing it to help().
  9. There are a number of typos dotted through the package: exctracted, inntegration, cerbellum, caluclation, arguements, conver, BBy.

@eebrown
Member Author

eebrown commented Feb 19, 2019

Thank you very much @jonclayden for the thorough review and great suggestions. I will address each and report back here.

@eebrown
Member Author

eebrown commented Feb 21, 2019

Response to @jonclayden, Review 2

  1. I work with MRI, not PET, so please forgive any misunderstanding on my part, but from reading the vignette it seems like TAC analysis is only part of a full PET analysis pipeline. A certain amount of external software is mentioned, which presumably carries out preprocessing, segmentation, coregistration with MRI and so on. I think it would provide useful context and clarify the package's role if you could add a diagram or textual outline of the full pipeline early in the vignette, with a clear indication of which parts are handled by tacmagic. I felt like I had a picture of this in my head by the end of the document, but it would have been helpful to have it made explicit early on.

Thanks for the suggestion; I have tried to clarify this in the background of the vignette, along with updates that take into account the rest of your feedback below. [devel b88c9e6]

  2. The core data structure in the package is a data frame with certain attributes and obligatory columns. I wonder whether it would be worth giving this an (informal, "S3") class, say "tac", for clarity and to give you the chance to exploit generic functions like plot() for more concise usage. You could also consider a custom print() method which summarises the data, to help the user with some sanity-checking, e.g. number of time periods, period length, number of ROIs, etc. — but that's entirely up to you.

I think this makes sense and will help with future expansions. I've used the "tac" class and changed plot_tac to a plot.tac method, and added a print.tac method for a quick summary. [devel f3d8fe3]
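
For readers unfamiliar with informal S3 classes, the pattern looks roughly like this. This is a minimal sketch with assumed attribute names, not tacmagic's actual constructor or print method:

```r
# Sketch of an informal S3 "tac" class: a data frame plus attributes,
# with method dispatch via class(). Attribute names are illustrative.
new_tac <- function(df, time_unit = "seconds", activity_unit = "kBq/cc") {
  stopifnot(is.data.frame(df))
  attr(df, "time_unit") <- time_unit
  attr(df, "activity_unit") <- activity_unit
  class(df) <- c("tac", "data.frame")  # data.frame methods remain the fallback
  df
}

# A print method that summarises instead of dumping the whole table.
print.tac <- function(x, ...) {
  cat("tac object:", nrow(x), "frames,", ncol(x) - 2, "ROI column(s)\n")
  cat("time unit:", attr(x, "time_unit"), "\n")
  invisible(x)
}

my_tac <- new_tac(data.frame(start = c(0, 60), end = c(60, 120),
                             frontal = c(1.2, 2.3)))
print(my_tac)  # dispatches to print.tac()
```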

  3. I would suggest giving the user a bit more control over standard plot characteristics, particularly colours. The rainbow palette may be hard to interpret for a user with colour vision deficiency (colour-blindness). You could approach this by adding an ellipsis (...) argument to plot_tac() and plot_ref_Logan(), and passing it on to one of the base graphics primitive functions you use.

The other reviewer also suggested expanding the colour options. I have added the ability to specify a colour palette. [devel 4d0f181] [devel 0d98a26]

  4. Consider also using an ellipsis argument to batch_tm(), rather than piling up arguments from suvr() and DVR_all_ref_Logan().

This was a good suggestion as the code is now cleaner using the ellipsis argument, and also more flexible for custom models and future expansion. [devel 6728c31] Relatedly, I added a validate_ref_Logan_params() function to ensure proper parameters are supplied. [devel 0c1ffb1]
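
The ellipsis pattern being discussed can be sketched in a few lines. The function names and arguments here are stand-ins, not batch_tm()'s real signature:

```r
# Stand-in for a model function such as suvr() or DVR_ref_Logan().
run_model <- function(tac, ref, k2prime = NULL) {
  list(ref = ref, k2prime = k2prime)
}

# The batch function does not enumerate the model arguments itself;
# anything supplied in ... is forwarded to run_model() unchanged.
batch_run <- function(tac_list, ...) {
  lapply(tac_list, run_model, ...)
}

out <- batch_run(list(a = 1, b = 2), ref = "cerebellum", k2prime = 0.2)
```

This keeps the batch function's signature stable even if the underlying models gain new parameters later.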

  5. The function names DVR_ref_Logan and DVR_all_ref_Logan seem very verbose compared to others in the package, and the need for two separate functions isn't clear to me. Couldn't a putative dvr() function take a target argument but return all (like suvr()) if no specific targets are specified?

The problem is that, unlike SUVR, the Logan method is just one model of DVR; furthermore, there is the reference-region non-invasive version of the Logan graphical analysis, and another version, not implemented here, that uses arterial blood sampling. So the function names appear verbose, but that specificity is needed to disambiguate them from other models.

DVR_all_ref_Logan() runs the model on all ROIs, but having DVR_ref_Logan(), which takes just one target, can be quite helpful for testing out the model, e.g. to find an optimal t-star or to explore data where the model has issues.

  6. Having model-fitting parameters among the arguments to plot_ref_Logan() feels like an unnecessary convolution of its role. Perhaps you could arrange for the SUVR and DVR functions to return a data type that can be plotted without repeating the fit?

SUVR does not need plots like the Logan graphical DVR method, but I take your point. I now have DVR_ref_Logan(), the main DVR function, outputting a list object of the calculated model and related variables as a ref_Logan class, which can be plotted with a plot() method, plot.ref_Logan(). [devel 5e169fc]
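
The "classed fit object plus plot method" pattern can be sketched as follows. This is a toy linear fit standing in for the Logan model; the field names are assumptions, not tacmagic's actual ref_Logan structure:

```r
set.seed(42)

# Return a classed list holding the fitted model and its inputs, so a
# later plot() call reuses the stored fit instead of recomputing it.
fit_ref_logan_sketch <- function(x, y) {
  fit <- list(x = x, y = y, slope = unname(coef(lm(y ~ x))[2]))
  class(fit) <- "ref_Logan"
  fit
}

plot.ref_Logan <- function(x, ...) {
  plot(x$x, x$y, ...)          # plots the stored data; no model refit needed
  abline(lm(x$y ~ x$x))
}

xs <- 1:10
fit <- fit_ref_logan_sketch(xs, 2 * xs + rnorm(10, sd = 0.1))
# plot(fit)  # dispatches to plot.ref_Logan()
```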

  7. I would suggest adding reference information to the function documentation using the @references tag, and removing the references() function.

Thanks, I did not know about the handy @references tag. [devel cd1fecf]

  8. In the vignette the syntax help(load_tac()) is used, but this doesn't work. The function shouldn't be called when passing it to help().

Thanks for catching this. [devel 82b7eed]

  9. There are a number of typos dotted through the package: exctracted, inntegration, cerbellum, caluclation, arguements, conver, BBy.

Thank you for noting these. [devel 6c50b73] [devel 8471eef]

Additionally, I have added some tests to cover some of the new changes. [devel fab5007]

Thanks again for the review!

@jonclayden

jonclayden commented Feb 21, 2019

Thanks for the response, Eric. I will check out the updated package shortly, but in the meantime, I just want to follow up on my point 5.

I understand the desire to allow only one target, and to allow for the possibility of new fitting methods in the future. But my suggestion was to use a single function with a signature along the lines of

dvr <- function(tac_data, target=NULL, ref=NULL, method="logan", k2prime=NULL, t_star=NULL, error=0.10, int_method="trapz", params=NULL)

which would allow calls such as

# like DVR_ref_Logan()
AD06_DVR_fr <- dvr(AD06, target="frontal", ref="cerebellum", k2prime=0.2, t_star=0)

# like DVR_all_ref_Logan(); target=NULL, the default, implies all
AD06_DVR <- dvr(AD06, ref="cerebellum", k2prime=0.2, t_star=23)

while also improving semantic consistency and still allowing other models and/or fitting methods to be added later.

In the end I don't feel that strongly, and by all means leave it as it is, but I just wanted to make sure that it's clear what I had in mind.

@eebrown
Member Author

eebrown commented Feb 21, 2019

Thanks for clarifying. I can do this and it makes sense. In the future I may have to switch to the ellipsis rather than the specific parameters as other DVR models need different parameters.

@jonclayden

jonclayden commented Feb 21, 2019

Sure, that makes sense – you could keep DVR_ref_Logan() as a method-specific function and create dvr() as a wrapper with an ellipsis argument, for example.

Everything else looks good, although you may want to implement an indexing method, [.tac, or make your print method slightly more defensive so that you don't need to coerce back to a data frame when subsetting.
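
An indexing method of the kind suggested usually just delegates to the data.frame method and then restores the class and attributes. A sketch (illustrative attribute names, not the package's exact code):

```r
# A [.tac method that keeps the "tac" class and attributes on subsetting.
`[.tac` <- function(x, ...) {
  out <- NextMethod()                      # delegate to [.data.frame
  if (is.data.frame(out)) {                # subsetting can drop to a vector
    attr(out, "time_unit") <- attr(x, "time_unit")
    class(out) <- c("tac", "data.frame")
  }
  out
}

tac_df <- data.frame(start = 1:3, end = 2:4, frontal = 4:6)
attr(tac_df, "time_unit") <- "minutes"
class(tac_df) <- c("tac", "data.frame")

sub <- tac_df[1:2, ]   # still a tac object, attributes intact
```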

@eebrown
Member Author

eebrown commented Feb 22, 2019

I changed print.tac to summary.tac because it seems a better fit, and keeping print() as the fallback data-frame-style output works better with other generic functions. I also created an as.tac() function to create tac objects from data frames, and added an explanation to the vignette. [devel f5cb2f9]
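
The constructor-plus-summary split can be sketched like this. The validation rules, column names, and function name are hypothetical; see ?as.tac in the package for the real interface:

```r
# Sketch of a validating constructor that classes a plain data frame.
as_tac_sketch <- function(df, time_unit) {
  if (!is.data.frame(df)) stop("df must be a data.frame")
  if (!all(c("start", "end") %in% names(df))) {
    stop("df must contain 'start' and 'end' columns")
  }
  attr(df, "time_unit") <- time_unit
  class(df) <- c("tac", "data.frame")
  df
}

# summary() gives the overview, while print() falls back to the
# ordinary data.frame display.
summary.tac <- function(object, ...) {
  cat("Frames:", nrow(object), " ROI column(s):", ncol(object) - 2, "\n")
  invisible(object)
}

x <- as_tac_sketch(data.frame(start = 0, end = 60, frontal = 1.5),
                   time_unit = "seconds")
summary(x)  # dispatches to summary.tac()
```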

I added the dvr() wrapper function with tests and updated the vignette. [devel 578a89b]

@jonclayden

jonclayden commented Feb 22, 2019

Looks good! Thanks for being so receptive – I hope the process has been helpful. I'll mark my approval on the review.

@melvidoni
Contributor

melvidoni commented Feb 22, 2019

Thank you @jonclayden. Please, let me know what you think, @bhive01.

@bhive01

bhive01 commented Feb 22, 2019

I have marked it for approval as well @melvidoni. Thanks @eebrown for taking on board so many suggestions in a short period. It was a lot of work, I'm sure.

@melvidoni
Contributor

melvidoni commented Feb 25, 2019

Approved! Thanks, @eebrown for submitting and @bhive01 and @jonclayden for your reviews!

To-dos:

  • Transfer the repo to rOpenSci's "ropensci" GitHub organization under "Settings" in your repo. I have invited you to a team that should allow you to do so. You'll be made admin once you do.
  • Add the rOpenSci footer to the bottom of your README
[![ropensci_footer](https://ropensci.org/public_images/ropensci_footer.png)](https://ropensci.org)
  • Fix any links in badges for CI and coverage to point to the ropensci URL. We no longer transfer Appveyor projects to ropensci Appveyor account so after transfer of your repo to rOpenSci's "ropensci" GitHub organization the badge should be [![AppVeyor Build Status](https://ci.appveyor.com/api/projects/status/github/ropensci/pkgname?branch=master&svg=true)](https://ci.appveyor.com/project/individualaccount/pkgname).
  • We're starting to roll out software metadata files to all ropensci packages via the Codemeta initiative, see https://github.com/ropensci/codemetar/#codemetar for how to include it in your package, after installing the package - should be easy as running codemetar::write_codemeta() in the root of your package.
  • Activate Zenodo watching the repo
  • Tag and create a release so as to create a Zenodo version and DOI
  • Submit to JOSS using the Zenodo DOI. We will tag it for expedited review.

Should you want to acknowledge your reviewers in your package DESCRIPTION, you can do so by making them "rev"-type contributors in the Authors@R field (with their consent). More info on this here.

Welcome aboard! We'd also love a blog post about your package, either a short-form intro to it (https://ropensci.org/tech-notes/) or long-form post with more narrative about its development. (https://ropensci.org/blog/). If you are interested, @stefaniebutland will be in touch about content and timing.

We've started putting together a gitbook with our best practices and tips; this chapter starts the 3rd section, which is about guidance for after onboarding. Please tell us what could be improved; the corresponding repo is here.

@eebrown
Member Author

eebrown commented Feb 25, 2019

Thanks very much @melvidoni. This has been a fun and useful process (with great international collaboration!). I will make the needed badge changes, and would also like to increment the package version to 0.2.0, given the number of changes implemented during the review. Do I then proceed with the JOSS submission at that point?

I would love to acknowledge the reviewers as rev contributors in the description. Please let me know if you consent to this @bhive01 and @jonclayden. (And please let me know your ORCID if you would like that included).

@melvidoni
Contributor

melvidoni commented Feb 25, 2019

@eebrown Doing the JOSS submission is not mandatory. That is only if you want, in which case, it should have been stated in the opening post.

Also, you tagged @jonclayden incorrectly, so please Jon, read the comment before this!

@eebrown
Member Author

eebrown commented Feb 25, 2019

Hi @melvidoni, sorry for the confusion; I think I did indicate that I wanted to submit to JOSS in the original post (it is checked), and I did include a paper.md and bib file for this purpose. Please let me know if I missed something so I can fix it if possible.

@jonclayden

jonclayden commented Feb 25, 2019

I reviewed the package on the basis of it being submitted for JOSS, and have ticked off those points in my review. Many thanks @eebrown for the offer of the acknowledgment, which I'm more than happy to consent to. My ORCID is 0000-0002-6608-0619.

@melvidoni
Contributor

melvidoni commented Feb 25, 2019

@eebrown I don't know why, but I don't see the box ticked on the opening post for the issue. Never mind; I apologize for the misunderstanding.

I have added three additional to-dos to the approval comment, and I need you to tackle those as well. Please do everything before submitting to JOSS. You should have already received an invitation for the team.

@bhive01

bhive01 commented Feb 25, 2019

I also consent to inclusion as a reviewer. My ORCID is 0000-0003-2576-4544.
Apologies for the lack of JOSS material in my review; I was less sure about that part. Sounds like Melina has you on track, @eebrown.

@eebrown
Member Author

eebrown commented Feb 25, 2019

Hi @melvidoni, I've done all the post-review steps up to the Zenodo part. In Zenodo, I see the repositories under my name but not the ropensci tacmagic repository, so I have not been able to turn on tracking for it yet. I am not sure whether that's something I would need admin status for, if I don't already have that. Thanks for your help.

@stefaniebutland
Member

stefaniebutland commented Feb 25, 2019

@eebrown I'm rOpenSci's Community Manager, here to say we would love to feature a post about tacmagic on our blog. This might bring your work to a larger audience.

This link will give you many examples of blog posts by authors of peer-reviewed packages so you can get an idea of the style and length you prefer: https://ropensci.org/tags/software-peer-review/. Here are some technical and editorial guidelines for contributing a blog post: https://github.com/ropensci/roweb2#contributing-a-blog-post.

Please let me know if you're interested and we can discuss a deadline. Happy to answer any questions.

@melvidoni
Contributor

melvidoni commented Feb 26, 2019

@eebrown did you transfer tacmagic before getting the Zenodo DOI? In that case, you need to wait until you have admin access.

Please, remember to tick the boxes on my approval comment each time you have completed something.

@eebrown
Member Author

eebrown commented Feb 26, 2019

Thanks very much @melvidoni; it works now, and a DOI has been assigned.


On the other hand, I don't seem to be able to edit your comment, so I've copied the post-approval checklist below. Essentially I am at the last step, about to submit to JOSS. As I would also like to submit to CRAN, does the order of submitting to JOSS/CRAN matter from your perspective?

To-dos:

  • Transfer the repo to rOpenSci's "ropensci" GitHub organization under "Settings" in your repo. I have invited you to a team that should allow you to do so. You'll be made admin once you do.
  • Add the rOpenSci footer to the bottom of your README
[![ropensci_footer](https://ropensci.org/public_images/ropensci_footer.png)](https://ropensci.org)
  • Fix any links in badges for CI and coverage to point to the ropensci URL. We no longer transfer Appveyor projects to ropensci Appveyor account so after transfer of your repo to rOpenSci's "ropensci" GitHub organization the badge should be [![AppVeyor Build Status](https://ci.appveyor.com/api/projects/status/github/ropensci/pkgname?branch=master&svg=true)](https://ci.appveyor.com/project/individualaccount/pkgname).
  • We're starting to roll out software metadata files to all ropensci packages via the Codemeta initiative, see https://github.com/ropensci/codemetar/#codemetar for how to include it in your package, after installing the package - should be easy as running codemetar::write_codemeta() in the root of your package.
  • Activate Zenodo watching the repo
  • Tag and create a release so as to create a Zenodo version and DOI
  • Submit to JOSS using the Zenodo DOI. We will tag it for expedited review.
  • Reviewers acknowledged in DESCRIPTION

@eebrown
Member Author

eebrown commented Feb 26, 2019

Hi @stefaniebutland - thanks for getting in touch! I think a blog post is a great opportunity to get some eyes on the new package. I'd love to connect by email to hear more: eb@ericebrown.com.

@melvidoni
Contributor

melvidoni commented Feb 26, 2019

Thanks, @eebrown, now:

  1. Submit package via http://joss.theoj.org/papers/new

  2. Watch for the paper to pop up at http://joss.theoj.org/papers, then add the following comment to the submission thread:

This submission has been accepted to rOpenSci. The review thread can be found at [LINK TO SOFTWARE REVIEW ISSUE]

If everything is OK, I am closing the issue.
