Code quality issues #50

Closed
fabian-s opened this issue Aug 24, 2023 · 23 comments

Comments

@fabian-s

  1. goodpractice::gp finds very low test coverage and some inconsistent / non-compliant coding style.
    The former, at least, should be improved for your JOSS publication.

  2. cyclomatic complexity of many of your functions is extremely high, casting doubt on the maintainability, legibility and correctness of the implementation.
    maybe try refactoring / simplifying some of your conditional logic and/or modularizing your functions to bring cyclomatic complexities closer to reasonable values of around 20-50? (see the sketch below)
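    For example, a generic sketch (hypothetical code, not taken from konfound) of dispatching to small, single-purpose helpers instead of one large conditional block:

    # hypothetical illustration only -- not the actual konfound code
    run_test <- function(tab, test = c("fisher", "chisq")) {
      test <- match.arg(test)
      switch(test,
        fisher = run_fisher(tab),
        chisq  = run_chisq(tab)
      )
    }
    run_fisher <- function(tab) stats::fisher.test(tab)$p.value
    run_chisq  <- function(tab) stats::chisq.test(tab)$p.value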

goodpractice::gp("konfound")
Preparing: covr
Preparing: cyclocomp
Preparing: description
Preparing: lintr
Preparing: namespace
Preparing: rcmdcheck
── GP konfound ─────────────────────────────────────────────────────────────────

It is good practice to

  ✖ write unit tests for all functions, and all package code in
    general. 27% of code lines are covered by test cases.

    R/cop_pse_auxiliary.R:2:NA
    R/cop_pse_auxiliary.R:3:NA
    R/cop_pse_auxiliary.R:4:NA
    R/cop_pse_auxiliary.R:6:NA
    R/cop_pse_auxiliary.R:8:NA
    ... and 1745 more lines

  ✖ use '<-' for assignment instead of '='. '<-' is the
    standard, and R users and developers are used it and it is easier
    to read your code for them if you use '<-'.

    R/cop_pse_auxiliary.R:2:10
    R/cop_pse_auxiliary.R:4:13
    R/cop_pse_auxiliary.R:12:10
    R/cop_pse_auxiliary.R:15:9
    R/cop_pse_auxiliary.R:20:9
    ... and 174 more lines

  ✖ avoid long code lines, it is bad for readability. Also,
    many people prefer editor windows that are about 80 characters
    wide. Try make your lines shorter than 80 characters

    R/concord1.R:7:81
    R/cop_pse_auxiliary.R:24:81
    R/cop_pse_auxiliary.R:26:81
    R/cop_pse_auxiliary.R:38:81
    R/cop_pse_auxiliary.R:137:81
    ... and 447 more lines

  ✖ avoid 1:length(...), 1:nrow(...), 1:ncol(...), 1:NROW(...)
    and 1:NCOL(...) expressions. They are error prone and result 1:0 if
    the expression on the right hand side is zero. Use seq_len() or
    seq_along() instead.

    R/mkonfound.R:45:44

  ✖ not import packages as a whole, as this can cause name
    clashes between the imported packages. Instead, import only the
    specific functions you need.
  ✖ avoid 'T' and 'F', as they are just variables which are set
    to the logicals 'TRUE' and 'FALSE' by default, but are not reserved
    words and hence can be overwritten by the user.  Hence, one should
    always use 'TRUE' and 'FALSE' for the logicals.

    R/cop_pse_auxiliary.R:NA:NA
    R/cop_pse_auxiliary.R:NA:NA
    R/cop_pse_auxiliary.R:NA:NA
    R/cop_pse_auxiliary.R:NA:NA
    R/cop_pse_auxiliary.R:NA:NA
    ... and 21 more lines
> cyclocomp::cyclocomp_package("konfound")
                         name cyclocomp
43        test_sensitivity_ln       217  
22           getswitch_fisher        79 
45              tkonfound_fig        74
21            getswitch_chisq        70
44                  tkonfound        70
20                  getswitch        37 
[....]

crossref: openjournals/joss-reviews#5779

@fabian-s
Author

Any updates on this yet?

@jrosen48
Collaborator

Hi, sorry for the delay - I'm on parental leave for another two weeks but making some progress with collaborators in some free moments. I plan to update again in around three weeks or less, if that sounds okay.

@jrosen48
Collaborator

@fabian-s do you know of any clever way to identify the functions imported from other packages? I've tried pkgnet and lintr to no avail. Of course, a manual approach may be best - planning to do that.

I think we've addressed the other issues; will detail how and in which commits shortly.

@fabian-s
Author

sounds dumb but should work:

  • remove all Imports/Depends/Suggests from the DESCRIPTION
  • run R CMD CHECK
  • note all the "no visible global function definition for" errors that pop up and add import statements for those

alternatively, run codetools::findGlobals on each of your package's functions.
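
e.g., roughly like this (a sketch; assumes the package is installed or loaded via devtools::load_all()):

ns  <- asNamespace("konfound")
fns <- Filter(function(x) is.function(get(x, envir = ns)), ls(ns))
# non-local functions each konfound function calls; anything not in base R
# is a candidate for an importFrom() declaration
globals <- sapply(fns, function(f) {
  codetools::findGlobals(get(f, envir = ns), merge = FALSE)$functions
}, simplify = FALSE)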

@jrosen48
Collaborator

For some reason, after removing imports and suggests, rebuilding, and checking, I am only seeing:

Namespace dependencies missing from DESCRIPTION Imports/Depends entries:
  'dplyr', 'rlang'

when in fact there are far more.

@fabian-s
Author

fabian-s commented Nov 30, 2023

yeah, sorry about that -- you need to delete the NAMESPACE directives as well. If I do so and run

rcmdcheck::rcmdcheck(".", build_args = "--no-build-vignettes")

I see:

✔  checking startup messages can be suppressed (341ms)
W  checking dependencies in R code ...
   '::' or ':::' imports not declared from:
     ‘broom’ ‘broom.mixed’ ‘crayon’ ‘dplyr’ ‘ggplot2’ ‘ggrepel’ ‘lavaan’
     ‘lme4’ ‘margins’ ‘pbkrtest’ ‘purrr’ ‘rlang’ ‘tidyr’
✔  checking S3 generic/method consistency ...
✔  checking replacement functions ...
✔  checking foreign function calls ...
N  checking R code for possible problems (4.8s)
   chisq_p: no visible global function definition for ‘chisq.test’
   chisq_value: no visible global function definition for ‘chisq.test’
   fisher_oddsratio: no visible global function definition for
     ‘fisher.test’
   fisher_p: no visible global function definition for ‘fisher.test’
   konfound_glm: no visible binding for global variable ‘.data’
   konfound_lm: no visible binding for global variable ‘.data’
   konfound_lmer: no visible global function definition for ‘filter’
   konfound_lmer: no visible binding for global variable ‘.data’
   konfound_lmer: no visible global function definition for ‘bind_cols’
   mkonfound: no visible global function definition for ‘enquo’
   mkonfound: no visible global function definition for ‘pull’
   mkonfound: no visible global function definition for ‘select’
   test_cop: no visible binding for global variable ‘ModelLabel’
   test_cop: no visible binding for global variable ‘coef_X’
   Undefined global functions or variables:
     .data ModelLabel bind_cols chisq.test coef_X enquo filter fisher.test
     pull select
   Consider adding
     importFrom("stats", "chisq.test", "filter", "fisher.test")
   to your NAMESPACE file.
✔  checking Rd files ...
✔  checking Rd metadata ...
✔  checking Rd cross-references ...
✔  checking for missing documentation entries ...
W  checking for code/documentation mismatches ...
   Codoc mismatches from documentation object 'tkonfound_fig':
   tkonfound_fig
     Code: function(a, b, c, d, thr_p = 0.05, switch_trm = TRUE, test =
                    "fisher", replace = "control")
     Docs: function(a, b, c, d, thr_p = 0.05, switch_trm = T, test =
                    "fisher", replace = "control")
     Mismatches in argument default values:
       Name: 'switch_trm' Code: TRUE Docs: T
   
✔  checking Rd \usage sections (662ms)
✔  checking Rd contents ...
W  checking for unstated dependencies in examples ...
   '::' or ':::' imports not declared from:
     ‘forcats’ ‘lme4’
   'library' or 'require' calls not declared from:
     ‘forcats’ ‘lme4’

proceed from there, I guess
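
i.e., declare the packages you actually use in the DESCRIPTION Imports: field and add the matching importFrom() directives (by hand or via roxygen tags). A rough sketch, not the complete list -- and note that the NOTE's importFrom("stats", "filter") suggestion is only a guess; the code above presumably wants dplyr::filter:

# NAMESPACE (sketch)
importFrom(stats, chisq.test)
importFrom(stats, fisher.test)
importFrom(dplyr, filter)
importFrom(dplyr, bind_cols)
importFrom(dplyr, pull)
importFrom(dplyr, select)
importFrom(rlang, enquo)
importFrom(rlang, .data)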

@fabian-s
Author

the functions you import via "::" or ":::" are:

fabians@fabians-T490s:~/Downloads/konfound-master/R$ grep "::" *.R -n 1
cop_pse_auxiliary.R:117:            lavaan::sem(model, 
cop_pse_auxiliary.R:132:        fit <- lavaan::sem(model,
cop_pse_auxiliary.R:136:        R2 <- (sdy^2 - lavaan::parameterEstimates(fit)[4,]$est) / sdy^2
cop_pse_auxiliary.R:137:        betaX <- lavaan::parameterEstimates(fit)[lavaan::parameterEstimates(fit)$label == 'beta1',]$est
cop_pse_auxiliary.R:138:        seX <- lavaan::parameterEstimates(fit)[lavaan::parameterEstimates(fit)$label == 'beta1',]$se
cop_pse_auxiliary.R:139:        betaZ <- lavaan::parameterEstimates(fit)[lavaan::parameterEstimates(fit)$label == 'beta2',]$est
cop_pse_auxiliary.R:140:        seZ <- lavaan::parameterEstimates(fit)[lavaan::parameterEstimates(fit)$label == 'beta2',]$se
cop_pse_auxiliary.R:141:        betaCV <- lavaan::parameterEstimates(fit)[lavaan::parameterEstimates(fit)$label == 'beta3',]$est
cop_pse_auxiliary.R:142:        seCV <- lavaan::parameterEstimates(fit)[lavaan::parameterEstimates(fit)$label == 'beta3',]$se
cop_pse_auxiliary.R:155:            lavaan::sem(model, 
cop_pse_auxiliary.R:171:        fit <- lavaan::sem(model,
cop_pse_auxiliary.R:174:        std_R2 <- 1 - lavaan::parameterEstimates(fit)[4,]$est
cop_pse_auxiliary.R:175:        std_betaX <- lavaan::parameterEstimates(fit)[lavaan::parameterEstimates(fit)$label == 'beta1',]$est
cop_pse_auxiliary.R:176:        std_seX <- lavaan::parameterEstimates(fit)[lavaan::parameterEstimates(fit)$label == 'beta1',]$se
cop_pse_auxiliary.R:177:        std_betaZ <- lavaan::parameterEstimates(fit)[lavaan::parameterEstimates(fit)$label == 'beta2',]$est
cop_pse_auxiliary.R:178:        std_seZ <- lavaan::parameterEstimates(fit)[lavaan::parameterEstimates(fit)$label == 'beta2',]$se
cop_pse_auxiliary.R:179:        std_betaCV <- lavaan::parameterEstimates(fit)[lavaan::parameterEstimates(fit)$label == 'beta3',]$est
cop_pse_auxiliary.R:180:        std_seCV <- lavaan::parameterEstimates(fit)[lavaan::parameterEstimates(fit)$label == 'beta3',]$se
cop_pse_auxiliary.R:249:            lavaan::sem(model, 
cop_pse_auxiliary.R:264:        fit <- lavaan::sem(model,
cop_pse_auxiliary.R:268:        R2 <- (sdy^2 - lavaan::parameterEstimates(fit)[3,]$est) / sdy^2
cop_pse_auxiliary.R:269:        betaX <- lavaan::parameterEstimates(fit)[lavaan::parameterEstimates(fit)$label == 'beta1',]$est
cop_pse_auxiliary.R:270:        seX <- lavaan::parameterEstimates(fit)[lavaan::parameterEstimates(fit)$label == 'beta1',]$se
cop_pse_auxiliary.R:271:        betaZ <- lavaan::parameterEstimates(fit)[lavaan::parameterEstimates(fit)$label == 'beta2',]$est
cop_pse_auxiliary.R:272:        seZ <- lavaan::parameterEstimates(fit)[lavaan::parameterEstimates(fit)$label == 'beta2',]$se
cop_pse_auxiliary.R:296:            lavaan::sem(model, 
cop_pse_auxiliary.R:311:        fit <- lavaan::sem(model,
cop_pse_auxiliary.R:315:        R2 <- (sdy^2 - lavaan::parameterEstimates(fit)[2,]$est) / sdy^2
cop_pse_auxiliary.R:316:        betaX <- lavaan::parameterEstimates(fit)[lavaan::parameterEstimates(fit)$label == 'beta1',]$est
cop_pse_auxiliary.R:317:        seX <- lavaan::parameterEstimates(fit)[lavaan::parameterEstimates(fit)$label == 'beta1',]$se
helper_output_dataframe.R:5:    df <- dplyr::tibble(
helper_output_dataframe.R:17:    df <- dplyr::tibble(
helper_output_print.R:5:    cat(crayon::bold("Robustness of Inference to Replacement (RIR):\n"))
helper_output_print.R:29:    cat(crayon::underline("Citation:"), "Frank, K.A., Maroulis, S., Duong, M., and Kelcey, B. (2013).")
helper_output_print.R:35:    cat(crayon::italic("Education, Evaluation and Policy Analysis, 35"), "437-460.")
helper_output_print.R:39:    cat(crayon::bold("Impact Threshold for a Confounding Variable:\n"))
helper_output_print.R:94:    cat(crayon::underline("Citation:"))
helper_output_print.R:98:    cat("inference of a regression coefficient.", crayon::italic("Sociological Methods and Research, 29"), "(2), 147-194")
helper_output_table.R:6:  model_output <- broom::tidy(model_object) # tidying output
helper_output_table.R:15:    cor_df <- as.data.frame(stats::cor(d))
helper_output_table.R:19:  model_output <- purrr::modify_if(model_output, is.numeric, round, digits = 3)
helper_plot_correlation.R:14:  p <- ggplot2::ggplot(d, ggplot2::aes_string(x = "x", y = "y")) +
helper_plot_correlation.R:17:    ggplot2::geom_segment(ggplot2::aes(y = .1), xend = 0, yend = .9, arrow = ggplot2::arrow(), size = 2.5, color = "#A6CEE3") + # straight up
helper_plot_correlation.R:18:    ggplot2::geom_segment(ggplot2::aes(x = .1), xend = 1, yend = .9, arrow = ggplot2::arrow(), size = 2.5, color = "#A6CEE3") + # hypotenuse
helper_plot_correlation.R:19:    ggplot2::geom_segment(ggplot2::aes(x = .15, y = 1), xend = .9, yend = 1, arrow = ggplot2::arrow(), size = 2.5, color = "#A6CEE3") + # straight across
helper_plot_correlation.R:22:    ggplot2::annotate("text", x = 0, y = 0, label = paste0("Confounding\nVariable"), fontface = 3) +
helper_plot_correlation.R:23:    ggplot2::annotate("text", x = 0, y = 1, label = paste0("Predictor of Interest"), fontface = 3) +
helper_plot_correlation.R:24:    ggplot2::annotate("text", x = 1, y = 1, label = paste0("Outcome"), fontface = 3) +
helper_plot_correlation.R:27:    # ggplot2::geom_segment(ggplot2::aes(x = .05, y = .25), xend = .275, yend = .65, arrow = ggplot2::arrow(), size = 2.5, color = "#1F78B4") + # straight across
helper_plot_correlation.R:28:    # ggplot2::geom_segment(ggplot2::aes(x = .175, y = .15), xend = .3, yend = .625, arrow = ggplot2::arrow(), size = 2.5, color = "#1F78B4") + # straight across
helper_plot_correlation.R:29:    ggplot2::geom_curve(ggplot2::aes(x = .04, y = .325, xend = .35, yend = .825), curvature = -.35, size = 2.5, color = "#1F78B4", arrow = ggplot2::arrow()) +
helper_plot_correlation.R:30:    ggplot2::geom_curve(ggplot2::aes(x = .225, y = .23, xend = .4, yend = .8), curvature = .35, size = 2.5, color = "#1F78B4", arrow = ggplot2::arrow()) +
helper_plot_correlation.R:32:    ggplot2::geom_segment(ggplot2::aes(x = .37, y = .81), xend = .465, yend = .925, arrow = ggplot2::arrow(), size = 2.5, color = "#1F78B4") + # straight across
helper_plot_correlation.R:35:    ggplot2::annotate("text", x = -.125, y = .5, label = paste0("Rx.cv | Z =\n ", r_con), fontface = 1) +
helper_plot_correlation.R:36:    ggplot2::annotate("text", x = .575, y = .35, label = paste0("Ry.cv | Z =\n ", r_con), fontface = 1) +
helper_plot_correlation.R:37:    ggplot2::annotate("text", x = .25, y = .525, label = paste0("Rx.cv | Z * Ry.cv | Z =\n ", round(r_con^2, 3)), fontface = 1) +
helper_plot_correlation.R:40:    ggplot2::xlim(-.15, 1.1) +
helper_plot_correlation.R:41:    ggplot2::ylim(-.05, 1) +
helper_plot_correlation.R:42:    ggplot2::theme_void() +
helper_plot_correlation.R:43:    ggplot2::ggtitle(the_title)
helper_plot_threshold.R:8:    dd <- dplyr::tibble(
helper_plot_threshold.R:13:    dd <- dplyr::mutate(dd, `Above Threshold` = est_eff - beta_threshold)
helper_plot_threshold.R:14:    dd <- dplyr::rename(dd, `Below Threshold` = beta_threshold)
helper_plot_threshold.R:16:    dd <- dplyr::select(dd, -est_eff)
helper_plot_threshold.R:17:    dd <- tidyr::gather(dd, key, val)
helper_plot_threshold.R:18:    dd <- dplyr::mutate(dd, inference = "group")
helper_plot_threshold.R:20:    y_thresh <- dplyr::filter(dd, key == "Below Threshold")
helper_plot_threshold.R:21:    y_thresh <- dplyr::pull(dplyr::select(y_thresh, val))
helper_plot_threshold.R:29:    dd <- dplyr::tibble(est_eff = est_eff, beta_threshold = beta_threshold)
helper_plot_threshold.R:30:    dd <- dplyr::mutate(dd, `Above Estimated Effect, Below Threshold` = abs(est_eff - beta_threshold))
helper_plot_threshold.R:31:    dd <- dplyr::mutate(dd, `Below Threshold` = est_eff)
helper_plot_threshold.R:32:    dd <- dplyr::select(dd, -beta_threshold)
helper_plot_threshold.R:34:    dd <- dplyr::select(dd, -est_eff)
helper_plot_threshold.R:35:    dd <- tidyr::gather(dd, key, val)
helper_plot_threshold.R:36:    dd <- dplyr::mutate(dd, inference = "group")
helper_plot_threshold.R:49:  p <- ggplot2::ggplot(dd, ggplot2::aes(x = inference, y = val, fill = key)) +
helper_plot_threshold.R:50:    ggplot2::geom_col(position = "stack") +
helper_plot_threshold.R:52:    ggplot2::geom_hline(yintercept = est_eff, color = "black") +
helper_plot_threshold.R:53:    ggplot2::annotate("text", x = 1, y = effect_text, label = "Estimated Effect") +
helper_plot_threshold.R:55:    ggplot2::geom_hline(yintercept = y_thresh, color = "red") +
helper_plot_threshold.R:56:    ggplot2::annotate("text", x = 1, y = y_thresh_text, label = "Threshold") +
helper_plot_threshold.R:57:    # ggplot2::geom_text(aes(label = "Effect"), vjust = -.5) + this is discussed here: https://github.com/jrosen48/konfound/issues/5
helper_plot_threshold.R:59:    ggplot2::scale_fill_manual("", values = cols) +
helper_plot_threshold.R:60:    ggplot2::theme_bw() +
helper_plot_threshold.R:61:    ggplot2::theme(axis.text.x = ggplot2::element_blank(), axis.ticks = ggplot2::element_blank()) +
helper_plot_threshold.R:62:    ggplot2::xlab(NULL) +
helper_plot_threshold.R:63:    ggplot2::ylab("Effect (abs. value)") +
helper_plot_threshold.R:64:    ggplot2::theme(legend.position = "top")
konfound-glm-dichotomous.R:5:  tidy_output <- broom::tidy(model_object) # tidying output
konfound-glm-dichotomous.R:6:  glance_output <- broom::glance(model_object)
konfound-glm.R:4:  tidy_output <- broom::tidy(model_object) # tidying output
konfound-glm.R:5:  glance_output <- broom::glance(model_object)
konfound-glm.R:11:    coef_df$est_eff <- suppressWarnings(summary(margins::margins(model_object))$AME[names(summary(margins::margins(model_object))$AME) == tested_variable_string])
konfound-glm.R:15:  est_eff <- suppressWarnings(summary(margins::margins(model_object))$AME[names(summary(margins::margins(model_object))$AME) == tested_variable_string])
konfound-glm.R:40:    term_names <- dplyr::select(tidy_output, var_name = .data$term) # remove the first row for intercept
konfound-glm.R:41:    term_names <- dplyr::filter(term_names, .data$var_name != "(Intercept)")
konfound-glm.R:42:    return(dplyr::bind_cols(term_names, o))
konfound-lmer.R:4:  L <- diag(rep(1, length(lme4::fixef(model_object))))
konfound-lmer.R:6:  out <- suppressWarnings(purrr::map_dbl(L, pbkrtest::get_Lb_ddf, object = model_object))
konfound-lmer.R:7:  names(out) <- names(lme4::fixef(model_object))
konfound-lmer.R:12:  tidy_output <- broom.mixed::tidy(model_object) # tidying output
konfound-lm.R:4:  tidy_output <- broom::tidy(model_object) # tidying output
konfound-lm.R:5:  glance_output <- broom::glance(model_object)
konfound-lm.R:37:    term_names <- dplyr::select(tidy_output, var_name = .data$term) # remove the first row for intercept
konfound-lm.R:38:    term_names <- dplyr::filter(term_names, .data$var_name != "(Intercept)")
konfound-lm.R:39:    return(dplyr::bind_cols(term_names, o))
konfound.R:20:#'   d <- forcats::gss_cat
konfound.R:31:#'   m3 <- fm1 <- lme4::lmer(Reaction ~ Days + (1 | Subject), sleepstudy)
konfound.R:55:    stop("konfound() is currently implemented for models estimated with lm(), glm(), and lme4::lmer(); consider using pkonfound() instead")
konfound.R:61:  tested_variable_enquo <- rlang::enquo(tested_variable) # dealing with non-standard evaluation (so unquoted names for tested_variable can be used)
konfound.R:62:  tested_variable_string <- rlang::quo_name(tested_variable_enquo)
mkonfound.R:31:  results_df <- suppressWarnings(purrr::map2_dfr(.x = t_vec, .y = df_vec, .f = core_sensitivity_mkonfound))
mkonfound.R:34:    results_df$action <- dplyr::case_when(
mkonfound.R:39:    p <- ggplot2::ggplot(results_df, ggplot2::aes_string(x = "pct_bias_to_change_inference", fill = "action")) +
mkonfound.R:40:      ggplot2::geom_histogram() +
mkonfound.R:41:      ggplot2::scale_fill_manual("", values = c("#1F78B4", "#A6CEE3")) +
mkonfound.R:42:      ggplot2::theme_bw() +
mkonfound.R:43:      ggplot2::ggtitle("Histogram of Percent Bias") +
mkonfound.R:44:      ggplot2::facet_grid(~action) +
mkonfound.R:45:      ggplot2::scale_y_continuous(breaks = seq_len(nrow(results_df))) +
mkonfound.R:46:      ggplot2::theme(legend.position = "none") +
mkonfound.R:47:      ggplot2::ylab("Count") +
mkonfound.R:48:      ggplot2::xlab("Percent Bias")
mkonfound.R:57:  critical_t <- stats::qt(1 - (alpha / tails), df)
mkonfound.R:88:  out <- dplyr::data_frame(t, df, action, inference, pct_bias, itcv, r_con)
nonlinear_auxiliary.R:371:thr_t <- stats::qt(1 - thr_p/2, n_obs - 1)*(-1)
nonlinear_auxiliary.R:373:thr_t <- stats::qt(1 - thr_p/2, n_obs - 1)
nonlinear_auxiliary.R:711:    thr_t <- stats::qt(1 - thr_p/2, n_obs - 1)*(-1)
nonlinear_auxiliary.R:713:    thr_t <- stats::qt(1 - thr_p/2, n_obs - 1)
pkonfound.R:57:# my_table <- tibble::tribble(
pkonfound.R:176:    a <- dplyr::pull(two_by_two_table[1, 1])
pkonfound.R:177:    b <- dplyr::pull(two_by_two_table[1, 2])
pkonfound.R:178:    c <- dplyr::pull(two_by_two_table[2, 1])
pkonfound.R:179:    d <- dplyr::pull(two_by_two_table[2, 2])
test_cop.R:231:fig <- ggplot2::ggplot(figTable, ggplot2::aes(x = ModelLabel)) +
test_cop.R:232:    ggplot2::geom_point(ggplot2::aes(y = coef_X, group = cat, shape = cat), color = "blue", size = 3) + 
test_cop.R:233:    ggplot2::scale_shape_manual(values = c(16, 1)) +
test_cop.R:234:    ggplot2::geom_point(ggplot2::aes(y = R2/scale), color = "#7CAE00", shape = 18, size = 4) + 
test_cop.R:236:    ggplot2::geom_line(ggplot2::aes(y = R2/scale, group = cat), linetype = "solid", color = "#7CAE00") + 
test_cop.R:237:    ggplot2::geom_line(ggplot2::aes(y = coef_X, group = cat, linetype = cat), color = "blue") + 
test_cop.R:238:    ggplot2::scale_y_continuous(
test_cop.R:242:      sec.axis = ggplot2::sec_axis(~.* scale, 
test_cop.R:244:    ggplot2::theme(axis.title.x = ggplot2::element_blank(),
test_cop.R:246:          axis.line.y.right = ggplot2::element_line(color = "#7CAE00"),
test_cop.R:247:          axis.title.y.right = ggplot2::element_text(color = "#7CAE00"),
test_cop.R:248:          axis.text.y.right = ggplot2::element_text(color = "#7CAE00"),
test_cop.R:249:          axis.line.y.left = ggplot2::element_line(color = "blue"),
test_cop.R:250:          axis.title.y.left = ggplot2::element_text(color = "blue"),
test_cop.R:251:          axis.text.y.left = ggplot2::element_text(color = "blue"),
test_cop.R:252:          axis.line.x.bottom = ggplot2::element_line(color = "black"),
test_cop.R:253:          axis.text.x.bottom = ggplot2::element_text(color = "black"))
test_cop.R:260:    critical_t <- stats::qt(1 - (alpha / tails), n_obs - n_covariates - 2) * -1
test_cop.R:262:    critical_t <- stats::qt(1 - (alpha / tails), n_obs - n_covariates - 2)
test_sensitivity_ln.R:22:    thr_t <- stats::qt(1 - (alpha / tails), n_obs - n_covariates - 3) * -1
test_sensitivity_ln.R:24:    thr_t <- stats::qt(1 - (alpha / tails), n_obs - n_covariates - 3)
test_sensitivity_ln.R:316:    # cat(crayon::bold("Background Information:"))
test_sensitivity_ln.R:321:    cat(crayon::bold("Conclusion:"))
test_sensitivity_ln.R:323:    cat(crayon::underline("User-entered Table:"))
test_sensitivity_ln.R:344:    cat(crayon::underline("Transfer Table:"))
test_sensitivity_ln.R:353:    cat(crayon::bold("RIR:"))
test_sensitivity.R:18:    cat(rlang::expr_text(substitute(object)), "$", name, "\n", sep = "")
test_sensitivity.R:43:    critical_t <- stats::qt(1 - (alpha / tails), n_obs - n_covariates - 3) * -1
test_sensitivity.R:46:    critical_t <- stats::qt(1 - (alpha / tails), n_obs - n_covariates - 3)
test_sensitivity.R:109:    konfound_output <- purrr::map(
tkonfound_fig.R:233:fig1 <- ggplot2::ggplot(meta, ggplot2::aes_string(x="RIR", y="pdif"))+
tkonfound_fig.R:234:  ggplot2::geom_line(ggplot2::aes_string(y="pdif"), size = 1) +
tkonfound_fig.R:235:  ggplot2::geom_point(ggplot2::aes_string(y="pdif", shape = "current",fill = "sigpoint"))+
tkonfound_fig.R:236:  ggplot2::scale_fill_manual(values=fillcol)+
tkonfound_fig.R:237:  ggplot2::scale_shape_manual(values=pointshape)+
tkonfound_fig.R:238:  ggrepel::geom_label_repel(ggplot2::aes_string(label="currentlabel"))+
tkonfound_fig.R:239:  ggplot2::geom_hline(yintercept = pos_thr_pdif, linetype = "dashed", color="green4", size = 1)+
tkonfound_fig.R:240:  ggplot2::geom_hline(yintercept = neg_thr_pdif, linetype = "dashed", color="red", size = 1)+
tkonfound_fig.R:241:  ggplot2::scale_y_continuous(name="Difference in probability of successful outcome (treatment - control)")+
tkonfound_fig.R:242:  ggplot2::scale_x_continuous(name="RIR (Fragility)", 
tkonfound_fig.R:247:  ggplot2::theme(#axis.title = ggplot2::element_text(size = 15),
tkonfound_fig.R:248:        #axis.text= ggplot2::element_text(size = 14),
tkonfound_fig.R:249:        panel.grid.major = ggplot2::element_blank(), 
tkonfound_fig.R:250:        panel.grid.minor = ggplot2::element_blank(),
tkonfound_fig.R:251:        panel.background = ggplot2::element_blank(), 
tkonfound_fig.R:252:        axis.line = ggplot2::element_line(colour = "black"),
tkonfound_fig.R:278:fig2 <- ggplot2::ggplot(zoom, ggplot2::aes_string(x="RIR",y="pdif"))+
tkonfound_fig.R:279:  ggplot2::geom_line(ggplot2::aes_string(y="pdif"), size = 1) +
tkonfound_fig.R:280:  ggplot2::geom_point(ggplot2::aes_string(y="pdif", shape = "current",fill = "sigpoint"), 
tkonfound_fig.R:282:  ggrepel::geom_label_repel(ggplot2::aes_string(label="label"))+
tkonfound_fig.R:283:  ggplot2::scale_fill_manual(values=fillcol)+
tkonfound_fig.R:284:  ggplot2::scale_shape_manual(values=pointshape)+
tkonfound_fig.R:285:  ggplot2::scale_y_continuous(name="Difference in probability of successful outcome (treatment - control)")+
tkonfound_fig.R:286:  ggplot2::scale_x_continuous(name="RIR (Fragility)", 
tkonfound_fig.R:291:  ggplot2::theme(panel.grid.major = ggplot2::element_blank(), 
tkonfound_fig.R:292:                 panel.grid.minor = ggplot2::element_blank(),
tkonfound_fig.R:293:                 panel.background = ggplot2::element_blank(), 
tkonfound_fig.R:294:                 axis.line = ggplot2::element_line(colour = "black"),
tkonfound_fig.R:298:  fig2 <- fig2 + ggplot2::geom_hline(yintercept = pos_thr_pdif, linetype = "dashed", color="green4", size = 1)
tkonfound_fig.R:302:  fig2 <- fig2 + ggplot2::geom_hline(yintercept = neg_thr_pdif, linetype = "dashed", color="red", size = 1)
tkonfound_fig.R:316:#    thr_i_t <- stats::qt(1 - thr_p/2, size - 1) 
tkonfound_fig.R:320:#  fig3 <- ggplot2::ggplot(meta3, aes(x=nobs, y=RISperc))+
tkonfound_fig.R:324:#    scale_x_continuous(name="Sample Size", labels=scales::comma)+
tkonfound.R:213:    cat(crayon::bold("Background Information:"))
tkonfound.R:226:    cat(crayon::bold("Conclusion:"))
tkonfound.R:237:    cat(crayon::underline("User-entered Table:"))
tkonfound.R:244:    cat(crayon::underline("Transfer Table:"))
tkonfound.R:248:    cat(crayon::bold("RIR:"))
zzz.R:2:if (getRversion() >= "2.15.1") utils::globalVariables(c("inference", "key", "replace_null_cases", "percent_bias", "val"))
zzz.R:14:  utils::browseURL("http://konfound-it.com")
zzz.R:18:# if (getRversion() >= "2.15.1") utils::globalVariables(c("itcv", "term", "unstd_beta1", "var_name", "x", "y"))

@wwang93
Collaborator

wwang93 commented Jan 8, 2024

@fabian-s Thanks to your suggestion, we added roxygen2 comments declaring the dependencies of each function, regenerated the NAMESPACE file, and updated the DESCRIPTION file, and in the end the package basically passed R CMD check!
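
The pattern looks roughly like this (a sketch with a placeholder function, not the actual konfound source):

#' @importFrom stats chisq.test fisher.test
#' @importFrom dplyr select filter pull bind_cols
#' @importFrom rlang enquo .data
example_helper <- function(tab) stats::chisq.test(tab)$p.value
# NAMESPACE is then regenerated from these tags with devtools::document()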

rcmdcheck::rcmdcheck(".")
── R CMD build ─────────────────────────────────────────────────────────────────────────────────────────────────
✔ checking for file ‘.../DESCRIPTION’ ...
─ preparing ‘konfound’: (393ms)
✔ checking DESCRIPTION meta-information ...
─ installing the package to build vignettes
✔ creating vignettes (7.7s)
─ checking for LF line-endings in source and make files and shell scripts
─ checking for empty or unneeded directories
─ building ‘konfound_0.4.0.tar.gz’

── R CMD check ─────────────────────────────────────────────────────────────────────────────────────────────────
─ using log directory ‘/private/var/folders/hd/3_rrkpw50r10pbd7p9pxm_s40000gn/T/RtmpHIclFs/file119002eec5a2f/konfound.Rcheck’
─ using R version 4.3.0 (2023-04-21)
─ using platform: aarch64-apple-darwin20 (64-bit)
─ R was compiled by
Apple clang version 14.0.0 (clang-1400.0.29.202)
GNU Fortran (GCC) 12.2.0
─ running under: macOS Big Sur 11.5.1
─ using session charset: UTF-8
✔ checking for file ‘konfound/DESCRIPTION’ ...
─ checking extension type ... Package
─ this is package ‘konfound’ version ‘0.4.0’
─ package encoding: UTF-8
✔ checking package namespace information
✔ checking package dependencies (1.1s)
✔ checking if this is a source package ...
✔ checking if there is a namespace
✔ checking for executable files ...
N checking for hidden files and directories ...
Found the following hidden files and directories:
joss-paper/.github
These were most likely included in error. See section ‘Package
structure’ in the ‘Writing R Extensions’ manual.
✔ checking for portable file names
✔ checking for sufficient/correct file permissions
✔ checking whether package ‘konfound’ can be installed (4.7s)
✔ checking installed package size
✔ checking package directory
✔ checking ‘build’ directory
✔ checking DESCRIPTION meta-information ...
✔ checking top-level files
✔ checking for left-over files
✔ checking index information ...
✔ checking package subdirectories ...
✔ checking R files for non-ASCII characters ...
✔ checking R files for syntax errors ...
✔ checking whether the package can be loaded (1.1s)
✔ checking whether the package can be loaded with stated dependencies (947ms)
✔ checking whether the package can be unloaded cleanly (915ms)
✔ checking whether the namespace can be loaded with stated dependencies (906ms)
✔ checking whether the namespace can be unloaded cleanly (1.1s)
✔ checking startup messages can be suppressed (2.1s)
✔ checking dependencies in R code (1s)
✔ checking S3 generic/method consistency (1.1s)
✔ checking replacement functions (910ms)
✔ checking foreign function calls (1s)
✔ checking R code for possible problems (4.1s)
✔ checking Rd files ...
✔ checking Rd metadata ...
✔ checking Rd cross-references ...
✔ checking for missing documentation entries (933ms)
✔ checking for code/documentation mismatches (2.8s)
✔ checking Rd \usage sections (1.2s)
✔ checking Rd contents ...
✔ checking for unstated dependencies in examples ...
✔ checking contents of ‘data’ directory
✔ checking data for non-ASCII characters ...
✔ checking LazyData
✔ checking data for ASCII and uncompressed saves ...
✔ checking installed files from ‘inst/doc’
✔ checking files in ‘vignettes’ ...
✔ checking examples (3.5s)
✔ checking for unstated dependencies in ‘tests’ ...
─ checking tests ...
✔ Running ‘testthat.R’ (1.7s)
✔ checking for unstated dependencies in vignettes ...
✔ checking package vignettes in ‘inst/doc’ ...
─ checking running R code from vignettes
‘introduction-to-konfound.Rmd’ using ‘UTF-8’... OK
NONE
✔ checking re-building of vignette outputs (3.5s)
✔ checking PDF version of manual (2.7s)

See
‘/private/var/folders/hd/3_rrkpw50r10pbd7p9pxm_s40000gn/T/RtmpHIclFs/file119002eec5a2f/konfound.Rcheck/00check.log’
for details.

── R CMD check results ───────────────────────────────────────────────────────────────────── konfound 0.4.0 ────
Duration: 39.6s

❯ checking for hidden files and directories ... NOTE
Found the following hidden files and directories:
joss-paper/.github
These were most likely included in error. See section ‘Package
structure’ in the ‘Writing R Extensions’ manual.

0 errors ✔ | 0 warnings ✔ | 1 note ✖

@fabian-s
Author

fabian-s commented Jan 9, 2024

not the case for me on commit 78ec6f9, I see:

==> devtools::check(document = FALSE)

══ Building ═══════════════════════════════════════════════════════════
Setting env vars:
• CFLAGS    : -Wall -pedantic -fdiagnostics-color=always
• CXXFLAGS  : -Wall -pedantic -fdiagnostics-color=always
• CXX11FLAGS: -Wall -pedantic -fdiagnostics-color=always
• CXX14FLAGS: -Wall -pedantic -fdiagnostics-color=always
• CXX17FLAGS: -Wall -pedantic -fdiagnostics-color=always
• CXX20FLAGS: -Wall -pedantic -fdiagnostics-color=always
── R CMD build ────────────────────────────────────────────────────────
✔  checking for file ‘/home/fabians/reviews/JOSS/konfound/DESCRIPTION’ ...
─  preparing ‘konfound’:
✔  checking DESCRIPTION meta-information ...
─  installing the package to build vignettes
✔  creating vignettes (8.4s)
─  checking for LF line-endings in source and make files and shell scripts
─  checking for empty or unneeded directories
─  building ‘konfound_0.4.0.tar.gz’
   
══ Checking ═══════════════════════════════════════════════════════════
Setting env vars:
• _R_CHECK_CRAN_INCOMING_USE_ASPELL_           : TRUE
• _R_CHECK_CRAN_INCOMING_REMOTE_               : FALSE
• _R_CHECK_CRAN_INCOMING_                      : FALSE
• _R_CHECK_FORCE_SUGGESTS_                     : FALSE
• _R_CHECK_PACKAGES_USED_IGNORE_UNUSED_IMPORTS_: FALSE
• NOT_CRAN                                     : true
── R CMD check ────────────────────────────────────────────────────────
─  using log directory ‘/home/fabians/reviews/JOSS/konfound.Rcheck’
─  using R version 4.3.2 (2023-10-31)
─  using platform: x86_64-pc-linux-gnu (64-bit)
─  R was compiled by
       gcc (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0
       GNU Fortran (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0
─  running under: Linux Mint 20
─  using session charset: UTF-8
─  using options ‘--no-manual --as-cran’
✔  checking for file ‘konfound/DESCRIPTION’
─  checking extension type ... Package
─  this is package ‘konfound’ version ‘0.4.0’
─  package encoding: UTF-8
✔  checking package namespace information
✔  checking package dependencies (1.3s)
✔  checking if this is a source package
✔  checking if there is a namespace
✔  checking for executable files ...
N  checking for hidden files and directories
   Found the following hidden files and directories:
     joss-paper/.github
   These were most likely included in error. See section ‘Package
   structure’ in the ‘Writing R Extensions’ manual.
✔  checking for portable file names ...
✔  checking for sufficient/correct file permissions
✔  checking serialization versions
✔  checking whether package ‘konfound’ can be installed (3.9s)
✔  checking installed package size ...
✔  checking package directory
✔  checking for future file timestamps ...
✔  checking ‘build’ directory
✔  checking DESCRIPTION meta-information ...
N  checking top-level files
   Non-standard files/directories found at top level:
     ‘10.21105.joss.05779.pdf’ ‘joss-paper’
✔  checking for left-over files
✔  checking index information ...
✔  checking package subdirectories ...
✔  checking R files for non-ASCII characters ...
✔  checking R files for syntax errors ...
✔  checking whether the package can be loaded (480ms)
✔  checking whether the package can be loaded with stated dependencies (382ms)
✔  checking whether the package can be unloaded cleanly (382ms)
✔  checking whether the namespace can be loaded with stated dependencies (488ms)
✔  checking whether the namespace can be unloaded cleanly (576ms)
✔  checking loading without being on the library search path (568ms)
✔  checking startup messages can be suppressed (1s)
N  checking dependencies in R code (2.1s)
   Namespaces in Imports field not imported from:
     ‘mice’ ‘tibble’
     All declared Imports should be used.
✔  checking S3 generic/method consistency (555ms)
✔  checking replacement functions (462ms)
✔  checking foreign function calls (692ms)
─  checking R code for possible problems ... [21s/11s] NOTE (11.1s)
   test_cop: no visible binding for global variable ‘ModelLabel’
   test_cop: no visible binding for global variable ‘coef_X’
   Undefined global functions or variables:
     ModelLabel coef_X
   
   Found if() conditions comparing class() to string:
   File ‘konfound/R/cop_pse_auxiliary.R’: if (class(flag_cov) == "lavaan") ...
   File ‘konfound/R/cop_pse_auxiliary.R’: if (class(flag_cor) == "lavaan") ...
   File ‘konfound/R/cop_pse_auxiliary.R’: if (class(flag_cor) == "lavaan" && class(flag_cov) == "lavaan") ...
   File ‘konfound/R/cop_pse_auxiliary.R’: if (class(flag_cov) == "lavaan") ...
   File ‘konfound/R/cop_pse_auxiliary.R’: if (class(flag_cov) == "lavaan") ...
   File ‘konfound/R/cop_pse_auxiliary.R’: if (class(flag_cov) == "lavaan") ...
   File ‘konfound/R/cop_pse_auxiliary.R’: if (class(flag_cov) == "lavaan") ...
   Use inherits() (or maybe is()) instead.
✔  checking Rd files ...
✔  checking Rd metadata ...
✔  checking Rd line widths ...
✔  checking Rd cross-references ...
✔  checking for missing documentation entries (495ms)
W  checking for code/documentation mismatches (748ms)
   Codoc mismatches from documentation object 'tkonfound_fig':
   tkonfound_fig
     Code: function(a, b, c, d, thr_p = 0.05, switch_trm = TRUE, test =
                    "fisher", replace = "control")
     Docs: function(a, b, c, d, thr_p = 0.05, switch_trm = T, test =
                    "fisher", replace = "control")
     Mismatches in argument default values:
       Name: 'switch_trm' Code: TRUE Docs: T
   
✔  checking Rd \usage sections (2s)
✔  checking Rd contents ...
✔  checking for unstated dependencies in examples ...
✔  checking contents of ‘data’ directory ...
✔  checking data for non-ASCII characters ...
✔  checking LazyData
✔  checking data for ASCII and uncompressed saves ...
✔  checking installed files from ‘inst/doc’ ...
✔  checking files in ‘vignettes’ ...
E  checking examples (1.5s)
   Running examples in ‘konfound-Ex.R’ failed
   The error most likely occurred in:
   
   > base::assign(".ptime", proc.time(), pos = "CheckExEnv")
   > ### Name: konfound
   > ### Title: Perform sensitivity analysis on fitted models
   > ### Aliases: konfound
   > 
   > ### ** Examples
   > 
   > # using lm() for linear models
   > m1 <- lm(mpg ~ wt + hp, data = mtcars)
   > konfound(m1, wt)
   Robustness of Inference to Replacement (RIR):
   To invalidate an inference,  66.521 % of the estimate would have to be due to bias. 
   This is based on a threshold of -1.298 for statistical significance (alpha = 0.05).
   
   To invalidate an inference,  21  observations would have to be replaced with cases
   for which the effect is 0 (RIR = 21).
   
   See Frank et al. (2013) for a description of the method.
   
   Citation: Frank, K.A., Maroulis, S., Duong, M., and Kelcey, B. (2013).
   What would it take to change an inference?
   Using Rubin's causal model to interpret the robustness of causal inferences.
   Education, Evaluation and Policy Analysis, 35 437-460.
   For more detailed output, consider setting `to_return` to table
   To consider other predictors of interest, consider setting `test_all` to TRUE.
   > konfound(m1, wt, test_all = TRUE)
   Note that this output is calculated based on the correlation-based approach used in mkonfound()
   Warning in data.frame(t = est_eff/std_err, df = (n_obs - n_covariates -  :
     row names were found from a short variable and have been discarded
   # A tibble: 2 × 8
     var_name     t    df action       inference pct_bias_to_change_i…¹  itcv r_con
     <chr>    <dbl> <dbl> <chr>        <chr>                      <dbl> <dbl> <dbl>
   1 wt       -6.13    29 to_invalida… reject_n…                   52.7 0.614 0.784
   2 hp       -3.52    29 to_invalida… reject_n…                   35.1 0.298 0.546
   # ℹ abbreviated name: ¹​pct_bias_to_change_inference
   > konfound(m1, wt, to_return = "table")
   Dependent variable is mpg 
   Warning: Unknown or uninitialised column: `itcv`.
   Error in abs(konfound(model_object, !!tested_variable, to_return = "raw_output")$itcv) : 
     non-numeric argument to mathematical function
   Calls: konfound -> konfound_lm -> test_sensitivity -> output_table
   Execution halted
✔  checking for unstated dependencies in ‘tests’ ...
─  checking tests ...
─  Running ‘testthat.R’ (4.7s)
E  Some test files failed
   Running the tests in ‘tests/testthat.R’ failed.
   Last 13 lines of output:
     row names were found from a short variable and have been discarded
     Backtrace:
         ▆
      1. └─konfound::konfound(testmod1, texp, test_all = TRUE, to_return = "raw_output") at test-pkonfound.R:10:1
      2.   └─konfound:::konfound_lm(...)
      3.     └─base::data.frame(...)
     
     ══ Failed tests ════════════════════════════════════════════════════════════════
     ── Failure ('test-konfound.R:30:5'): konfound works for lme4 model ─────────────
     output3$percent_bias_to_change_inference not equal to 84.826.
     target is NULL, current is numeric
     
     [ FAIL 1 | WARN 6 | SKIP 0 | PASS 16 ]
     Error: Test failures
     Execution halted
✔  checking for unstated dependencies in vignettes (346ms)
✔  checking package vignettes in ‘inst/doc’ ...
✔  checking re-building of vignette outputs (6.5s)
✔  checking for non-standard things in the check directory ...
✔  checking for detritus in the temp directory
   
   See
     ‘/home/fabians/reviews/JOSS/konfound.Rcheck/00check.log’
   for details.
   
   
── R CMD check results ──────────────────────────── konfound 0.4.0 ────
Duration: 44.4s

❯ checking examples ... ERROR
  Running examples in ‘konfound-Ex.R’ failed
  The error most likely occurred in:
  
  > base::assign(".ptime", proc.time(), pos = "CheckExEnv")
  > ### Name: konfound
  > ### Title: Perform sensitivity analysis on fitted models
  > ### Aliases: konfound
  > 
  > ### ** Examples
  > 
  > # using lm() for linear models
  > m1 <- lm(mpg ~ wt + hp, data = mtcars)
  > konfound(m1, wt)
  Robustness of Inference to Replacement (RIR):
  To invalidate an inference,  66.521 % of the estimate would have to be due to bias. 
  This is based on a threshold of -1.298 for statistical significance (alpha = 0.05).
  
  To invalidate an inference,  21  observations would have to be replaced with cases
  for which the effect is 0 (RIR = 21).
  
  See Frank et al. (2013) for a description of the method.
  
  Citation: Frank, K.A., Maroulis, S., Duong, M., and Kelcey, B. (2013).
  What would it take to change an inference?
  Using Rubin's causal model to interpret the robustness of causal inferences.
  Education, Evaluation and Policy Analysis, 35 437-460.
  For more detailed output, consider setting `to_return` to table
  To consider other predictors of interest, consider setting `test_all` to TRUE.
  > konfound(m1, wt, test_all = TRUE)
  Note that this output is calculated based on the correlation-based approach used in mkonfound()
  Warning in data.frame(t = est_eff/std_err, df = (n_obs - n_covariates -  :
    row names were found from a short variable and have been discarded
  # A tibble: 2 × 8
    var_name     t    df action       inference pct_bias_to_change_i…¹  itcv r_con
    <chr>    <dbl> <dbl> <chr>        <chr>                      <dbl> <dbl> <dbl>
  1 wt       -6.13    29 to_invalida… reject_n…                   52.7 0.614 0.784
  2 hp       -3.52    29 to_invalida… reject_n…                   35.1 0.298 0.546
  # ℹ abbreviated name: ¹​pct_bias_to_change_inference
  > konfound(m1, wt, to_return = "table")
  Dependent variable is mpg 
  Warning: Unknown or uninitialised column: `itcv`.
  Error in abs(konfound(model_object, !!tested_variable, to_return = "raw_output")$itcv) : 
    non-numeric argument to mathematical function
  Calls: konfound -> konfound_lm -> test_sensitivity -> output_table
  Execution halted

❯ checking tests ...
  See below...

❯ checking for code/documentation mismatches ... WARNING
  Codoc mismatches from documentation object 'tkonfound_fig':
  tkonfound_fig
    Code: function(a, b, c, d, thr_p = 0.05, switch_trm = TRUE, test =
                   "fisher", replace = "control")
    Docs: function(a, b, c, d, thr_p = 0.05, switch_trm = T, test =
                   "fisher", replace = "control")
    Mismatches in argument default values:
      Name: 'switch_trm' Code: TRUE Docs: T

❯ checking for hidden files and directories ... NOTE
  Found the following hidden files and directories:
    joss-paper/.github
  These were most likely included in error. See section ‘Package
  structure’ in the ‘Writing R Extensions’ manual.

❯ checking top-level files ... NOTE
  Non-standard files/directories found at top level:
    ‘10.21105.joss.05779.pdf’ ‘joss-paper’

❯ checking dependencies in R code ... NOTE
  Namespaces in Imports field not imported from:
    ‘mice’ ‘tibble’
    All declared Imports should be used.

❯ checking R code for possible problems ... [21s/11s] NOTE
  test_cop: no visible binding for global variable ‘ModelLabel’
  test_cop: no visible binding for global variable ‘coef_X’
  Undefined global functions or variables:
    ModelLabel coef_X
  
  Found if() conditions comparing class() to string:
  File ‘konfound/R/cop_pse_auxiliary.R’: if (class(flag_cov) == "lavaan") ...
  File ‘konfound/R/cop_pse_auxiliary.R’: if (class(flag_cor) == "lavaan") ...
  File ‘konfound/R/cop_pse_auxiliary.R’: if (class(flag_cor) == "lavaan" && class(flag_cov) == "lavaan") ...
  File ‘konfound/R/cop_pse_auxiliary.R’: if (class(flag_cov) == "lavaan") ...
  File ‘konfound/R/cop_pse_auxiliary.R’: if (class(flag_cov) == "lavaan") ...
  File ‘konfound/R/cop_pse_auxiliary.R’: if (class(flag_cov) == "lavaan") ...
  File ‘konfound/R/cop_pse_auxiliary.R’: if (class(flag_cov) == "lavaan") ...
  Use inherits() (or maybe is()) instead.

── Test failures ──────────────────────────────────────── testthat ────

> library(testthat)
> library(konfound)
Sensitivity analysis as described in Frank, Maroulis, Duong, and Kelcey (2013) and in Frank (2000).
For more information visit http://konfound-it.com.
> 
> test_check("konfound")
[ FAIL 1 | WARN 6 | SKIP 0 | PASS 16 ]

══ Warnings ════════════════════════════════════════════════════════════════════
── Warning ('test-konfound.R:30:5'): konfound works for lme4 model ─────────────
Unknown or uninitialised column: `percent_bias_to_change_inference`.
Backtrace:
    ▆
 1. ├─testthat::expect_equal(...) at test-konfound.R:30:5
 2. │ └─testthat::quasi_label(enquo(object), label, arg = "object")
 3. │   └─rlang::eval_bare(expr, quo_get_env(quo))
 4. ├─output3$percent_bias_to_change_inference
 5. └─tibble:::`$.tbl_df`(output3, percent_bias_to_change_inference)
── Warning ('test-mkonfound.r:11:1'): (code run outside of `test_that()`) ──────
row names were found from a short variable and have been discarded
Backtrace:
    ▆
 1. └─konfound::konfound(testmod1, texp, test_all = TRUE, to_return = "raw_output") at test-mkonfound.r:11:1
 2.   └─konfound:::konfound_lm(...)
 3.     └─base::data.frame(...)
── Warning ('test-mkonfound.r:11:1'): (code run outside of `test_that()`) ──────
Use of .data in tidyselect expressions was deprecated in tidyselect 1.2.0.
ℹ Please use `"t"` instead of `.data$t`
Backtrace:
     ▆
  1. └─konfound::konfound(testmod1, texp, test_all = TRUE, to_return = "raw_output") at test-mkonfound.r:11:1
  2.   └─konfound:::konfound_lm(...)
  3.     └─konfound::mkonfound(d, .data$t, .data$df)
  4.       ├─dplyr::pull(select(d, !!t_enquo))
  5.       ├─dplyr::select(d, !!t_enquo)
  6.       └─dplyr:::select.data.frame(d, !!t_enquo)
  7.         └─tidyselect::eval_select(expr(c(...)), data = .data, error_call = error_call)
  8.           └─tidyselect:::eval_select_impl(...)
  9.             ├─tidyselect:::with_subscript_errors(...)
 10.             │ └─rlang::try_fetch(...)
 11.             │   └─base::withCallingHandlers(...)
 12.             └─tidyselect:::vars_select_eval(...)
 13.               └─tidyselect:::walk_data_tree(expr, data_mask, context_mask)
 14.                 └─tidyselect:::eval_c(expr, data_mask, context_mask)
 15.                   └─tidyselect:::reduce_sels(node, data_mask, context_mask, init = init)
 16.                     └─tidyselect:::walk_data_tree(new, data_mask, context_mask)
 17.                       └─tidyselect:::expr_kind(expr, context_mask, error_call)
 18.                         └─tidyselect:::call_kind(expr, context_mask, error_call)
── Warning ('test-mkonfound.r:11:1'): (code run outside of `test_that()`) ──────
Use of .data in tidyselect expressions was deprecated in tidyselect 1.2.0.
ℹ Please use `"df"` instead of `.data$df`
Backtrace:
     ▆
  1. └─konfound::konfound(testmod1, texp, test_all = TRUE, to_return = "raw_output") at test-mkonfound.r:11:1
  2.   └─konfound:::konfound_lm(...)
  3.     └─konfound::mkonfound(d, .data$t, .data$df)
  4.       ├─dplyr::pull(select(d, !!df_enquo))
  5.       ├─dplyr::select(d, !!df_enquo)
  6.       └─dplyr:::select.data.frame(d, !!df_enquo)
  7.         └─tidyselect::eval_select(expr(c(...)), data = .data, error_call = error_call)
  8.           └─tidyselect:::eval_select_impl(...)
  9.             ├─tidyselect:::with_subscript_errors(...)
 10.             │ └─rlang::try_fetch(...)
 11.             │   └─base::withCallingHandlers(...)
 12.             └─tidyselect:::vars_select_eval(...)
 13.               └─tidyselect:::walk_data_tree(expr, data_mask, context_mask)
 14.                 └─tidyselect:::eval_c(expr, data_mask, context_mask)
 15.                   └─tidyselect:::reduce_sels(node, data_mask, context_mask, init = init)
 16.                     └─tidyselect:::walk_data_tree(new, data_mask, context_mask)
 17.                       └─tidyselect:::expr_kind(expr, context_mask, error_call)
 18.                         └─tidyselect:::call_kind(expr, context_mask, error_call)
── Warning ('test-mkonfound.r:11:1'): (code run outside of `test_that()`) ──────
Use of .data in tidyselect expressions was deprecated in tidyselect 1.2.0.
ℹ Please use `"term"` instead of `.data$term`
Backtrace:
     ▆
  1. └─konfound::konfound(testmod1, texp, test_all = TRUE, to_return = "raw_output") at test-mkonfound.r:11:1
  2.   └─konfound:::konfound_lm(...)
  3.     ├─dplyr::select(tidy_output, var_name = .data$term)
  4.     └─dplyr:::select.data.frame(tidy_output, var_name = .data$term)
  5.       └─tidyselect::eval_select(expr(c(...)), data = .data, error_call = error_call)
  6.         └─tidyselect:::eval_select_impl(...)
  7.           ├─tidyselect:::with_subscript_errors(...)
  8.           │ └─rlang::try_fetch(...)
  9.           │   └─base::withCallingHandlers(...)
 10.           └─tidyselect:::vars_select_eval(...)
 11.             └─tidyselect:::walk_data_tree(expr, data_mask, context_mask)
 12.               └─tidyselect:::eval_c(expr, data_mask, context_mask)
 13.                 └─tidyselect:::reduce_sels(node, data_mask, context_mask, init = init)
 14.                   └─tidyselect:::walk_data_tree(new, data_mask, context_mask)
 15.                     └─tidyselect:::expr_kind(expr, context_mask, error_call)
 16.                       └─tidyselect:::call_kind(expr, context_mask, error_call)
── Warning ('test-pkonfound.R:10:1'): (code run outside of `test_that()`) ──────
row names were found from a short variable and have been discarded
Backtrace:
    ▆
 1. └─konfound::konfound(testmod1, texp, test_all = TRUE, to_return = "raw_output") at test-pkonfound.R:10:1
 2.   └─konfound:::konfound_lm(...)
 3.     └─base::data.frame(...)

══ Failed tests ════════════════════════════════════════════════════════════════
── Failure ('test-konfound.R:30:5'): konfound works for lme4 model ─────────────
output3$percent_bias_to_change_inference not equal to 84.826.
target is NULL, current is numeric

[ FAIL 1 | WARN 6 | SKIP 0 | PASS 16 ]
Error: Test failures
Execution halted

2 errors ✖ | 1 warning ✖ | 4 notes ✖
Error: R CMD check found ERRORs
Execution halted

Exited with status 1.

which is unacceptable for R packages looking to be published on JOSS.

Additionally, none of the issues found by goodpractice::gp and pointed out by me in August seem to have been addressed successfully:

── GP konfound ────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────

It is good practice to

  ✖ write short and simple functions. These functions have high cyclomatic complexity (>50): test_sensitivity_ln
    (217), getswitch_fisher (79), tkonfound_fig (74), getswitch_chisq (70), tkonfound (70). You can make them easier to
    reason about by encapsulating distinct steps of your function into subfunctions.
  ✖ use '<-' for assignment instead of '='. '<-' is the standard, and R users and developers are used it and it is
    easier to read your code for them if you use '<-'.

    R/cop_pse_auxiliary.R:2:10
    R/cop_pse_auxiliary.R:4:13
    R/cop_pse_auxiliary.R:12:10
    R/cop_pse_auxiliary.R:15:9
    R/cop_pse_auxiliary.R:20:9
    ... and 46 more lines

  ✖ avoid long code lines, it is bad for readability. Also, many people prefer editor windows that are about 80
    characters wide. Try make your lines shorter than 80 characters

    R/concord1.R:7:81
    R/cop_pse_auxiliary.R:24:81
    R/cop_pse_auxiliary.R:26:81
    R/cop_pse_auxiliary.R:38:81
    R/cop_pse_auxiliary.R:137:81
    ... and 448 more lines

  ✖ not import packages as a whole, as this can cause name clashes between the imported packages. Instead, import
    only the specific functions you need.
  ✖ fix this R CMD check NOTE: Namespaces in Imports field not imported from: ‘mice’ ‘tibble’ All declared Imports
    should be used.
  ✖ fix this R CMD check NOTE: test_cop: no visible binding for global variable ‘ModelLabel’ test_cop: no visible
    binding for global variable ‘coef_X’ Undefined global functions or variables: ModelLabel coef_X
  ✖ fix this R CMD check WARNING: Codoc mismatches from documentation object 'tkonfound_fig': tkonfound_fig Code:
    function(a, b, c, d, thr_p = 0.05, switch_trm = TRUE, test = "fisher", replace = "control") Docs: function(a, b, c, d,
    thr_p = 0.05, switch_trm = T, test = "fisher", replace = "control") Mismatches in argument default values: Name:
    'switch_trm' Code: TRUE Docs: T
  ✖ fix this R CMD check ERROR: Running examples in ‘konfound-Ex.R’ failed The error most likely occurred in: > ###
    Name: konfound > ### Title: Perform sensitivity analysis on fitted models > ### Aliases: konfound > > ### ** Examples >
    > # using lm() for linear models > m1 <- lm(mpg ~ wt + hp, data = mtcars) > konfound(m1, wt) Robustness of Inference to
    Replacement (RIR): To invalidate an inference, 66.521 % of the estimate would have to be due to bias.  This is based on
    a threshold of -1.298 for statistical significance (alpha = 0.05). To invalidate an inference, 21 observations would
    have to be replaced with cases for which the effect is 0 (RIR = 21). See Frank et al. (2013) for a description of the
    method. Citation: Frank, K.A., Maroulis, S., Duong, M., and Kelcey, B. (2013). What would it take to change an
    inference? Using Rubin's causal model to interpret the robustness of causal inferences. Education, Evaluation and Policy
    Analysis, 35 437-460. For more detailed output, consider setting `to_return` to table To consider other predictors of
    interest, consider setting `test_all` to TRUE. > konfound(m1, wt, test_all = TRUE) Note that this output is calculated
    based on the correlation-based approach used in mkonfound() Warning in data.frame(t = est_eff/std_err, df = (n_obs -
    n_covariates - : row names were found from a short variable and have been discarded # A tibble: 2 × 8 var_name t df
    action inference pct_bias_to_change_i…¹ itcv r_con <chr> <dbl> <dbl> <chr> <chr> <dbl> <dbl> <dbl> 1 wt -6.13 29
    to_invalida… reject_n… 52.7 0.614 0.784 2 hp -3.52 29 to_invalida… reject_n… 35.1 0.298 0.546 # ℹ abbreviated name:
    ¹​pct_bias_to_change_inference > konfound(m1, wt, to_return = "table") Dependent variable is mpg Warning: Unknown or
    uninitialised column: `itcv`. Error in abs(konfound(model_object, !!tested_variable, to_return = "raw_output")$itcv) :
    non-numeric argument to mathematical function Calls: konfound -> konfound_lm -> test_sensitivity -> output_table
    Execution halted
  ✖ checking tests ... Running ‘testthat.R’ ERROR Running the tests in ‘tests/testthat.R’ failed. Last 13 lines of
    output: > library(konfound) Sensitivity analysis as described in Frank, Maroulis, Duong, and Kelcey (2013) and in Frank
    (2000). For more information visit http://konfound-it.com. > > test_check("konfound") [ FAIL 1 | WARN 6 | SKIP 0 | PASS
    16 ] ══ Failed tests ════════════════════════════════════════════════════════════════ ── Failure
    ('test-konfound.R:30:5'): konfound works for lme4 model ───────────── output3$percent_bias_to_change_inference not equal
    to 84.826. target is NULL, current is numeric [ FAIL 1 | WARN 6 | SKIP 0 | PASS 16 ] Error: Test failures Execution
    halted
  ✖ fix this R CMD check NOTE: Found the following hidden files and directories: joss-paper/.github These were most
    likely included in error. See section ‘Package structure’ in the ‘Writing R Extensions’ manual.
  ✖ avoid 'T' and 'F', as they are just variables which are set to the logicals 'TRUE' and 'FALSE' by default, but
    are not reserved words and hence can be overwritten by the user.  Hence, one should always use 'TRUE' and 'FALSE' for
    the logicals.

    R/cop_pse_auxiliary.R:NA:NA
    R/cop_pse_auxiliary.R:NA:NA
    R/cop_pse_auxiliary.R:NA:NA
    R/cop_pse_auxiliary.R:NA:NA
    R/cop_pse_auxiliary.R:NA:NA
    ... and 6 more lines

At JOSS, we aim for a collaborative, constructive review style.
Please reciprocate by doing your own due diligence before making me spend time on issues like this after being incommunicado for months.

@fabian-s
Author

fabian-s commented Jan 9, 2024

EDIT: maybe you did not push those latest changes to the repo --- the commit I'm on is:

commit 78ec6f9 (HEAD -> master, origin/master, origin/HEAD)
Author: Qinyun Lin qinyun.lin@gu.se
Date: Tue Dec 5 20:33:26 2023 +0100

@wwang93
Collaborator

wwang93 commented Jan 9, 2024

@fabian-s Yes, the team hasn't merged some of the changes into the default master branch yet; we'll resolve that as soon as possible.

@fabian-s
Author

fabian-s commented Jan 9, 2024

another edit:

[screenshot]

this is extraordinarily bad SWE hygiene -- you should not push code to main that breaks your package; that's what development branches are for...

@fabian-s
Author

fabian-s commented Jan 9, 2024

@fabian-s Yes, the team hasn't merged some of the changes into the default master branch yet; we'll resolve that as soon as possible.

ok please don't @ me again before that is done and you have an actually working and standards-compliant package, ty

@jrosenb8

Sorry all, this is my fault - I needed to manage the multiple branches better. We're meeting to address this.

jrosen48 mentioned this issue Feb 14, 2024
@jrosen48
Collaborator

We have made changes to the code to reduce the cyclomatic complexity, but some functions still have cyclocomp scores > 50:

                               name cyclocomp
8              check_starting_table        79
25                 getswitch_fisher        79
48                    tkonfound_fig        74
24                  getswitch_chisq        70
47                        tkonfound        70
46              test_sensitivity_ln        59
23                        getswitch        37
43                         test_cop        20
45                 test_sensitivity        15
28                         konfound        12
38                        pkonfound        12
44                         test_pse        12
35                     output_print        11
27                     isinvalidate         9
42                        taylorexp         7
51                  verify_reg_Gzcv         6
11       core_sensitivity_mkonfound         5
22                       get_t_kfnl         5
34                        output_df         5
40                   plot_threshold         4
1                    cal_delta_star         3
33                        mkonfound         3
37                     output_table         3
39                 plot_correlation         3
41                 summary.konfound         3
50                    verify_reg_Gz         3
52                verify_reg_uncond         3
5                           cal_rxz         2
6                           cal_ryz         2
7                         cal_thr_t         2
21                           get_pi         2
26                   isdcroddsratio         2
36 output_print_test_sensitivity_ln         2
2                         cal_minse         1
3                           cal_pse         1
4                           cal_rxy         1
9                           chisq_p         1
10                      chisq_value         1
12            create_konfound_class         1
13                 fisher_oddsratio         1
14                         fisher_p         1
15                      get_a1_kfnl         1
16                      get_a2_kfnl         1
17                    get_abcd_kfnl         1
18                      get_c1_kfnl         1
19                      get_c2_kfnl         1
20                        get_kr_df         1
29                     konfound_glm         1
30         konfound_glm_dichotomous         1
31                      konfound_lm         1
32                    konfound_lmer         1
49                    verify_manual         1

In inspecting the code, we have addressed the low-hanging fruit in terms of code that we can readily modularize. However, some of the code is thorny (though its output is tested and verified for a range of inputs); see, e.g., getswitch_fisher():

[screenshot of the getswitch_fisher() source]

    Question: Given the nature of this code, and the reduction from ridiculously high cyclocomp values (originally 217 for test_sensitivity_ln) to merely very high scores (79, 79, 74, 70, 70, and 59 for the six functions with scores > 50), would you consider these acceptable in light of the other changes/improvements made to the package, or would you need us to reduce these cyclocomp values further?
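For illustration, a minimal sketch of the kind of refactor goodpractice is asking for -- the function and helper names below are invented, not code from konfound: pull each distinct step of a long conditional chain into its own named helper, so each piece contributes only a handful of branches, then re-check the score with cyclocomp::cyclocomp().

    # Before: one function owns every branch, so every if/else adds to its score.
    classify_effect <- function(est, se, thr) {
      if (est > 0) {
        if (est / se > thr) "positive, significant" else "positive, not significant"
      } else {
        if (abs(est / se) > thr) "negative, significant" else "negative, not significant"
      }
    }

    # After: each distinct step lives in a small, separately testable helper.
    sign_label <- function(est) if (est > 0) "positive" else "negative"
    signif_label <- function(t_stat, thr) {
      if (abs(t_stat) > thr) "significant" else "not significant"
    }
    classify_effect <- function(est, se, thr) {
      paste(sign_label(est), signif_label(est / se, thr), sep = ", ")
    }

    # Verify the per-function complexity directly:
    cyclocomp::cyclocomp(classify_effect)

Applied repeatedly (one helper per table configuration, per test type, and so on), this is usually what brings scores in the 70s down to more maintainable values.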

@fabian-s
Author

fabian-s commented Feb 15, 2024

What actually happened here is you somewhat successfully refactored test_sensitivity_ln (217 -> 59), while cyclocomp values for every other function remain completely unchanged since I opened this issue (e.g. git blame for getswitch_fisher tells me there have not been any structural changes at all in the last 4 years).

Most of this seems to be covered by tests now, so at least you may notice once your convoluted stuff breaks.

Close if you want, but IMO this is still rather terrible, and I also do not appreciate at all being gaslit and misled like this.

EDIT:

at least these remain open:

  • lots of T and F instead of TRUE/FALSE - run lintr with T_and_F_symbol_linter() to see where (a sketch of the call follows below this list)
  • I see an R CMD check NOTE:
    ❯ checking dependencies in R code ... NOTE
    Namespaces in Imports field not imported from:
    ‘cli’ ‘withr’
    All declared Imports should be used.
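
A quick way to locate every offending T/F (and, with the same mechanism, any remaining assignment-operator hits), assuming the working directory is the package root:

    # list every bare T / F used as a logical across the package sources
    lintr::lint_package(linters = lintr::T_and_F_symbol_linter())

    # the same call pattern works for the assignment-operator check
    lintr::lint_package(linters = lintr::assignment_linter())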

@jrosen48
Collaborator

We will address those issues.

@jrosen48
Collaborator

Also, I want to register that I was not attempting to mislead you! I wrote: "We have made changes to the code to reduce the cyclomatic complexity, ...". I did not (and did not mean to!) suggest that we had addressed the other issues or that this broader issue was resolved -- I was commenting only on the cyclomatic-complexity aspect of these code quality issues.

@jrosen48
Collaborator

I recognize this has been a very rocky submission. Unusually, I want to share this to rebuild some good faith. Here's a conversation between @wwang93 and me discussing how, while the cyclomatic complexity was partially addressed, there were lingering issues that Wei was planning to address.

[screenshot of the conversation]

@jrosen48
Collaborator

jrosen48 commented Feb 19, 2024

#78 addresses:

  • lots of T and F instead of TRUE/FALSE
  • Namespaces in Imports field not imported from: ‘cli’ ‘withr’
  • remaining assignment operator issues (for assignment, changing = to <-)
  • It also addresses much of the long-code-lines issue, though not all of it; most of the remaining long lines are lengthy questions or comments

We have NOT yet addressed this:

  • All declared Imports should be used.

It is not immediately clear to us what is causing that remaining issue, as we have used roxygen2 @importFrom tags to specify which functions we import in our NAMESPACE file. We are trying to track down the cause, but the goodpractice::gp() output doesn't provide a clear pointer, either as a marker or in its summary output.

@jrosen48
Collaborator

The NAMESPACE file suggests it is:

import(ggplot2)
import(lavaan)
import(rlang)
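
If those whole-package imports come from roxygen @import tags, one way to narrow them is to switch to @importFrom and regenerate NAMESPACE. A sketch, with placeholder function names (not necessarily what konfound actually calls):

    # before: whole-namespace imports, which is what goodpractice flags
    #' @import ggplot2
    #' @import rlang

    # after: import only the functions the package actually uses
    # (function names here are illustrative placeholders)
    #' @importFrom ggplot2 ggplot aes geom_point
    #' @importFrom rlang enquo quo_name

    # then regenerate NAMESPACE so it reads importFrom(ggplot2, ggplot), etc.
    devtools::document()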

@jrosen48
Collaborator

We have addressed the remaining goodpractice issue; as an update:

  • [ x ] lots of T and F instead of TRUE/FALSE
  • [ x ] Namespaces in Imports field not imported from: ‘cli’ ‘withr’
  • [ x ] remaining assignment operator issues (for assignment, changing = to <-)
  • [ x ] All declared Imports should be used.
  • [ x ] also many of the long code lines issue, but not all, with most of the remaining being lengthy questions or comments

From goodpractice::gp(), we are seeing:

It is good practice to

  ✖ write unit tests for all functions, and all
    package code in general. 80% of code lines are covered
    by test cases.

    R/cop_pse_auxiliary.R:12:NA
    R/cop_pse_auxiliary.R:28:NA
    R/cop_pse_auxiliary.R:69:NA
    R/cop_pse_auxiliary.R:141:NA
    R/cop_pse_auxiliary.R:142:NA
    ... and 495 more lines

  ✖ write short and simple functions. These
    functions have high cyclomatic complexity (>50):
    check_starting_table (79), getswitch_fisher (79),
    tkonfound_fig (74), getswitch_chisq (70), tkonfound
    (70), test_sensitivity_ln (59). You can make them
    easier to reason about by encapsulating distinct steps
    of your function into subfunctions.
  ✖ avoid long code lines, it is bad for
    readability. Also, many people prefer editor windows
    that are about 80 characters wide. Try make your lines
    shorter than 80 characters

    R/cop_pse_auxiliary.R:233:81
    R/cop_pse_auxiliary.R:234:81
    R/cop_pse_auxiliary.R:235:81
    R/cop_pse_auxiliary.R:236:81
    R/cop_pse_auxiliary.R:237:81
    ... and 285 more lines

@fabian-s
Author

Seems fine, thanks
