WISH: build_tests() to compile individual, vanilla tests/*.R scripts from inst/tests/*.R #78

HenrikBengtsson opened this issue Feb 26, 2021 · 3 comments
The following has been a long-term secret wish of mine. It's a verbatim copy of what I just wrote in the Bioconductor Slack (https://community-bioc.slack.com/archives/CLUJWDQF4/p1614364030025500?thread_ts=1614333373.019400&cid=CLUJWDQF4):


The fact that you get R CMD check error output for each failed tests/*.R script is the number one reason why I (still) stick with bare-bones, vanilla tests/*.R scripts.

All other test frameworks (RUnit, testthat, tinytest, ...) rely on a single tests/testall.R file that then runs the test scripts living in some other folder. This means that these frameworks can get at most _R_CHECK_TESTS_NLINES_ (= 13) lines of error output in total, regardless of how many test scripts fail. This is an unfortunate limitation, particularly when you are trying to troubleshoot errors on a remote machine (e.g. CRAN). To solve this, R Core would have to implement something different for these test frameworks.

However, a feasible workaround would be for these test frameworks to generate individual tests/*.R files from, say, inst/tests/*.R, e.g. tinytest::build_tests(). Basically, a pre-compiler. If this could be done automatically during R CMD build or R CMD check, that would be awesome, but I'm not sure there's such a "hook".
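
An aside on a partial stopgap: when checking locally, the truncation limit can be lifted via the _R_CHECK_TESTS_NLINES_ environment variable (per the "R Internals" manual, a value of 0 should mean "print all lines"). That doesn't help on CRAN's own machines, of course. A minimal sketch, with mypkg_1.0.tar.gz as a placeholder:

## Child processes inherit the environment, so the R CMD check
## subprocess sees the setting.
Sys.setenv("_R_CHECK_TESTS_NLINES_" = "0")
system2("R", c("CMD", "check", "mypkg_1.0.tar.gz"))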


markvanderloo commented Feb 26, 2021

Yes, I understand the point. It would be easy to statically generate those test files under /tests, but doing this dynamically depends on how R CMD check determines what files to run. My guess is that it does a dir() once and then runs file by file.

On the other hand, if you use things like stopifnot() for testing (w/o a testing framework), then R CMD check will fail at the first test that fails, while later failures might be interesting as well. So the feature you're suggesting would give the best of both worlds.
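
For illustration (toy checks, not from any package):

## Plain tests/*.R script: stopifnot() signals an error, so the script
## terminates at the first failing check.
stopifnot(1 + 1 == 3)   # error: nothing below this line runs
stopifnot(2 + 2 == 4)   # never evaluated

## tinytest script: each expectation records its result and execution
## continues, so all failures in the file get reported.
library(tinytest)
expect_equal(1 + 1, 3)  # recorded as failed
expect_equal(2 + 2, 4)  # still runs, recorded as passed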

HenrikBengtsson commented Feb 26, 2021

My guess is that it does a dir() once and then runs file by file.

Yes; https://github.com/wch/r-source/blob/74bb4f560ee02070ae631e1ca66ef6a16f256b24/src/library/tools/R/testing.R#L554-L558

On the other hand, if you use things like stopifnot() for testing (w/o a testing framework), then R CMD check will fail at the first test that fails, while later failures might be interesting as well. ...

Not 100% sure I'm following; isn't that a problem already now? E.g. if you use stopifnot() in a tinytest script, it'll also terminate immediately, no?

So, the gist of my idea is basically just something like:

build_tests <- function(dest = "tests", src = "inst/tests", pattern = "test_.*[.][rR]$") {
  dir.create(dest, recursive = TRUE, showWarnings = FALSE)
  files <- dir(path = src, pattern = pattern, full.names = TRUE)
  outfiles <- character(0L)
  for (file in files) {
    outfile <- file.path(dest, basename(file))
    ## Write a header and make the generated script self-contained
    cat(file = outfile, "## This was automatically generated yada yada\n")
    cat(file = outfile, "library(tinytest)\n", append = TRUE)
    ## Append the original test script verbatim
    bfr <- readLines(file, warn = FALSE)
    cat(file = outfile, bfr, sep = "\n", append = TRUE)
    outfiles <- c(outfiles, outfile)
  }
  invisible(outfiles)
}
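
Usage would then be something like this, run from the top of the package source tree (file names in the output are just illustrative):

## Copies inst/tests/test_*.R to tests/, prepending the header and
## library(tinytest) to each file.
outfiles <- build_tests()
print(outfiles)
## e.g. [1] "tests/test_foo.R" "tests/test_bar.R"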

If there are setup/teardown scripts, I guess those would need to be handled specially. (I don't know yet how those work.)

markvanderloo commented Feb 26, 2021

Sorry, I wasn't clear. AFAICT if you test w/o a framework, then you have no option other than to use stopifnot() (or stop()) in your test files. And there's no way to see what would happen in that same file for the next tests. In a test framework, the output of all expect_* functions is captured. And in tinytest you can even condition on them, like

if (expect_true(1 != 1)) {
  expect_equal(foo, bar)
} else {
  expect_equal(foo, baz)
}
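
(The conditioning works because an expectation returns a tinytest object, which is in essence a logical scalar with some metadata attached, so it can be used directly in an if().)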

btw, you found that piece of R CMD check code pretty quick 8-)
