New test fails on all but mac #70

Closed
dpc10ster opened this issue Apr 14, 2021 · 9 comments

@dpc10ster (Owner)

I added a new test following my usual style; see test-SsSampleSizeFroc.R in the tests directory.

It fails on all but the latest macOS.

To prevent failures I had to include the skip_on_os() calls shown below. Any idea why? Something seems to have changed in the testthat package.

contextStr <- "Sample Size FROC"
context(contextStr)
test_that(contextStr, {

  # Without these skips the test fails on every platform except macOS
  skip_on_os("windows")
  skip_on_os("linux")
  skip_on_os("solaris")

  lesDistr <- c(0.7, 0.2, 0.1)
  frocNhData <- DfExtractDataset(dataset04, trts = c(1, 2))

  # Regenerate the stored good-values file if it is missing
  fn <- paste0(test_path(), "/goodValues361/SsPower/FROC-dataset04", ".rds")
  if (!file.exists(fn)) {
    warning(paste0("File not found - generating new ", fn))
    ret <- SsFrocNhRsmModel(frocNhData, lesDistr = lesDistr)
    saveRDS(ret, file = fn)
  }

  # Compare a fresh run against the stored good values
  x1 <- readRDS(fn)
  x2 <- SsFrocNhRsmModel(frocNhData, lesDistr = lesDistr)
  expect_equal(x1, x2)

})
@pwep (Collaborator) commented Apr 14, 2021

test-SsSampleSizeFroc.R (Sample Size FROC)

Mean relative differences as reported in the failed Actions log (R-CMD-check #138):

Variable      Mac           Windows        Linux (release/devel)
muMed         (identical)   0.0002416307   0.002548856
lambdaMed     (identical)   0.0002416319   0.002548858
nuMed         (identical)   0.0002274784   0.002538516
scaleFactor   (identical)   0.0001097283   0.001151393
R2            (identical)   (identical)    3.976227e-08
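For context, the "Mean relative difference" wording above is base R's all.equal() output, which expect_equal() relies on (in testthat edition 2); a minimal sketch of how that number is computed for numeric values (the function name meanRelDiff is mine, for illustration only):

meanRelDiff <- function(target, current) {
  # all.equal.numeric() reports mean(abs(target - current)) / mean(abs(target))
  # as the "Mean relative difference" when it exceeds the tolerance
  mean(abs(target - current)) / mean(abs(target))
}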

@dpc10ster (Owner, Author) commented Apr 15, 2021 via email

@pwep (Collaborator) commented Apr 15, 2021

The third option sounds good to me. I see that test-SsSampleSizeROC.R uses the expect_equivalent function and sets a tolerance.

@dpc10ster (Owner, Author) commented Apr 15, 2021 via email

@dpc10ster (Owner, Author) commented Apr 15, 2021 via email

@pwep (Collaborator) commented Apr 15, 2021

Alas, I am not in front of a PC where I can do pull requests or test on a Windows/Linux machine; however, the code should be straightforward.

Remove expect_equal(x1, x2) from the test-SsSampleSizeFroc.R test file and replace it with:

expect_equivalent(x1$muMed, x2$muMed, tolerance = 5e-4)
expect_equivalent(x1$lambdaMed, x2$lambdaMed, tolerance = 5e-5)
expect_equivalent(x1$nuMed, x2$nuMed, tolerance = 5e-5)
expect_equivalent(x1$scaleFactor, x2$scaleFactor, tolerance = 5e-5)
expect_equivalent(x1$R2, x2$R2, tolerance = 1e-7)

I'll let you change the tolerances to fit your requirements, and see if it works.

We probably also want a structural test in there somewhere, to make sure that exactly five correctly named variables are returned by the function SsFrocNhRsmModel; see the sketch below.
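For example, a minimal sketch using testthat's expect_named (assuming the five component names shown in the table above are correct and complete):

expect_named(x2, c("muMed", "lambdaMed", "nuMed", "scaleFactor", "R2"))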

@dpc10ster (Owner, Author) commented Apr 15, 2021 via email

@pwep (Collaborator) commented Apr 15, 2021

A structural test could compare the list names (although this does not check the data types):

expect_identical(names(x1), names(x2))
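If the data types matter too, one possible extension (a sketch only; it assumes each of the five components is a numeric scalar, which the table above suggests but does not guarantee):

for (nm in names(x2)) {
  expect_type(x2[[nm]], "double")  # numeric (double) component
  expect_length(x2[[nm]], 1)       # scalar
}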

@dpc10ster (Owner, Author) commented Apr 15, 2021 via email
