
nandb #52

Open
jeroen opened this issue Aug 14, 2019 · 3 comments


@jeroen (Member) commented Aug 14, 2019

Fails test in both architectures like so:

-- 1. Failure: number_timeseries works (@test-number.R#254)  -------------------
sort(lfnts) not equal to sort(ans0).
1/2 mismatches
x[2]: "50again_number_n_contiguous_timeseries_frames_per_set=20_swaps=auto=0_thr
x[2]: esh=Triangle=0.68_filt=NA.tif"
y[2]: "50again_number_n_contiguous_timeseries_frames_per_set=20_swaps=auto=5029_
y[2]: thresh=Triangle=0.68_filt=NA.tif"

== testthat results  ===========================================================
[ OK: 154 | SKIPPED: 0 | WARNINGS: 0 | FAILED: 1 ]
1. Failure: number_timeseries works (@test-number.R#254)

@jeroen (Member, Author) commented Aug 16, 2019

It seems this package has hardcoded expected test results for the different operating systems and versions of libtiff: https://github.com/rorynolan/nandb/blob/master/tests/testthat/test-number.R#L204-L255

We probably need to update the test for the new version of libtiff in rtools40. @rorynolan, what does this test failure mean? Why do these results differ across platforms?

> lfnts
[1] "50_number_n_contiguous_timeseries_frames_per_set=20_swaps=auto=0_thresh=Triangle=0.68_filt=NA.tif"
[2] "50again_number_n_contiguous_timeseries_frames_per_set=20_swaps=auto=0_thresh=Triangle=0.68_filt=NA.tif"
@rorynolan commented Aug 19, 2019

Hi @jeroen
I'm surprised that updating libtiff would have this effect. That could mean that the values in the image read by different libtiffs are different, which seems super strange. I think it's an OS thing.
These results differ across platforms because the routines rely on C++'s <random>, whose behaviour is implementation-specific. These tests look a bit stupid, but they've saved me from doing terrible things in the past.
It's probably that the platform you're testing on has a different <random> implementation from anything I've tested on before (I've seen different results on AppVeyor and CRAN for this reason). Maybe I should use skip_on_cran() for this kind of thing?
What do you think? Whatever it is, don't let this stand in your way. It should be an easy fix; I can even add another allowance for your case.
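
A minimal sketch of the skip_on_cran() idea mentioned above, assuming the brittle comparison lives inside a test_that() block like the one in test-number.R; `lfnts` and `ans0` are the objects from the failure output, and the exact placement is up to the package author.

```r
library(testthat)

# Sketch: skip the RNG-dependent filename comparison on CRAN, where the
# <random> implementation (and hence the swap count) may differ.
test_that("number_timeseries works", {
  skip_on_cran()  # <random> output differs across compilers/toolchains
  expect_equal(sort(lfnts), sort(ans0))
})
```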

@jeroen (Member, Author) commented Aug 19, 2019

Yeah, we're upgrading GCC, so it might use a different random number generator.

If you think it's not important, please submit a version to CRAN that skips this test, because we cannot upgrade the toolchain if too many packages fail their tests.
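
An alternative to skipping the test entirely, sketched here only as a possibility, would be to assert on the parts of the filename that do not depend on the RNG, for example by matching a pattern that ignores the platform-dependent swap count.

```r
library(testthat)

# Sketch: match everything except the swap count, which varies with the
# C++ <random> implementation; `lfnts` is the vector from the thread above.
expect_match(
  sort(lfnts)[2],
  paste0(
    "^50again_number_n_contiguous_timeseries_frames_per_set=20_",
    "swaps=auto=[0-9]+_thresh=Triangle=0\\.68_filt=NA\\.tif$"
  )
)
```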
