Altering hyperpriors in bma to avoid numerical problems #75

Open
wants to merge 17 commits into base: master

Conversation

drnickisaac
Contributor

We found a problem whereby 0 was included in the prior range of the sd parameters: when these are converted into precisions, Inf is included in the distribution of the precision parameters, which causes JAGS to fail. The fix is to add a small number to the hyperpriors.
A few other documentation issues are also fixed.
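
For illustration only, a minimal sketch of the kind of change involved; the node names (sd.obs, tau.obs) and prior bounds are placeholders, not the actual bma() model code:

```r
library(rjags)  # JAGS itself is what rejects the infinite precision

# Placeholder model: a uniform sd prior that includes 0 allows the derived
# precision 1/sd^2 to be Inf, which makes JAGS fail. Shifting the lower
# bound by a small constant keeps the precision finite.
model_string <- "
model {
  # before: sd.obs ~ dunif(0, 10)
  sd.obs ~ dunif(0.0001, 10)   # small offset away from zero
  tau.obs <- pow(sd.obs, -2)   # precision passed to dnorm() nodes
}
"
m <- jags.model(textConnection(model_string), quiet = TRUE)
```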

@drnickisaac
Contributor Author

This is failing five separate tests of bma() that expect the results to equal specific values. I thought it was due to the hyperprior changes, but I reverted them locally and the same errors still occur. So I'm wondering if this is related to a change in the RNG, as happened in sparta.
BiologicalRecordsCentre/sparta#219
What do you think @AugustT ?

@AugustT
Member

AugustT commented Mar 9, 2021

The errors relate to the checks of bma(); it looks like the output values don't quite match the expected ones.

Member

@AugustT left a comment


Just need to address the test failures

@AugustT
Member

AugustT commented Mar 9, 2021

@drnickisaac I think you might be right. It smells like the same problem we had with sparta, which seemed to be an R version or OS issue with random number generation. This is an ongoing problem, but I can't think of a better test. Hopefully we can fix it here and they won't mess with the random number generators in the future.

@AugustT
Member

AugustT commented Mar 9, 2021

While you are at it, did you see the warnings about the seed argument?

── Warning (testbma.R:17:3): simple run ────────────────────────────────────────
The 'seed' argument will be deprecated in the next version. You can set it yourself with set.seed() instead.
Backtrace:
 1. BRCindicators::bma(...) testbma.R:17:2
 2. jagsUI::jags(...)
 3. jagsUI:::process.input(...)
 4. jagsUI:::gen.inits(inits, n.chains, seed, parallel)

We should change to use set.seed() rather than the seed argument to JAGS: two birds, one stone.
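
For what it's worth, a sketch of the suggested change, assuming bma() currently forwards a seed argument down to jagsUI::jags() (the arguments shown are placeholders rather than the exact signature):

```r
library(BRCindicators)

# Old call (assumed): passing `seed` gets forwarded to jagsUI::jags(),
# which now emits the deprecation warning.
# results <- bma(data, seed = 125)

# Suggested call: seed R's RNG ourselves immediately before the run.
set.seed(125)
results <- bma(data)   # `data` stands in for the indicator data built in testbma.R
```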

@drnickisaac
Contributor Author

@AugustT
I think it is related to this discussion about Mac vs Windows errors, and this one too.

It looks like you fixed it in a PR to my branch on 5th Feb; the fix was as simple as adding set.seed(125) before every example, rather than just the first one.

@drnickisaac
Contributor Author

I've looked at testbma: all but one of the tests already had the seed set explicitly, so that can't explain the multiple failures seen on Travis.
Any ideas @AugustT ?

@AugustT
Member

AugustT commented Mar 29, 2021

I think in the sparta case I was able to replicate the error on my PC and then use the new numbers to update my test. I don't think I have done that for the BRCindicators case.

@AugustT
Member

AugustT commented Mar 29, 2021

I can't get the random number issue fixed. sparta uses R2jags and BRCindicators uses jagsUI; that is the only thing I can see that explains the difference. For now I think we will have to remove this test.

@AugustT
Member

AugustT commented Mar 29, 2021

I get a few warnings: "At least one Rhat value could not be calculated." I assume this is because the test dataset is so small?

@drnickisaac
Contributor Author

drnickisaac commented Mar 29, 2021 via email

@AugustT
Member

AugustT commented Mar 29, 2021

Oh right, is that a warning we should handle then?

@drnickisaac
Contributor Author

drnickisaac commented Mar 29, 2021 via email

@AugustT
Member

AugustT commented Jul 14, 2021

It seems like the Travis tests are no longer working. @robboyd was going to look into alternatives.

@DavidRoy

@JimBacon is investigating Travis replacement options for Indicia. You can probably stick with Travis as this work is non-commercial?
Indicia-Team/warehouse#375
