
Fix loo/add pp_checks #19

Closed
wants to merge 22 commits into from

Conversation

@be-green (Contributor) commented Sep 7, 2019

Fixes looic, adds standard errors to both ELPD and looic, and tests against the BRMS package's meta-analysis models. It also updates the documentation for the loocv() function.

Previous commits of mine add posterior_predictive checks and a predict function to the package (only implemented for the rubin model thus far).
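For context, the ELPD/looic standard errors described here are the ones the loo package reports. A minimal sketch (not baggr's actual code) of where they come from, using fake log-likelihood draws purely for illustration:

library(loo)

# Toy pointwise log-likelihoods: 4000 posterior draws x 8 observations.
set.seed(1)
log_lik <- matrix(rnorm(4000 * 8, mean = -2), nrow = 4000)

# r_eff = 1 treats the draws as independent, which is fine for this toy case.
res <- loo(log_lik, r_eff = rep(1, ncol(log_lik)))
res$estimates
# A 3x2 matrix: rows elpd_loo, p_loo, looic; columns Estimate, SE.
# Since looic = -2 * elpd_loo, its SE is exactly 2 * SE(elpd_loo).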

…t now it makes very strong assumptions about the data structure; this will likely be revisited later. The main changes are everything that has been added in R/predict.R, which implements both the predict and pp_check methods.
…Also added standard errors for both the looic and the ELPD of the model. The documentation has been updated and the print.baggr_cv function returns more information. There is a testing script that runs the same model in brms and baggr for comparison purposes.
@be-green (Contributor, Author) commented:

FYI, I've added a formal test for LOO, but it looks like the failing test is in the test_mutau section.

@wwiecek (Owner) commented Oct 4, 2019

Travis says that the check failed because you have a dependency on brms.

Rather than fitting with brm(), it's best to save the output object (a matrix?) to check against. Otherwise we 1) have a compatibility issue and 2) run extra code every time we test, just to get the same output matrix from it, OK?
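A sketch of the fixture pattern being suggested here; the file path and the `baggr_means` object are hypothetical stand-ins, not code from this PR:

# One-time, offline script: fit once and commit the small summary object.
# fit <- brm(tau | se(se) ~ 1 + (1 | group), data = schools, ...)
# saveRDS(posterior_summary(fit), "tests/testthat/fixtures/brms_ref.rds")

# In the test file there is then no brms dependency and no refitting:
test_that("baggr matches the stored brms reference", {
  ref <- readRDS(test_path("fixtures", "brms_ref.rds"))
  # baggr_means stands in for whatever estimates the test extracts from baggr:
  expect_equal(unname(baggr_means), unname(ref[, "Estimate"]), tolerance = 0.1)
})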

@@ -18,26 +11,26 @@
#' full model, prior values and _lpd_ of each model are also returned.
#' These can be examined by using `attributes()` function.
#'
#' @references Gelman, Andrew, et al. Bayesian data analysis. Chapman and Hall/CRC, 2013.
@wwiecek (Owner) commented:
Maybe a reference like this is not the best choice, i.e. referencing a whole book.

Comment on lines 5 to 11
fit <- brm(tau | se(se) ~ 1 + (1 | group),
           data = schools,
           control = list(adapt_delta = 0.95),
           prior = c(set_prior("normal(0,100)", class = "Intercept"),
                     set_prior("uniform(0, 104.44)", class = "sd")),
           file = "misc/brms_kfold_test")

@wwiecek (Owner) commented:

See my comment elsewhere

@wwiecek (Owner) commented Oct 4, 2019

If you could just make sure that

  1. R CMD check runs on your side
  2. Travis does not fail
  3. your edits are rebased onto devel (I think they are, but I find the GitHub interface very confusing, so thanks if you can double-check)

I will then review locally.

@be-green (Contributor, Author) commented Oct 4, 2019

I'll make these changes and push them. Thanks for the comments. I'll keep the function in there but commented out, in case we want to re-run it later? I can just save the kfold output object; it should be pretty small.

@wwiecek (Owner) commented Oct 4, 2019 via email

@wwiecek (Owner) commented Oct 4, 2019 via email

@be-green (Contributor, Author) commented Oct 4, 2019

Yes, I can add it in. I didn't use pre-specified values since I didn't have BDA3; I just used BRMS to fit the same model with the same priors and compared. I can also include that check.
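A sketch of what the pre-specified-values check could look like; the accessor and the reference value below are placeholders for illustration, not verified output:

library(testthat)
library(baggr)

test_that("rubin model hypermean is close to a stored reference", {
  fit <- baggr(schools, model = "rubin")    # `schools` ships with baggr
  est <- mean(treatment_effect(fit)$tau)    # assumed accessor; adjust to the real API
  expect_equal(est, 8, tolerance = 1)       # 8 is a placeholder, not the BDA3 value
})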

@be-green (Contributor, Author) commented Oct 4, 2019

It might just be my local machine, but I've removed the references to BRMS and the tests seem to fail here:

[screenshot: local test failure output]

rather than at the test_loo stage.

@be-green (Contributor, Author) commented Oct 4, 2019

Alright, I've added everything in, made sure to rebase onto your devel branch, and I'm going to push as soon as devtools::check() passes locally. The error I'm running into is a compilation error, something about "did not create dll". @wwiecek, have you ever seen that before?

@wwiecek (Owner) commented Oct 4, 2019 via email

@wwiecek (Owner) commented Oct 4, 2019 via email

@be-green (Contributor, Author) commented Oct 4, 2019

The first mu-tau test consistently crashes my RStudio for some reason. I think I may be having build issues that are as yet undiagnosed, so I'll let Travis decide what's working.

@wwiecek (Owner) commented Oct 7, 2019

Travis failed again at brms; can you just push the test without brms, so that Travis can check for any errors elsewhere?

> The first mu-tau test consistently crashes my RStudio for some reason. I think I may be having build issues that are as yet undiagnosed, so I'll let Travis decide what's working.

The first check that I see is `expect_error(baggr(df_mutau, "made_up_model"), "Unrecognised model")`, is that the one?
Does `model = "mutau"` work at all when you `devtools::load_all()`?
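For what it's worth, a quick way to isolate whether the crash comes from the test harness or from the model itself (`df_mutau` stands in for the data frame used in the failing test):

devtools::load_all()

# Run the model directly, outside testthat, to see if it crashes on its own:
fit <- baggr(df_mutau, model = "mutau")

# And run the quoted error-message check in isolation:
testthat::expect_error(baggr(df_mutau, "made_up_model"), "Unrecognised model")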

@be-green (Contributor, Author) commented Oct 7, 2019

Looks like all the errors are fixed in Travis, though there are still some warnings. I'll check devtools::load_all() with the mutau model. Again, the package is still not building via R CMD check on my machine (even running it via Rscript with no open interactive sessions), so I think it must be an issue with my compiler or something. I had a similar issue with a different model at one point that no one else could replicate.

@wwiecek (Owner) commented Oct 8, 2019

> Looks like all the errors are fixed in Travis, though there are still some warnings. I'll check devtools::load_all() with the mutau model. Again, the package is still not building via R CMD check on my machine (even running it via Rscript with no open interactive sessions), so I think it must be an issue with my compiler or something. I had a similar issue with a different model at one point that no one else could replicate.

Thanks, this looks OK! I flagged two check NOTEs/WARNINGs FYI, but no need to fix - I will edit myself.

R/predict.R Outdated
Comment on lines 14 to 18
  predict.baggr <- function(x, newdata = NULL,
-                           allow_new_levels = T, nsamples, ...) {
+                           allow_new_levels = T, nsamples = 100, ...) {
    switch(x$model,
           rubin = predict_rubin(x, newdata = newdata,
                                 allow_new_levels = allow_new_levels,
@wwiecek (Owner) commented:

* checking S3 generic/method consistency ... WARNING
predict:
  function(object, ...)
predict.baggr:
  function(x, newdata, allow_new_levels, nsamples, ...)
See section ‘Generic functions and methods’ in the ‘Writing R
Extensions’ manual.
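For reference, the standard fix for this WARNING is to give the method the same first argument name as the generic; a minimal sketch based on the snippet above (the trailing predict_rubin arguments are filled in for illustration):

# stats::predict's generic is function(object, ...), so the method's first
# argument must also be called `object`:
predict.baggr <- function(object, newdata = NULL,
                          allow_new_levels = TRUE, nsamples = 100, ...) {
  switch(object$model,
         rubin = predict_rubin(object, newdata = newdata,
                               allow_new_levels = allow_new_levels,
                               nsamples = nsamples, ...))
}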

R/predict.R Outdated
Comment on lines 160 to 161
pp_check.baggr <- function(x, type = "dens_overlay", nsamples = 40) {
  pp_fun <- getFromNamespace(paste0("ppc_", type), ns = "bayesplot")
@wwiecek (Owner) commented:

pp_check.baggr: no visible global function definition for
  ‘getFromNamespace’
Undefined global functions or variables:
  getFromNamespace
Consider adding
  importFrom("utils", "getFromNamespace")

@wwiecek (Owner) commented Oct 11, 2019

I reviewed this -- I need to rethink the predict functions, add extra functionality, and write the full documentation. Therefore I pushed to a new devel-ppcheck branch. I will let you know once I'm done.

I also rebased your code & made various small fixes; I think it will now pass checks both locally & on Travis.

@wwiecek closed this Oct 11, 2019
@be-green (Contributor, Author) commented:

@wwiecek, when you have an idea of what you'd like to change/implement, let me know so I can help. I'm happy to write documentation or change the API.
