Update #88
Conversation
Codecov Report

```
@@            Coverage Diff            @@
##           master     #88      +/-  ##
=========================================
- Coverage   62.34%   61.5%    -0.84%
=========================================
  Files          29      30        +1
  Lines        1041    1060       +19
=========================================
+ Hits          649     652        +3
- Misses        392     408       +16
```

Continue to review full report at Codecov.
Codecov Report

```
@@           Coverage Diff            @@
##           master     #88     +/-   ##
=========================================
  Coverage        ?   62.94%
=========================================
  Files           ?       28
  Lines           ?     1101
  Branches        ?        0
=========================================
  Hits            ?      693
  Misses          ?      408
  Partials        ?        0
```

Continue to review full report at Codecov.
What about making both available? Does that make sense? I must take a closer look at the statistics.
Yes, they are currently toggled by the …
Perhaps a …
True, method is more consistent with our other functions.
We did not have the Cauchy, Poisson, and t functions on CRAN, so no need to deprecate.
I am checking the vignette. I've simplified some code a little, especially in the plots: although a bit uglier, I think it's worth it in such tutorials for beginners to keep the code as clear as possible, so they can really get what is going on at each step. I've mostly commented out the chunks, but if you agree we can remove them ;)
BTW, there's an issue in the update method: when `subset` is `NULL`, no `x` is generated.
I also have issues with the BIC approximation:

```r
> bayesfactor_models(m1, m2, m3, m4, denominator = 1)
Error in mBIC - mBIC[denominator] :
  non-numeric argument to binary operator
```
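For what it's worth, that error message appears whenever either side of `-` is not numeric, e.g. if the collected BIC values ended up in a list rather than a numeric vector. A minimal sketch (the `mBIC` object here is hypothetical, standing in for the vector of BIC values, not the package's actual internals):

```r
# Minimal sketch: R's binary arithmetic requires numeric operands.
# 'mBIC' is a hypothetical stand-in for the collected BIC values.
mBIC <- c(m1 = 100.2, m2 = 98.7, m3 = 101.5)
delta <- mBIC - mBIC[1]          # numeric vector: works fine

mBIC_bad <- as.list(mBIC)        # a list is *not* numeric
msg <- tryCatch(
  mBIC_bad - mBIC_bad[1],        # same shape as mBIC - mBIC[denominator]
  error = function(e) conditionMessage(e)
)
msg
#> [1] "non-numeric argument to binary operator"
```

So the traceback suggests the BIC values reach the subtraction in a non-numeric form for that call.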
There are some issues related to non-updated code that make some tests fail and prevent me from completing the vignette. Nevertheless, it's great! A very good overview of the purpose and capabilities of these features.
The fact that it doesn't work on all R versions is quite problematic though, and it's unfortunately very hard to test and assess 😕
This whole thing is super weird, because my original …
Even worse is that I recall I ran it without any problems at the beginning...
Might have been something local to your setup? And any new installation of R would have worked?
I'll try tomorrow at my uni's PC, which has a different version; we'll see :)
Have you guys already discovered the debugging feature in RStudio, or are you continuously using reprexes to debug your code? :-P
@strengejacke Do you know that I've been trying to understand debugging for R and Python for like 4 years... And yet I still print(1), print(2) and print(3) 😅 You mean using pointbreaks and stuff?
Probably, it's called breakpoints... Not surprising it already took 4 years! :-D
Sometimes like Yoda I speak 😁
The problem is inside the sub-function:

```r
if (length(random_parts) == 0) {
  return(stats::setNames(fix_trms, fix_trms))
}
```

We have:

```r
as.data.frame(BFmodels)
#>                    Model    log.BF
#> 1                Species  67.30315
#> 2 Species + Petal.Length 128.40744
#> 3 Species * Petal.Length 125.12953
#> 4                      1   0.00000
```

Rows are reordered, so:

```r
for (m in seq_len(nrow(df.model))) {
  tmp_terms <- make_terms(df.model$Modelnames[m])
  df.model[m, tmp_terms] <- TRUE
}
```

Since the "1" has no term labels when coerced to formula:

```r
make_term <- function(formula) {
  formula.f <- stats::as.formula(paste0('~', formula))
  all.terms <- attr(stats::terms(formula.f), "term.labels")
  fix_trms <- all.terms[!grepl("\\|", all.terms)] # no random
  random_parts <- paste0(all.terms[grepl("\\|", all.terms)]) # only random
  if (length(random_parts) == 0) {
    return(stats::setNames(fix_trms, fix_trms))
  }
  ...
}
```

In the next step of the for-loop, you assign `df.model[m, tmp_terms] <- TRUE`. This works on R 3.6, but not on R < 3.6 (just tested here).

But: I don't understand the whole code. This loop

```r
for (m in seq_len(nrow(df.model))) {
  tmp_terms <- make_terms(df.model$Modelnames[m])
  df.model[m, tmp_terms] <- TRUE
}
```

generates column names from terms, but these columns do not exist in the … What is the purpose of that loop and the … Also, …
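As an aside, the intercept-only quirk can be shown in isolation. Here `labels_for` is a made-up helper mirroring the term-label extraction discussed above: coercing the model name `"1"` to a formula yields zero term labels, so the loop ends up indexing the data frame with an empty character vector.

```r
# labels_for() is an illustrative helper, mimicking how make_term()
# extracts term labels from a model name coerced to a formula.
labels_for <- function(f) {
  attr(stats::terms(stats::as.formula(paste0("~", f))), "term.labels")
}

labels_for("Species + Petal.Length")
#> [1] "Species"      "Petal.Length"

labels_for("1")   # the intercept-only model has no term labels
#> character(0)
```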
Wow, good eye @strengejacke! (Stupid intercept-only models...) I just pushed a fix for this (note that it works fine for me on R 3.5.3, and it did even before that.)

```r
library(bayestestR)

mo0 <- lm(Sepal.Length ~ 1, data = iris)
mo1 <- lm(Sepal.Length ~ Species, data = iris)
mo2 <- lm(Sepal.Length ~ Species + Petal.Length, data = iris)
mo3 <- lm(Sepal.Length ~ Species * Petal.Length, data = iris)

BFmodels <- bayesfactor_models(mo1, mo2, mo3, denominator = mo0)

bayestestR:::.get_model_table(BFmodels, priorOdds = NULL)
#>               Modelnames priorProbs    postProbs Species Petal.Length Species:Petal.Length
#> 1                      1       0.25 1.649232e-56   FALSE        FALSE                FALSE
#> 2                Species       0.25 2.796852e-27    TRUE        FALSE                FALSE
#> 3 Species + Petal.Length       0.25 9.636634e-01    TRUE         TRUE                FALSE
#> 4 Species * Petal.Length       0.25 3.633659e-02    TRUE         TRUE                 TRUE
```

Created on 2019-05-06 by the reprex package (v0.2.1)

What the internal … This is how the df above is built.
Ah, I see! And R < 3.6 fails with the creation of new variables when there was an empty character. And indeed, the interactions like …
The current version passes all checks at my place. Just need to please lord Travis now: either by skipping these tests on Travis, if we don't find any other way.
The only Travis I know is Travis Barker... I don't think he'll be much help...
Unfortunately, this Travis is the one that sets the rhythm of our lives now 😥 🥁
Merging 🎉 Thanks for this long but fruitful exchange!
Added a new version of `p_direction` based on AUC. My idea is that for posteriors with very few samples, estimating the density function and deriving the pd from that could be more sensitive (and accurate?) than taking the ratio of samples. I will add these two versions to the comparison to see how it performs. What do you think?
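To make the two flavors concrete, here is a rough sketch (illustrative code only, not the package's actual implementation): the sample-ratio pd counts draws on the dominant side of zero, while the density-based variant integrates the estimated density on that side.

```r
set.seed(123)
posterior <- rnorm(80, mean = 0.4)   # deliberately few samples

# 1) Sample-ratio pd: proportion of draws on the dominant side of zero
pd_ratio <- max(mean(posterior > 0), mean(posterior < 0))

# 2) Density-based pd: estimate the density, then take the area under
#    the curve on the dominant side (trapezoidal rule)
d <- density(posterior)
trapz <- function(x, y) sum(diff(x) * (head(y, -1) + tail(y, -1)) / 2)
side <- if (mean(posterior) > 0) d$x > 0 else d$x < 0
pd_auc <- trapz(d$x[side], d$y[side]) / trapz(d$x, d$y)

c(ratio = pd_ratio, auc = pd_auc)
```

Both land between 0.5 and 1; the density-based version varies smoothly with the sample rather than in steps of 1/n, which is presumably where the hoped-for sensitivity for small samples would come from.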