Lrnr_cv predictions broken #404
rachaelvp commented Jan 2, 2023

Learners wrapped in Lrnr_cv and then trained do not return valid predictions under the predict method, no matter what full_fit is set to. The predict_fold method partially works: it works when fold_number is set to a valid fold number (with respect to the total number of folds) and when it is set to "full" (assuming Lrnr_cv was specified with full_fit = TRUE), but it does not work when fold_number is set to "validation". Here is an example:

library(sl3)

data("mtcars")
mtcars_task <- make_sl3_Task(
  data = mtcars[1:10, ], outcome = "mpg",
  covariates = c("cyl", "disp", "hp", "drat", "wt"), folds = 3
)
mtcars_task2 <- make_sl3_Task(
  data = mtcars[11:30, ], outcome = "mpg",
  covariates = c("cyl", "disp", "hp", "drat", "wt")
)
lrnr_cv_glm <- Lrnr_cv$new(Lrnr_glm$new())
cv_glm_fit <- lrnr_cv_glm$train(mtcars_task)

cv_glm_fit$predict(mtcars_task2)
# predictions1 predictions2 predictions3 predictions4 predictions5 predictions6
#     13.35737     16.83348     24.41874     23.01480     22.83049     14.21186

cv_glm_fit$predict_fold(mtcars_task2, 1)
#  [1] 20.41417 17.08732 17.08732 17.08732 16.57757 16.05975 15.25448 23.92517 24.41874 23.92261
# [11] 22.67881 18.67844 18.56438 14.17586 17.71928 23.92614 22.83049 21.83903 13.20562 17.96716

cv_glm_fit$predict_fold(mtcars_task2, "validation")
# predictions1 predictions2 predictions3 predictions4 predictions5 predictions6
#     19.16572     17.08732     14.59736     17.71928     25.60515     22.84417
Not sure what behavior you're expecting here: the training and prediction tasks have different numbers of folds and observations. Validation prediction should stack the validation preds, but it can only do that for the three folds that got trained, not the ten folds in the second task.
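A minimal way to see that mismatch, continuing from the example in the issue body (this assumes the sl3 task exposes its origami fold list as task$folds; the 10-fold count follows the comment above, since mtcars_task2 was built without a folds argument):

length(mtcars_task$folds)   # 3 folds: what the wrapped learner was trained on
length(mtcars_task2$folds)  # 10 folds: the prediction task's own (default) folds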
Is the issue here not that the prediction and training tasks have different fold objects? By default, Lrnr_cv predicts cv-preds based on the folds of the prediction task, so for predicting out of sample it needs a task with compatible folds. The following would work, I think:
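(The snippet referenced above was not preserved in this thread; the following is an assumed reconstruction, continuing from the example in the issue body, in which the prediction task is rebuilt with a folds argument matching the training task.)

# Rebuild the prediction task so its fold structure matches the
# 3-fold training task; mtcars_task2_cv is a hypothetical name.
mtcars_task2_cv <- make_sl3_Task(
  data = mtcars[11:30, ], outcome = "mpg",
  covariates = c("cyl", "disp", "hp", "drat", "wt"), folds = 3
)
cv_glm_fit$predict_fold(mtcars_task2_cv, "validation")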