The resamples in tune_race_anova() are not assessed in sequence; they are assessed in random order. For models that take a long time to tune, it is hard to know the current progress (how many resamples are done and how many remain). It would be nice to have a progress indicator showing the count of finished resamples versus the count of remaining ones.
library(kernlab)
library(tidymodels)
library(finetune)

data(cells, package = "modeldata")
cells <- cells %>% select(-case) %>% slice_head(n = 1000)

set.seed(6376)
rs <- bootstraps(cells, times = 5)

svm_spec <-
  svm_rbf(cost = tune(), rbf_sigma = tune()) %>%
  set_engine("kernlab") %>%
  set_mode("classification")

svm_rec <-
  recipe(class ~ ., data = cells) %>%
  step_YeoJohnson(all_predictors()) %>%
  step_normalize(all_predictors())

svm_wflow <-
  workflow() %>%
  add_model(svm_spec) %>%
  add_recipe(svm_rec)

set.seed(1)
svm_grid <- svm_spec %>%
  parameters() %>%
  grid_latin_hypercube(size = 5)

set.seed(2)
svm_wflow %>%
  tune_race_anova(
    resamples = rs,
    grid = svm_grid,
    control = control_race(
      verbose = TRUE,
      verbose_elim = TRUE
    )
  )
#> i Bootstrap4: preprocessor 1/1
#> ✓ Bootstrap4: preprocessor 1/1
#> i Bootstrap4: preprocessor 1/1, model 1/5
#> ✓ Bootstrap4: preprocessor 1/1, model 1/5
#> i Bootstrap4: preprocessor 1/1, model 1/5 (predictions)
#> i Bootstrap4: preprocessor 1/1
#> ✓ Bootstrap4: preprocessor 1/1
#> i Bootstrap4: preprocessor 1/1, model 2/5
#> ✓ Bootstrap4: preprocessor 1/1, model 2/5
#> i Bootstrap4: preprocessor 1/1, model 2/5 (predictions)
#> i Bootstrap4: preprocessor 1/1
#> ✓ Bootstrap4: preprocessor 1/1
#> i Bootstrap4: preprocessor 1/1, model 3/5
#> ✓ Bootstrap4: preprocessor 1/1, model 3/5
#> i Bootstrap4: preprocessor 1/1, model 3/5 (predictions)
#> i Bootstrap4: preprocessor 1/1
#> ✓ Bootstrap4: preprocessor 1/1
#> i Bootstrap4: preprocessor 1/1, model 4/5
#> ✓ Bootstrap4: preprocessor 1/1, model 4/5
#> i Bootstrap4: preprocessor 1/1, model 4/5 (predictions)
#> i Bootstrap4: preprocessor 1/1
#> ✓ Bootstrap4: preprocessor 1/1
#> i Bootstrap4: preprocessor 1/1, model 5/5
#> ✓ Bootstrap4: preprocessor 1/1, model 5/5
#> i Bootstrap4: preprocessor 1/1, model 5/5 (predictions)
#> i Bootstrap1: preprocessor 1/1
#> ✓ Bootstrap1: preprocessor 1/1
#> i Bootstrap1: preprocessor 1/1, model 1/5
#> ✓ Bootstrap1: preprocessor 1/1, model 1/5
#> i Bootstrap1: preprocessor 1/1, model 1/5 (predictions)
#> i Bootstrap1: preprocessor 1/1
#> ✓ Bootstrap1: preprocessor 1/1
#> i Bootstrap1: preprocessor 1/1, model 2/5
#> ✓ Bootstrap1: preprocessor 1/1, model 2/5
#> i Bootstrap1: preprocessor 1/1, model 2/5 (predictions)
#> i Bootstrap1: preprocessor 1/1
#> ✓ Bootstrap1: preprocessor 1/1
#> i Bootstrap1: preprocessor 1/1, model 3/5
#> ✓ Bootstrap1: preprocessor 1/1, model 3/5
#> i Bootstrap1: preprocessor 1/1, model 3/5 (predictions)
#> i Bootstrap1: preprocessor 1/1
#> ✓ Bootstrap1: preprocessor 1/1
#> i Bootstrap1: preprocessor 1/1, model 4/5
#> ✓ Bootstrap1: preprocessor 1/1, model 4/5
#> i Bootstrap1: preprocessor 1/1, model 4/5 (predictions)
#> i Bootstrap1: preprocessor 1/1
#> ✓ Bootstrap1: preprocessor 1/1
#> i Bootstrap1: preprocessor 1/1, model 5/5
#> ✓ Bootstrap1: preprocessor 1/1, model 5/5
#> i Bootstrap1: preprocessor 1/1, model 5/5 (predictions)
#> i Bootstrap3: preprocessor 1/1
#> ✓ Bootstrap3: preprocessor 1/1
#> i Bootstrap3: preprocessor 1/1, model 1/5
#> ✓ Bootstrap3: preprocessor 1/1, model 1/5
#> i Bootstrap3: preprocessor 1/1, model 1/5 (predictions)
#> i Bootstrap3: preprocessor 1/1
#> ✓ Bootstrap3: preprocessor 1/1
#> i Bootstrap3: preprocessor 1/1, model 2/5
#> ✓ Bootstrap3: preprocessor 1/1, model 2/5
#> i Bootstrap3: preprocessor 1/1, model 2/5 (predictions)
#> i Bootstrap3: preprocessor 1/1
#> ✓ Bootstrap3: preprocessor 1/1
#> i Bootstrap3: preprocessor 1/1, model 3/5
#> ✓ Bootstrap3: preprocessor 1/1, model 3/5
#> i Bootstrap3: preprocessor 1/1, model 3/5 (predictions)
#> i Bootstrap3: preprocessor 1/1
#> ✓ Bootstrap3: preprocessor 1/1
#> i Bootstrap3: preprocessor 1/1, model 4/5
#> ✓ Bootstrap3: preprocessor 1/1, model 4/5
#> i Bootstrap3: preprocessor 1/1, model 4/5 (predictions)
#> i Bootstrap3: preprocessor 1/1
#> ✓ Bootstrap3: preprocessor 1/1
#> i Bootstrap3: preprocessor 1/1, model 5/5
#> ✓ Bootstrap3: preprocessor 1/1, model 5/5
#> i Bootstrap3: preprocessor 1/1, model 5/5 (predictions)
#> ℹ Racing will maximize the roc_auc metric.
#> ℹ Resamples are analyzed in a random order.
#> ℹ Bootstrap4: 3 eliminated; 2 candidates remain.
#> i Bootstrap2: preprocessor 1/1
#> ✓ Bootstrap2: preprocessor 1/1
#> i Bootstrap2: preprocessor 1/1, model 1/2
#> ✓ Bootstrap2: preprocessor 1/1, model 1/2
#> i Bootstrap2: preprocessor 1/1, model 1/2 (predictions)
#> i Bootstrap2: preprocessor 1/1
#> ✓ Bootstrap2: preprocessor 1/1
#> i Bootstrap2: preprocessor 1/1, model 2/2
#> ✓ Bootstrap2: preprocessor 1/1, model 2/2
#> i Bootstrap2: preprocessor 1/1, model 2/2 (predictions)
#> ℹ Bootstrap2: 0 eliminated; 2 candidates remain.
#> i Bootstrap5: preprocessor 1/1
#> ✓ Bootstrap5: preprocessor 1/1
#> i Bootstrap5: preprocessor 1/1, model 1/2
#> ✓ Bootstrap5: preprocessor 1/1, model 1/2
#> i Bootstrap5: preprocessor 1/1, model 1/2 (predictions)
#> i Bootstrap5: preprocessor 1/1
#> ✓ Bootstrap5: preprocessor 1/1
#> i Bootstrap5: preprocessor 1/1, model 2/2
#> ✓ Bootstrap5: preprocessor 1/1, model 2/2
#> i Bootstrap5: preprocessor 1/1, model 2/2 (predictions)
#> # Tuning results
#> # Bootstrap sampling
#> # A tibble: 5 x 5
#>   splits             id         .order .metrics          .notes
#>   <list>             <chr>       <int> <list>            <list>
#> 1 <split [1000/370]> Bootstrap1      2 <tibble [10 × 6]> <tibble [0 × 1]>
#> 2 <split [1000/358]> Bootstrap3      3 <tibble [10 × 6]> <tibble [0 × 1]>
#> 3 <split [1000/365]> Bootstrap4      1 <tibble [10 × 6]> <tibble [0 × 1]>
#> 4 <split [1000/365]> Bootstrap2      4 <tibble [4 × 6]>  <tibble [0 × 1]>
#> 5 <split [1000/360]> Bootstrap5      5 <tibble [4 × 6]>  <tibble [0 × 1]>
We can't really do that in parallel (at least not with foreach). We might eventually move to the future package for parallelism but, at this point, we don't know how many resamples have finished when running in parallel.
You don't have to run the resamples in parallel, though. There is also an option to turn off the additional randomization.
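As a sketch, assuming `control_race()` exposes a `randomize` argument in your installed finetune version (check `?control_race`), turning it off makes the resamples run in the order they appear in the rset, so the verbose log maps directly onto the resample index:

```r
library(finetune)

# Assumption: control_race() has a `randomize` argument; setting it to
# FALSE skips the random ordering of resamples, so the verbose output
# walks through Bootstrap1, Bootstrap2, ... in sequence.
ctrl <- control_race(
  verbose      = TRUE,
  verbose_elim = TRUE,
  randomize    = FALSE
)

# Then pass it to the tuning call from the reprex above:
# svm_wflow %>%
#   tune_race_anova(resamples = rs, grid = svm_grid, control = ctrl)
```

Note that the first elimination still happens only after the burn-in resamples, so ordering helps readability of the log rather than speeding anything up.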
This issue has been automatically locked. If you believe you have found a related problem, please file a new issue (with a reprex: https://reprex.tidyverse.org) and link to this issue.
Created on 2021-05-08 by the reprex package (v1.0.0)