Commit bc1e0cc: fix check

mllg committed Jan 23, 2019
1 parent b21f8b7 commit bc1e0cc
Showing 15 changed files with 64 additions and 66 deletions.
4 changes: 2 additions & 2 deletions R/AutoTuner.R
@@ -42,7 +42,7 @@
#' task = mlr3::mlr_tasks$get("iris")
#' learner = mlr3::mlr_learners$get("classif.rpart")
#' resampling = mlr3::mlr_resamplings$get("holdout")
#' measures = mlr3::mlr_measures$mget("mmce")
#' measures = mlr3::mlr_measures$mget("classif.mmce")
#' task$measures = measures
#' param_set = paradox::ParamSet$new(
#' params = list(paradox::ParamDbl$new("cp", lower = 0.001, upper = 0.1)))
@@ -131,4 +131,4 @@ AutoTuner = R6Class("AutoTuner", inherit = mlr3::Learner,
}
}
)
)
)
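
To see how the corrected example fits together, here is a minimal sketch that continues it with the `AutoTuner` constructor used in `tests/testthat/test_AutoTuner.R` further down. It reflects the development API at the time of this commit (`TerminatorEvaluations`, `tuner_settings`, `$train()`/`$predict()` on the AutoTuner), which may differ from later releases.

```r
library(mlr3tuning)

task = mlr3::mlr_tasks$get("iris")
learner = mlr3::mlr_learners$get("classif.rpart")
resampling = mlr3::mlr_resamplings$get("holdout")
task$measures = mlr3::mlr_measures$mget("classif.mmce")  # corrected measure id
param_set = paradox::ParamSet$new(
  params = list(paradox::ParamDbl$new("cp", lower = 0.001, upper = 0.1)))

# stop the inner tuning after 5 evaluations
terminator = TerminatorEvaluations$new(5L)

at = AutoTuner$new(learner, resampling, param_set, terminator,
  tuner = TunerRandomSearch, tuner_settings = list(batch_size = 10L))

# the AutoTuner behaves like a regular mlr3 learner
at$train(task)
at$predict(task)
at$tuner$tune_result()$param_vals  # best hyperparameters found during training
```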
26 changes: 13 additions & 13 deletions R/FitnessFunction.R
@@ -3,15 +3,15 @@
#' @description
#' Implements a fitness function for \pkg{mlr3} as `R6` class `FitnessFunction`. An object of that class
#' contains all relevant information necessary to conduct tuning (`mlr3::Task`, `mlr3::Learner`, `mlr3::Resampling`, `mlr3::Measure`s,
#' `paradox::ParamSet`).
#' `paradox::ParamSet`).
#' After defining a fitness function, we can use it to predict the generalization error of a specific learner configuration
#' defined by its hyperparameters (using `$eval()`).
#' The `FitnessFunction` class is the basis for further tuning strategies, e.g., grid or random search.
#' defined by its hyperparameters (using `$eval()`).
#' The `FitnessFunction` class is the basis for further tuning strategies, e.g., grid or random search.
#'
#' @section Usage:
#' ```
#' # Construction
#' ff = FitnessFunction$new(task, learner, resampling, param_set,
#' ff = FitnessFunction$new(task, learner, resampling, param_set,
#' ctrl = tune_control())
#'
#' # Public members
@@ -22,7 +22,7 @@
#' ff$ctrl
#' ff$hooks
#' ff$bmr
#'
#'
#' # Public methods
#' ff$eval(x)
#' ff$eval_vectorized(xts)
@@ -70,18 +70,18 @@
#' task = mlr3::mlr_tasks$get("iris")
#' learner = mlr3::mlr_learners$get("classif.rpart")
#' resampling = mlr3::mlr_resamplings$get("holdout")
#' measures = mlr3::mlr_measures$mget("mmce")
#' measures = mlr3::mlr_measures$mget("classif.mmce")
#' task$measures = measures
#' param_set = paradox::ParamSet$new(params = list(
#' paradox::ParamDbl$new("cp", lower = 0.001, upper = 0.1)))
#'
#'
#' ff = FitnessFunction$new(
#' task = task,
#' learner = learner,
#' resampling = resampling,
#' param_set = param_set
#' )
#'
#'
#' ff$eval(list(cp = 0.05, minsplit = 5))
#' ff$eval(list(cp = 0.01, minsplit = 3))
#' ff$get_best()
@@ -120,15 +120,15 @@ FitnessFunction = R6Class("FitnessFunction",
})

self$run_hooks("update_start")
# bmr = mlr3::benchmark(design = mlr3::expand_grid(task = list(self$task), learner = learners,
# bmr = mlr3::benchmark(design = mlr3::expand_grid(task = list(self$task), learner = learners,
# resampling = list(self$resampling), measures = self$measures), ctrl = self$ctrl)

# bmr = mlr3::benchmark(design = data.table::data.table(task = list(self$task), learner = learners,
# bmr = mlr3::benchmark(design = data.table::data.table(task = list(self$task), learner = learners,
# resampling = list(self$resampling), measures = self$measures), ctrl = self$ctrl)
bmr = mlr3::benchmark(design = data.table::data.table(task = list(self$task), learner = learners,

bmr = mlr3::benchmark(design = data.table::data.table(task = list(self$task), learner = learners,
resampling = list(self$resampling)), ctrl = self$ctrl)

if (is.null(self$bmr)) {
bmr$data$dob = 1L
self$bmr = bmr
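
The substantive change in `$eval_vectorized()` above is that the `measures` column is dropped from the benchmark design, since the measures now travel with the task. For reference, a rough standalone sketch of the design table that call assembles, with placeholder objects; the `mlr3::benchmark()` interface is the development API of this commit and is left commented out:

```r
task = mlr3::mlr_tasks$get("iris")
resampling = mlr3::mlr_resamplings$get("holdout")

# one learner object per hyperparameter configuration to evaluate
learners = list(
  mlr3::mlr_learners$get("classif.rpart"),
  mlr3::mlr_learners$get("classif.rpart")
)

# list columns of length 1 are recycled to one row per learner
design = data.table::data.table(
  task = list(task),
  learner = learners,
  resampling = list(resampling)
)

# bmr = mlr3::benchmark(design = design, ctrl = tune_control())  # as in the diff above
```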
11 changes: 5 additions & 6 deletions R/TunerGridSearch.R
@@ -30,7 +30,7 @@
#' task = mlr3::mlr_tasks$get("iris")
#' learner = mlr3::mlr_learners$get("classif.rpart")
#' resampling = mlr3::mlr_resamplings$get("cv")
#' measures = mlr3::mlr_measures$mget("mmce")
#' measures = mlr3::mlr_measures$mget("classif.mmce")
#' task$measures = measures
#' param_set = paradox::ParamSet$new(
#' params = list(
@@ -65,13 +65,12 @@ TunerGridSearch = R6Class("TunerGridSearch",
# note: generate_design_grid offers param_resolutions, so theoretically we could allow different resolutions per parameter
ps = self$ff$param_set
xts = paradox::generate_design_grid(ps, resolution = self$settings$resolution)
if (self$ff$param_set$has_trafo)

if (self$ff$param_set$has_trafo)
xts = self$ff$param_set$trafo(xts)
xts = mlr3misc::transpose(xts)

xts = mlr3misc::transpose(xts$data)
self$ff$eval_vectorized(xts)
}
)
)

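The functional change in `tune_step()` above is that `paradox::generate_design_grid()` (in the paradox version this commit targets) returns a design object rather than a plain table, so the candidate configurations are now pulled out of its `$data` slot before transposing. A small sketch under that assumption, with a hypothetical two-parameter set:

```r
library(paradox)

ps = ParamSet$new(params = list(
  ParamDbl$new("cp", lower = 0.001, upper = 0.1),
  ParamInt$new("minsplit", lower = 1L, upper = 10L)
))

design = generate_design_grid(ps, resolution = 3)
design$data                             # data.table with 3 x 3 = 9 candidate configurations
xts = mlr3misc::transpose(design$data)  # list of named lists, the format eval_vectorized() expects
```
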
11 changes: 5 additions & 6 deletions R/TunerRandomSearch.R
@@ -30,7 +30,7 @@
#' task = mlr3::mlr_tasks$get("iris")
#' learner = mlr3::mlr_learners$get("classif.rpart")
#' resampling = mlr3::mlr_resamplings$get("cv")
#' measures = mlr3::mlr_measures$mget("mmce")
#' measures = mlr3::mlr_measures$mget("classif.mmce")
#' task$measures = measures
#' param_set = paradox::ParamSet$new(
#' params = list(
@@ -60,16 +60,15 @@ TunerRandomSearch = R6Class("TunerRandomSearch",
n = min(self$settings$batch_size, self$terminator$remaining)
ps = self$ff$param_set
xts = generate_design_random(ps, n)
if (nrow(xts) > uniqueN(xts))

if (nrow(xts$data) > uniqueN(xts$data))
logger::log_warn("Duplicated parameter values detected.", namespace = "mlr3")

if (self$ff$param_set$has_trafo)
xts = self$ff$param_set$trafo(xts)
xts = transpose(xts)

xts = transpose(xts$data)
self$ff$eval_vectorized(xts)
}
)
)

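The same `$data` adjustment applies to `paradox::generate_design_random()` here, and the tuner additionally warns when the random draw contains duplicated configurations. A sketch of that check, assuming the same design return type (with `data.table` and `logger` available, as in the package code above):

```r
library(paradox)
library(data.table)

ps = ParamSet$new(params = list(
  ParamDbl$new("cp", lower = 0.001, upper = 0.1)
))

design = generate_design_random(ps, n = 20)

# warn if two of the sampled configurations are identical
if (nrow(design$data) > uniqueN(design$data)) {
  logger::log_warn("Duplicated parameter values detected.", namespace = "mlr3")
}

xts = mlr3misc::transpose(design$data)  # handed to eval_vectorized()
```
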
6 changes: 3 additions & 3 deletions man/AutoTuner.Rd

(Generated file; diff not rendered.)

4 changes: 2 additions & 2 deletions man/FitnessFunction.Rd

(Generated file; diff not rendered.)

2 changes: 1 addition & 1 deletion man/TunerGridSearch.Rd

(Generated file; diff not rendered.)

2 changes: 1 addition & 1 deletion man/TunerRandomSearch.Rd

(Generated file; diff not rendered.)

18 changes: 9 additions & 9 deletions tests/testthat/test_AutoTuner.R
@@ -5,7 +5,7 @@ test_that("AutoTuner", {
inner_folds = 4L
inner_evals = 5L

p_measures = c("mmce", "time_train", "time_both")
p_measures = c("classif.mmce", "time_train", "time_both")

task = mlr3::mlr_tasks$get("iris")

@@ -23,34 +23,34 @@

terminator = TerminatorEvaluations$new(inner_evals)

at = AutoTuner$new(learner, resampling, param_set, terminator, tuner = TunerRandomSearch,
at = AutoTuner$new(learner, resampling, param_set, terminator, tuner = TunerRandomSearch,
tuner_settings = list(batch_size = 10L))

# Nested Resampling:
outer_resampling = mlr3::mlr_resamplings$get("cv")
outer_resampling$param_vals = list(folds = outer_folds)
r = mlr3::resample(task, at, outer_resampling)

# Nested Resampling:
checkmate::expect_data_table(r$data, nrow = outer_folds)
nuisance = lapply(r$data$learner, function (autotuner) {
checkmate::expect_data_table(autotuner$tuner$ff$bmr$data, nrow = inner_evals * inner_folds)
checkmate::expect_data_table(autotuner$tuner$ff$bmr$aggregated, nrow = inner_evals)
expect_equal(names(autotuner$tuner$tune_result()$performance), p_measures)
expect_equal(names(autotuner$tuner$tune_result()$performance), unname(map_chr(measures, "id")))
})

row_ids_inner = lapply(r$data$learner, function (it) {
it$tuner$ff$task$row_ids[[1]]
it$tuner$ff$task$row_ids
})
row_ids_all = task$row_ids[[1]]
row_ids_all = task$row_ids

expect_equal(sort(unique(unlist(row_ids_inner))), sort(row_ids_all))
nuisance = lapply(row_ids_inner, function (ids) {
expect_true(any(! row_ids_all %in% ids))
})


at2 = AutoTuner$new(learner, resampling, param_set, terminator, tuner = TunerRandomSearch,
at2 = AutoTuner$new(learner, resampling, param_set, terminator, tuner = TunerRandomSearch,
tuner_settings = list(batch_size = 10L))

expect_null(at2$tuner)
@@ -61,4 +61,4 @@
checkmate::expect_r6(at2$predict(task), "Prediction")

expect_equal(at2$learner$param_vals, at2$tuner$tune_result()$param_vals)
})
})
2 changes: 1 addition & 1 deletion tests/testthat/test_FitnessFunction.R
@@ -5,7 +5,7 @@ test_that("Construction", {
learner = mlr3::mlr_learners$get("classif.rpart")
learner$param_vals = list(minsplit = 3)
resampling = mlr3::mlr_resamplings$get("holdout")
measures = mlr3::mlr_measures$mget("mmce")
measures = mlr3::mlr_measures$mget("classif.mmce")
task$measures = measures
param_set = paradox::ParamSet$new(params = list(paradox::ParamDbl$new("cp", lower = 0.001, upper = 0.1)))

8 changes: 4 additions & 4 deletions tests/testthat/test_TunerGridSearch.R
@@ -6,11 +6,11 @@ test_that("TunerGridSearch", {

learner = mlr3::mlr_learners$get("classif.rpart")
learner$param_vals = list(minsplit = 3)

resampling = mlr3::mlr_resamplings$get("cv")
resampling$param_vals = list(folds = 2)
measures = mlr3::mlr_measures$mget("mmce")

measures = mlr3::mlr_measures$mget("classif.mmce")
task$measures = measures

terminator = TerminatorEvaluations$new(5)
@@ -29,7 +29,7 @@ test_that("TunerGridSearch", {
expect_equal(gs$settings$resolution, 5)
result = gs$tune()$tune_result()
expect_list(result)
expect_number(result$performance["mmce"], lower = measures$mmce$range[1], upper = measures$mmce$range[2])
expect_number(result$performance["mmce"], lower = measures$classif.mmce$range[1], upper = measures$classif.mmce$range[2])
expect_list(result$param_vals, len = 2)
expect_equal(result$param_vals$minsplit, 3)
})
8 changes: 4 additions & 4 deletions tests/testthat/test_TunerRandomSearch.R
@@ -4,14 +4,14 @@ test_that("TunerRandomSearch", {
n_folds = 4

task = mlr3::mlr_tasks$get("iris")

learner = mlr3::mlr_learners$get("classif.rpart")
learner$param_vals = list(minsplit = 3)

resampling = mlr3::mlr_resamplings$get("cv")
resampling$param_vals = list(folds = n_folds)

measures = mlr3::mlr_measures$mget(c("mmce", "time_train", "time_both"))
measures = mlr3::mlr_measures$mget(c("classif.mmce", "time_train", "time_both"))
task$measures = measures

param_set = paradox::ParamSet$new(params = list(
@@ -27,7 +27,7 @@ test_that("TunerRandomSearch", {
expect_r6(rs, "TunerRandomSearch")
expect_data_table(bmr$data, nrow = n_folds*5)
expect_list(result)
expect_number(result$performance["mmce"], lower = measures$mmce$range[1], upper = measures$mmce$range[2])
expect_number(result$performance["mmce"], lower = measures$classif.mmce$range[1], upper = measures$classif.mmce$range[2])
expect_list(result$param_vals, len = 2)
expect_equal(result$param_vals$minsplit, 3)
})
6 changes: 3 additions & 3 deletions vignettes/tuning-01-fitness-function.Rmd
@@ -23,15 +23,15 @@ set.seed(123)

`mlr3tuning` is an extension package for `mlr3` that adds hyperparameter tuning.

## Basis of Tuning
## Basis of Tuning

Before we can tune hyperparameters, we need to define the task, the learner, how a hyperparameter setting is evaluated (resampling and performance measure), and the hyperparameter search space. Here, we use the [iris data set](https://en.wikipedia.org/wiki/Iris_flower_data_set) and a decision tree from `rpart`:

```{r}
task = mlr3::mlr_tasks$get("iris")
learner = mlr3::mlr_learners$get("classif.rpart")
resampling = mlr3::mlr_resamplings$get("cv")
measures = mlr3::mlr_measures$mget("mmce")
measures = mlr3::mlr_measures$mget("classif.mmce")
task$measures = measures
param_set = paradox::ParamSet$new(params = list(paradox::ParamDbl$new("cp", lower = 0.001, upper = 0.1)))
```
@@ -83,4 +83,4 @@ ff$eval(c(cp = 0.05, minsplit = 2))
ff$bmr$aggregated
```

Note that `$eval()` is the basis of the tuning: the method treats the learner as a black-box function and evaluates a given parameter setting with the resampling strategy stored in the fitness function.
Note that `$eval()` is the basis of the tuning: the method treats the learner as a black-box function and evaluates a given parameter setting with the resampling strategy stored in the fitness function.
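
Building on the `ff` object constructed earlier in this vignette, several settings can also be evaluated in one call and the best one retrieved afterwards. A minimal sketch using the `$eval_vectorized()` and `$get_best()` methods listed in the `FitnessFunction` usage section; their exact behaviour is assumed from that listing:

```{r}
# evaluate a few candidate values of cp in one go
xts = list(
  list(cp = 0.01),
  list(cp = 0.05),
  list(cp = 0.10)
)
ff$eval_vectorized(xts)

ff$bmr$aggregated  # one aggregated result row per evaluated setting
ff$get_best()      # the best configuration found so far
```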
2 changes: 1 addition & 1 deletion vignettes/tuning-02-tuner.Rmd
@@ -31,7 +31,7 @@ As mentioned in the `tuning-01-fitness-function` vignette, we have to initialize
task = mlr3::mlr_tasks$get("iris")
learner = mlr3::mlr_learners$get("classif.rpart")
resampling = mlr3::mlr_resamplings$get("holdout")
measures = mlr3::mlr_measures$mget("mmce")
measures = mlr3::mlr_measures$mget("classif.mmce")
task$measures = measures
param_set = paradox::ParamSet$new(params = list(paradox::ParamDbl$new("cp", lower = 0.001, upper = 0.1)))
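
For completeness, a tuner built on top of this fitness function is then run via `$tune()`; the shape of its result is visible in the tests above. A minimal sketch (the tuner construction itself is not shown in this diff, so `gs` is assumed to be an already created `TunerGridSearch`):

```{r, eval = FALSE}
# gs: a TunerGridSearch wrapping the fitness function ff and a terminator
result = gs$tune()$tune_result()

result$performance  # named numeric vector, e.g. result$performance["mmce"]
result$param_vals   # list with the best hyperparameter values found
```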
