don't add additional classes if FailureModel encountered #1984

Merged: 8 commits, merged on May 23, 2019

add tests of benchmark handling failure models

dagola committed Oct 5, 2017
commit 2e0c899af39a217b9842747aeec5d80698e7c4f9
@@ -262,15 +262,60 @@ test_that("drop option works for BenchmarkResults_operators", {
wrapper.class = "cl")
})

test_that("benchmark handles failure models correctly", {

  # Define task
  task = binaryclass.task

  # Define filter parameter set
  filter_ps = makeParamSet(makeIntegerParam("fw.abs", lower = 1,
    upper = getTaskNFeats(task)))

  # Define tuning control
  ctrl = makeTuneControlRandom(maxit = 10L)

  # Define resampling strategies
  inner = mlr::makeResampleDesc("CV", stratify = FALSE, iters = 2L)
  outer = mlr::makeResampleDesc("CV", stratify = FALSE, iters = 2L)

  # Define learners: identical mock learners that differ only in how
  # training errors are handled ("quiet", "stop", "warn")
  quiet_learner = makeLearner("classif.__mlrmocklearners__3",
    config = list("on.learner.error" = "quiet"))
  quiet_learner = makeFilterWrapper(quiet_learner, fw.method = "chi.squared")
  quiet_learner = makeTuneWrapper(quiet_learner, resampling = inner,
    control = ctrl, par.set = filter_ps, show.info = TRUE)

  stop_learner = makeLearner("classif.__mlrmocklearners__3",
    config = list("on.learner.error" = "stop"))
  stop_learner = makeFilterWrapper(stop_learner, fw.method = "chi.squared")
  stop_learner = makeTuneWrapper(stop_learner, resampling = inner,
    control = ctrl, par.set = filter_ps, show.info = TRUE)

  warn_learner = makeLearner("classif.__mlrmocklearners__3",
    config = list("on.learner.error" = "warn"))
  warn_learner = makeFilterWrapper(warn_learner, fw.method = "chi.squared")
  warn_learner = makeTuneWrapper(warn_learner, resampling = inner,
    control = ctrl, par.set = filter_ps, show.info = TRUE)

  # Tests
  # Expect benchmark to fail with on.learner.error = "stop"
  expect_error(benchmark(learners = stop_learner, tasks = task,
    resamplings = outer, keep.pred = FALSE, models = FALSE, show.info = TRUE))

  # Expect a warning with on.learner.error = "warn"
  expect_warning(benchmark(learners = warn_learner, tasks = task,
    resamplings = outer, keep.pred = FALSE, models = FALSE, show.info = TRUE))

  # Expect only messages with on.learner.error = "quiet";
  # the benchmark should complete despite the failure model
  expect_message({
    bmr = benchmark(learners = quiet_learner, tasks = task,
      resamplings = outer, keep.pred = FALSE, models = FALSE, show.info = TRUE)
  })
  aggr_perf = getBMRAggrPerformances(bmr = bmr)

  # Check result: a BenchmarkResult is still returned, with NA aggregated
  # performance for the failed model
  expect_class(x = bmr, classes = "BenchmarkResult")
  expect_true(object = is.na(aggr_perf[[1]][[1]]))

})