---
title: "Feature Selection"
output: rmarkdown::html_vignette
vignette: >
  %\VignetteIndexEntry{Feature Selection}
  %\VignetteEngine{knitr::rmarkdown}
  \usepackage[utf8]{inputenc}
---
```{r, echo = FALSE, message=FALSE}
library("mlr")
library("BBmisc")
library("ParamHelpers")
library("ggplot2")
library("lattice")
library("kernlab")
# show grouped code output instead of single lines
knitr::opts_chunk$set(collapse = TRUE)
set.seed(123)
```
Often, data sets include a large number of features.
The technique of extracting a subset of relevant features is called feature selection.
Feature selection can enhance the interpretability of the model, speed up the learning process and improve the learner performance.
In the literature, two different approaches exist to identify the relevant features: one is called "filtering" and the other is often referred to as "feature subset selection" or "wrapper methods".
What is the difference?
- **Filter**: An external algorithm computes a ranking of the variables (e.g. based on the correlation with the response).
Then, features are subsetted according to a certain criterion, e.g. an absolute number or a percentage of the number of variables.
The selected features will then be used to fit a model (with optional hyperparameters selected by tuning).
This calculation is usually cheaper than "feature subset selection" in terms of computation time.
- **Feature subset selection**: Here, no ranking of the features is done.
Instead, a (possibly random) subset of the features is selected, a model is fit on it and the performance is checked.
This is done for many feature combinations in a CV setting and the best combination is reported.
This method is computationally very intensive as a lot of models are fitted.
Also, strictly speaking, all of these models would need to be tuned before the performance is estimated, which would require an additional nested level in a CV setting.
After all this, a model is fitted on the selected subset of features (with optional hyperparameters selected by tuning).
`mlr` supports both **[filter methods](feature_selection.html#filter-methods){target="_blank"}** and **[wrapper methods](feature_selection.html#wrapper-methods){target="_blank"}**.
# Filter methods
Filter methods assign an importance value to each feature.
Based on these values the features can be ranked and a feature subset can be selected.
You can see [here](filter_methods.html#current-methods){target="_blank"} which algorithms are implemented.
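Alternatively, you can query the implemented filter methods directly from within R via `listFilterMethods()`:
```{r, eval = FALSE}
# List all filter methods registered in mlr
listFilterMethods()
```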
## Calculating the feature importance
Different methods for calculating the feature importance are built into `mlr`'s function `generateFilterValuesData()`.
Currently, classification, regression and survival analysis tasks are supported.
A table showing all available methods can be found in article [filter methods](filter_methods.html){target="_blank"}.
The most basic approach is to use `generateFilterValuesData()` directly on a `Task()` with a character string specifying the filter method.
```{r}
fv = generateFilterValuesData(iris.task, method = "FSelectorRcpp_information.gain")
fv
```
`fv` is a `FilterValues()` object and `fv$data` contains a `data.frame` that gives the importance values for all features.
Optionally, a vector of filter methods can be passed.
```{r}
fv2 = generateFilterValuesData(iris.task,
method = c("FSelectorRcpp_information.gain", "FSelector_chi.squared"))
fv2$data
```
A bar plot of importance values for the individual features can be obtained using function `plotFilterValues()`.
```{r, fig.width=10}
plotFilterValues(fv2) + ggpubr::theme_pubr()
```
By default `plotFilterValues()` will create facetted subplots if multiple filter methods are passed as input to `generateFilterValuesData()`.
According to the `"information.gain"` measure, `Petal.Width` and `Petal.Length` contain the most information about the target variable `Species`.
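If you prefer to work with the importance values directly rather than a plot, they can be extracted from `fv2$data` with base R. A minimal sketch, assuming the long format with columns `filter` and `value` shown above:
```{r}
# Extract the information gain values and sort them in decreasing order
iv = fv2$data[fv2$data$filter == "FSelectorRcpp_information.gain", ]
iv[order(iv$value, decreasing = TRUE), ]
```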
## Selecting a feature subset
With `mlr`'s function `filterFeatures()` you can create a new `Task()` by leaving out features of lower importance.
There are several ways to select a feature subset based on feature importance values:
* Keep a certain **absolute number** (`abs`) of features with highest importance.
* Keep a certain **percentage** (`perc`) of features with highest importance.
* Keep all features whose importance exceeds a certain **threshold value** (`threshold`).
Function `filterFeatures()` supports these three methods as shown in the following example.
Moreover, you can either specify the ``method`` for calculating the feature importance or you can use previously computed importance values via argument ``fval``.
```{r}
# Keep the 2 most important features
filtered.task = filterFeatures(iris.task, method = "FSelectorRcpp_information.gain", abs = 2)
# Keep the 25% most important features
filtered.task = filterFeatures(iris.task, fval = fv, perc = 0.25)
# Keep all features with importance greater than 0.5
filtered.task = filterFeatures(iris.task, fval = fv, threshold = 0.5)
filtered.task
```
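The features retained in the resulting `Task()` can be listed with `getTaskFeatureNames()`:
```{r}
# Features remaining after the threshold filter above
getTaskFeatureNames(filtered.task)
```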
## Fuse a learner with a filter method
Often feature selection based on a filter method is part of the data preprocessing and in a subsequent step a learning method is applied to the filtered data.
In a proper experimental setup you might want to automate the selection of the features so that it can be part of the validation method of your choice.
A Learner (`makeLearner()`) can be fused with a filter method by function `makeFilterWrapper()`.
The resulting Learner (`makeLearner()`) has the additional class attribute `FilterWrapper()`.
This has the advantage that the filter parameters (`fw.method`, `fw.perc`, `fw.abs`) can now be treated as hyperparameters.
They can be tuned in a nested CV setting at the same level as the algorithm hyperparameters.
You can think of it as "tuning the dataset".
### Using fixed parameters
In the following example we calculate the 10-fold cross-validated error rate [mmce](measures.html){target="_blank"} of the k-nearest neighbor classifier (`FNN::knn()`) with preceding feature selection on the `iris` (`datasets::iris()`) data set.
We use `information.gain` as the importance measure, with the aim of subsetting the dataset to the two features with the highest importance.
In each resampling iteration feature selection is carried out on the corresponding training data set before fitting the learner.
```{r}
lrn = makeFilterWrapper(learner = "classif.fnn",
fw.method = "FSelectorRcpp_information.gain", fw.abs = 2)
rdesc = makeResampleDesc("CV", iters = 10)
r = resample(learner = lrn, task = iris.task, resampling = rdesc, show.info = FALSE, models = TRUE)
r$aggr
```
You may want to know which features have been used.
Luckily, we have called `resample()` with the argument `models = TRUE`, which means that `r$models` contains a `list` of models (`makeWrappedModel()`) fitted in the individual resampling iterations.
In order to access the selected feature subsets we can call `getFilteredFeatures()` on each model.
```{r}
sfeats = sapply(r$models, getFilteredFeatures)
table(sfeats)
```
The result shows that `Petal.Length` and `Petal.Width` have been chosen in all ten folds (remember we asked for the two best features, i.e. $10 \times 2$ selections in total).
The selection of features seems to be very stable for this dataset.
The features `Sepal.Length` and `Sepal.Width` did not make it into a single fold.
### Tuning the size of the feature subset
In the above examples the number/percentage of features to select or the threshold value have been arbitrarily chosen.
However, it is usually unclear which subset of features will result in the best performance.
To answer this question, we can [tune](tune.html){target="_blank"} the number of features that are taken (after the ranking of the chosen filter method has been applied) as a subset in each fold.
Three tunable parameters exist in `mlr`, documented in `makeFilterWrapper()`:
* The percentage of features selected (`fw.perc`)
* The absolute number of features selected (`fw.abs`)
* The threshold of the filter method (`fw.threshold`)
In the following regression example we consider the `BostonHousing` (`mlbench::BostonHousing()`) data set.
We use a Support Vector Machine and determine the optimal percentage value for feature selection such that the 3-fold cross-validated mean squared error (`mse()`) of the learner is minimal.
Additionally, we [tune](tune.html){target="_blank"} the hyperparameters of the algorithm at the same time.
As the search strategy for tuning, a random search with five iterations is used.
```{r}
lrn = makeFilterWrapper(learner = "regr.ksvm", fw.method = "FSelector_chi.squared")
ps = makeParamSet(makeNumericParam("fw.perc", lower = 0, upper = 1),
makeNumericParam("C", lower = -10, upper = 10,
trafo = function(x) 2^x),
makeNumericParam("sigma", lower = -10, upper = 10,
trafo = function(x) 2^x)
)
rdesc = makeResampleDesc("CV", iters = 3)
res = tuneParams(lrn, task = bh.task, resampling = rdesc, par.set = ps,
control = makeTuneControlRandom(maxit = 5))
res
```
The performance of all percentage values visited during tuning is:
```{r}
df = as.data.frame(res$opt.path)
df[, -ncol(df)]
```
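For a quick visual impression of how the percentage value relates to the performance, the optimization path can be plotted with `ggplot2` (a sketch, assuming the performance column is named `mse.test.mean` as in the printout above):
```{r}
# Cross-validated MSE for each visited percentage value
ggplot(df, aes(x = fw.perc, y = mse.test.mean)) +
  geom_point()
```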
The optimal percentage and the corresponding performance can be accessed as follows:
```{r}
res$x
res$y
```
After tuning we can generate a new wrapped learner with the optimal percentage value for further use (e.g. to predict on new data).
```{r}
lrn = makeFilterWrapper(learner = "regr.ksvm", fw.method = "FSelector_chi.squared",
fw.perc = res$x$fw.perc, C = res$x$C, sigma = res$x$sigma)
mod = train(lrn, bh.task)
mod
getFilteredFeatures(mod)
```
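The re-trained wrapper can be used like any other model. A minimal sketch of predicting and evaluating (here simply on the training task; in practice you would pass new observations via `newdata`):
```{r}
pred = predict(mod, task = bh.task)
performance(pred, measures = mse)
```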
# Wrapper methods
Wrapper methods use the performance of a learning algorithm to assess the usefulness of a feature set.
In order to select a feature subset a learner is trained repeatedly on different feature subsets and the subset which leads to the best learner performance is chosen.
In order to use the wrapper approach we have to decide:
* How to assess the performance: This involves choosing a performance measure that serves as feature selection criterion and a resampling strategy.
* Which learning method to use.
* How to search the space of possible feature subsets.
The search strategy is defined by functions following the naming convention
``makeFeatSelControl<search_strategy>``.
The following search strategies are available:
* Exhaustive search `makeFeatSelControlExhaustive` (`?FeatSelControl()`),
* Genetic algorithm `makeFeatSelControlGA` (`?FeatSelControl()`),
* Random search `makeFeatSelControlRandom` (`?FeatSelControl()`),
* Deterministic forward or backward search `makeFeatSelControlSequential` (`?FeatSelControl()`).
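Each constructor comes with its own control parameters, documented in `?FeatSelControl`. For illustration, control objects for a genetic algorithm and a backward search could be created as follows:
```{r}
# Genetic algorithm limited to 10 iterations
ctrl.ga = makeFeatSelControlGA(maxit = 10L)
# Sequential backward search: start with all features and greedily remove them
ctrl.sbs = makeFeatSelControlSequential(method = "sbs")
```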
## Select a feature subset
Feature selection can be conducted with function `selectFeatures()`.
In the following example we perform a random search on the
`Wisconsin Prognostic Breast Cancer` (`TH.data::wpbc()`) data set.
As learning method we use the `Cox proportional hazards model` (`survival::coxph()`).
The performance is assessed by the holdout estimate of the concordance index [cindex](measures.html){target="_blank"}.
```{r}
# Specify the search strategy
ctrl = makeFeatSelControlRandom(maxit = 20L)
ctrl
```
``ctrl`` is a `FeatSelControl()` object that contains information about the search strategy and potential parameter values.
```{r}
# Resample description
rdesc = makeResampleDesc("Holdout")
# Select features
sfeats = selectFeatures(learner = "surv.coxph", task = wpbc.task, resampling = rdesc,
control = ctrl, show.info = FALSE)
sfeats
```
``sfeats`` is a `FeatSelResult` (`selectFeatures()`) object.
The selected features and the corresponding performance can be accessed as follows:
```{r}
sfeats$x
sfeats$y
```
In a second example we fit a simple linear regression model to the `BostonHousing` (`mlbench::BostonHousing()`) data set and use a sequential search to find a feature set that minimizes the mean squared error [mse](measures.html){target="_blank"}.
``method = "sfs"`` indicates that we want to conduct a sequential forward search where features are added to the model until the performance cannot be improved anymore.
See the documentation page `makeFeatSelControlSequential` (`?FeatSelControl()`) for other available sequential search methods.
The search is stopped if the improvement is smaller than ``alpha = 0.02``.
```{r}
# Specify the search strategy
ctrl = makeFeatSelControlSequential(method = "sfs", alpha = 0.02)
# Select features
rdesc = makeResampleDesc("CV", iters = 10)
sfeats = selectFeatures(learner = "regr.lm", task = bh.task, resampling = rdesc, control = ctrl,
show.info = FALSE)
sfeats
```
Further information about the sequential feature selection process can be obtained by function `analyzeFeatSelResult()`.
```{r}
analyzeFeatSelResult(sfeats)
```
## Fuse a learner with feature selection
A Learner (`makeLearner()`) can be fused with a feature selection strategy (i.e., a search strategy, a performance measure and a resampling strategy) by function `makeFeatSelWrapper()`.
During training features are selected according to the specified selection scheme.
Then, the learner is trained on the selected feature subset.
```{r}
rdesc = makeResampleDesc("CV", iters = 3)
lrn = makeFeatSelWrapper("surv.coxph", resampling = rdesc,
control = makeFeatSelControlRandom(maxit = 10), show.info = FALSE)
mod = train(lrn, task = wpbc.task)
mod
```
The result of the feature selection can be extracted by function `getFeatSelResult()`.
```{r}
sfeats = getFeatSelResult(mod)
sfeats
```
The selected features are:
```{r}
sfeats$x
```
The 5-fold cross-validated performance of the learner specified above can be computed as follows:
```{r}
out.rdesc = makeResampleDesc("CV", iters = 5)
r = resample(learner = lrn, task = wpbc.task, resampling = out.rdesc, models = TRUE,
show.info = FALSE)
r$aggr
```
The selected feature sets in the individual resampling iterations can be extracted as follows:
```{r}
lapply(r$models, getFeatSelResult)
```
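Analogously to the filter example above, we can tabulate how often each feature was selected across the resampling iterations:
```{r}
# Selection frequency of each feature over the 5 folds
feats = unlist(lapply(r$models, function(m) getFeatSelResult(m)$x))
table(feats)
```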
# Feature importance from trained models
Some algorithms internally compute feature importances during training.
By using `getFeatureImportance()` it is possible to extract these values from the trained model.
```{r}
task = makeClassifTask(data = iris, target = "Species")
lrn = makeLearner("classif.ranger", importance = "permutation")
mod = train(lrn, task)
getFeatureImportance(mod)
```
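The returned object stores the importance values in its `$res` slot; a minimal sketch of accessing them (assuming `$res` holds the features and their importances as a `data.frame`):
```{r}
imp = getFeatureImportance(mod)
imp$res
```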