---
title: "Recipes with rsample"
vignette: >
  %\VignetteEngine{knitr::rmarkdown}
  %\VignetteIndexEntry{Recipes with rsample}
output:
  knitr:::html_vignette:
    toc: yes
---

```{r setup, include = FALSE}
knitr::opts_chunk$set(
  eval = rlang::is_installed("ggplot2") && rlang::is_installed("modeldata")
)
options(digits = 3)
library(rsample)
library(recipes)
library(purrr)
```
The [recipes](https://recipes.tidymodels.org/) package contains a data preprocessor that can be used to avoid the potentially expensive formula methods as well as to provide a richer set of data manipulation tools than base R. This document uses version `r packageDescription("recipes")$Version` of recipes.

In many cases, the preprocessing steps might contain quantities that require statistical estimation of parameters, such as:

* signal extraction using principal component analysis
* imputation of missing values
* transformations of individual variables (e.g. Box-Cox transformations)
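
Each of these corresponds to a recipe step whose parameters are estimated from the training data. As a schematic sketch only (not evaluated; `outcome` and `training_data` are placeholder names, and these particular steps are not used later in this document):

```{r estimated-steps, eval = FALSE}
# Schematic only: every step below estimates parameters from the data
recipe(outcome ~ ., data = training_data) %>%
  step_impute_mean(all_numeric_predictors()) %>%   # per-column means
  step_BoxCox(all_numeric_predictors()) %>%        # per-column lambda values
  step_pca(all_numeric_predictors(), num_comp = 3) # component loadings
```
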
It is critical that any complex preprocessing steps be contained _inside_ of resampling so that the model performance estimates take into account the variability of these steps.

Before discussing how rsample can use recipes, let's look at an example recipe for the Ames housing data.

## An Example Recipe

For illustration, the Ames housing data will be used. There are sale prices of homes along with various other descriptors for the property:
```{r ames-data, message=FALSE}
data(ames, package = "modeldata")
```
Suppose that we will again fit a simple regression model with the formula:
```{r form, eval = FALSE}
log10(Sale_Price) ~ Neighborhood + House_Style + Year_Sold + Lot_Area
```
The distribution of the lot size is right-skewed:
```{r build, fig.alt = "The histogram of the lot size, showing a long right tail of the empirical distribution."}
library(ggplot2)
theme_set(theme_bw())
ggplot(ames, aes(x = Lot_Area)) +
  geom_histogram(binwidth = 5000, col = "red", fill = "red", alpha = .5)
```
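
A numeric summary makes the same point:

```{r lot-area-summary}
# Numeric summary of the skewness seen in the histogram
summary(ames$Lot_Area)
```
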
It might benefit the model if we estimate a transformation of the data using the Box-Cox procedure.

Also, note that the frequencies of the neighborhoods can vary:
```{r hood, fig.alt = "A bar plot of the counts of different neighborhoods, with the frequencies ranging from over 400 observations of North_Ames to only a handful for Green_Hills and Landmark."}
ggplot(ames, aes(x = Neighborhood)) + geom_bar() + coord_flip() + xlab("")
```
When these are resampled, some neighborhoods will not be included in the test set and this will result in a column of dummy variables with zero entries. The same is true for the `House_Style` variable. We might want to collapse rarely occurring values into "other" categories.
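
As a quick illustrative check, we can count how many levels of each variable fall below the 5% frequency threshold that `step_other()` will use below:

```{r rare-levels}
# Levels below the 5% frequency threshold used by step_other() in the recipe
sum(prop.table(table(ames$Neighborhood)) < 0.05)
sum(prop.table(table(ames$House_Style)) < 0.05)
```
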
To define the design matrix, an initial recipe is created:
```{r rec_setup, message=FALSE, warning=FALSE}
library(recipes)

# Apply log10 transformation outside the recipe
# https://www.tmwr.org/recipes.html#skip-equals-true
ames <-
  ames %>%
  mutate(Sale_Price = log10(Sale_Price))

rec <-
  recipe(Sale_Price ~ Neighborhood + House_Style + Year_Sold + Lot_Area,
         data = ames) %>%
  # Collapse rarely occurring factor levels into "other"
  step_other(Neighborhood, House_Style, threshold = 0.05) %>%
  # Dummy variables on the qualitative predictors
  step_dummy(all_nominal()) %>%
  # Unskew a predictor
  step_BoxCox(Lot_Area) %>%
  # Normalize
  step_center(all_predictors()) %>%
  step_scale(all_predictors())

rec
```
This recreates the work that the formula method traditionally does, along with the additional preprocessing steps.

While the original data object `ames` is used in the call, it is only used to define the variables and their characteristics, so a single recipe is valid across all resampled versions of the data. The recipe can be estimated on the analysis component of each resample.

If we execute the recipe on the entire data set:
```{r recipe-all}
rec_training_set <- prep(rec, training = ames)
rec_training_set
```
To get the values of the data, the `bake()` function can be used:
```{r baked}
# By default, the selector `everything()` is used to
# return all the variables. Other selectors can be used too.
bake(rec_training_set, new_data = head(ames))
```
Note that there are fewer dummy variables for `Neighborhood` and `House_Style` than in the data.
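
One way to see this (an illustrative check) is to compare the number of levels in the raw data to the number of dummy variable columns that the recipe produces:

```{r dummy-count}
# Raw neighborhood levels versus Neighborhood dummy columns from the recipe
length(levels(ames$Neighborhood))
sum(grepl("^Neighborhood_", names(bake(rec_training_set, new_data = head(ames)))))
```
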
Also, the above code using `prep()` benefits from the default argument of `retain = TRUE`, which keeps the processed version of the data set so that we don't have to reapply the steps to extract the processed values. For the data used to train the recipe, we would have used:
```{r juiced}
bake(rec_training_set, new_data = NULL) %>% head
```
The next section will explore recipes and bootstrap resampling for modeling:
```{r boot}
library(rsample)
set.seed(7712)
bt_samples <- bootstraps(ames)
bt_samples
bt_samples$splits[[1]]
```
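
For orientation, each split contains an analysis set (the bootstrap sample) and an assessment set (the out-of-bag rows):

```{r split-dims}
# Dimensions of the analysis and assessment sets for the first resample
dim(analysis(bt_samples$splits[[1]]))
dim(assessment(bt_samples$splits[[1]]))
```
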
## Working with Resamples
We can add a recipe column to the tibble. recipes has a convenience function called `prepper()` that calls `prep()` but takes the split object as its first argument (for easier purrring):
```{r col-pred}
library(purrr)
bt_samples$recipes <- map(bt_samples$splits, prepper, recipe = rec)
bt_samples
bt_samples$recipes[[1]]
```
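
Under the hood, `prepper()` is roughly equivalent to calling `prep()` on the analysis set of each split. For the first resample, a manual version would look something like this (shown only for illustration):

```{r prepper-manual}
# Roughly what prepper() did for the first resample
manual_rec <- prep(rec, training = analysis(bt_samples$splits[[1]]))
```
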
Now, to fit the model, the fit function only needs the recipe as input. This is because the above code implicitly used the default `retain = TRUE` option in `prep()`. Otherwise, the split objects would also be needed to `bake()` the recipe (as they are in the prediction function below).
```{r cols-fit}
fit_lm <- function(rec_obj, ...)
  lm(..., data = bake(rec_obj, new_data = NULL, everything()))

bt_samples$lm_mod <-
  map(
    bt_samples$recipes,
    fit_lm,
    Sale_Price ~ .
  )

bt_samples
```
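
Had the processed data not been retained (for example, if `prep()` had been used with `retain = FALSE`), a fitting function would also need the split object so that the analysis set could be baked first. A hypothetical sketch (not run here; `fit_lm_alt` is not used elsewhere):

```{r cols-fit-alt, eval = FALSE}
# Hypothetical alternative when the processed data are not retained:
# bake() the recipe on the split's analysis set before fitting
fit_lm_alt <- function(split_obj, rec_obj, ...) {
  lm(..., data = bake(rec_obj, new_data = analysis(split_obj)))
}
```
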
To get predictions, the function needs three arguments: the splits (to get the assessment data), the recipe (to process them), and the model. To iterate over these, the function `purrr::pmap()` is used:
```{r cols-pred}
pred_lm <- function(split_obj, rec_obj, model_obj, ...) {
  mod_data <- bake(
    rec_obj,
    new_data = assessment(split_obj),
    all_predictors(),
    all_outcomes()
  )

  out <- mod_data %>% select(Sale_Price)
  out$predicted <- predict(model_obj, newdata = mod_data %>% select(-Sale_Price))
  out
}

bt_samples$pred <-
  pmap(
    lst(
      split_obj = bt_samples$splits,
      rec_obj = bt_samples$recipes,
      model_obj = bt_samples$lm_mod
    ),
    pred_lm
  )

bt_samples
```
Calculating the RMSE:
```{r cols-rmse}
library(yardstick)
results <- map(bt_samples$pred, rmse, Sale_Price, predicted) %>% list_rbind()
results
mean(results$.estimate)
```
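
Since each bootstrap resample produces its own RMSE value, it can also be useful to look at the spread of these estimates:

```{r rmse-spread}
# Distribution of the per-resample RMSE estimates
summary(results$.estimate)
```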