---
title: "Data Wrangling and Model Fitting using purrr"
author: "Dan Ovando"
date: '2018-02-20'
slug: practical-purrr
tags: [R, purrr]
categories: [R, tidyverse]
type: post
---
```{r setup, include=FALSE}
set.seed(123)
knitr::opts_chunk$set(echo = TRUE)
library(purrr)
library(repurrrsive)
library(stringr)
library(ggthemes)
library(gapminder)
library(modelr)
library(extrafont)
library(doParallel)
library(tidyverse)
extrafont::loadfonts(quiet = T)
pres_theme <- theme_fivethirtyeight(
base_size = 16,
base_family = "Arial Narrow"
) +
theme(axis.title = element_text())
theme_set(pres_theme)
```
These are materials from a workshop I taught for UC Santa Barbara's [eco-data-science](https://eco-data-science.github.io/) group to get people familiar with using `purrr` for their data-wrangling and modeling needs. This post covers
1. A general introduction to the workings of `purrr`
2. Using `purrr` to wrangle lists
3. Using `purrr` and `modelr` for data analysis and modeling
The `.Rmd` for this document can be found [here](https://github.com/DanOvando/weird-fishes/blob/master/content/blog/2018-02-13-practical-purrr.Rmd).
Full credit to Jenny Bryan's [excellent `purrr` tutorial](https://jennybc.github.io/purrr-tutorial/) for helping me learn `purrr` and providing the basis for the list-wrangling examples here, along with Hadley Wickham & Garrett Grolemund's [R for Data Science](http://r4ds.had.co.nz). My goal is to walk you through some of the concepts outlined in these (much better) resources, and expand on some particular applications that have been useful to me.
## What is `purrr`?
`purrr` is a part of the `tidyverse`, taking on the tasks accomplished by the `apply` suite of functions in base R (and a whole lot more). Its main ability is applying operations across many dimensions of your data, improving your ability to keep even complex analyses "tidy". At its simplest, it's an alternative to the `apply` family of functions. At its most complex, it lets you easily move around in and manipulate multi-dimensional (and multi-type) data, making, for example, running factorial combinations of models and data a tidy and easy task.
There is a whole suite of functions in `purrr`, but the goal of this tutorial is to get the fundamentals down so that you can start incorporating `purrr` into your own code and explore its higher-level abilities on your own.
## The Basics of `purrr`
To get started, `map` is the workhorse function of the `purrr` package. `map` and `apply` basically take on the tasks of a `for` loop. Suppose we wanted to accomplish the following task:
```{r}
shades <- colors()[1:10]
for (i in seq_along(shades)){
print(shades[i])
}
```
At its core, what we are doing here is applying the function `print` over each of the elements in `shades`.
Rather than use a loop, we could accomplish the same task using `lapply`
```{r}
a <- lapply(shades, print)
```
And lastly, using `map` from `purrr`:
```{r}
a <- map(shades, print)
```
This is obviously a trivial example, but you get the idea: these are three ways of applying a function to a vector/list of things.
<!-- The (main) advantages of using things like `apply` or `map` over a loop in my experience are that they -->
<!-- 1. Allow you to easily escape writing nested for loops (it's much easier to ) -->
<!-- 2. Don't require you to pre-allocate space -->
<!-- 3. Can be easily plugged into other more complex operations (e.g. in a pipe) -->
<!-- 4. Can be easier to read -->
### Key `purrr` verbs
Now that you have an idea of what `map` does, let's dig into it a bit more.
`map` is the workhorse of the `purrr` family. It is essentially `apply`.
- The basic syntax works like this (see the toy call just below):
- `map("list to apply function to", "function to apply across list", "additional parameters")`
Since a dataframe is basically a special case of a list in which all entries have the same number of rows, we can `map` through each element (column in this case) of say `mtcars`
```{r}
map(mtcars, mean, na.rm = T)
```
So you see here we're taking the mean of each element in the dataframe (list) `mtcars`, and passing the additional option `na.rm = T` to the function.
If we save the output of the above run to an object, we see that it is now a list, instead of a dataframe.
```{r}
mtcars_means <- map(mtcars, mean, na.rm = T)
class(mtcars_means)
```
`map` by default returns a list. One nice feature of `map` and `purrr` is that we can specify the kind of output we want.
- `map_TYPE` returns an object of class TYPE, e.g.
- `map_lgl` returns logical objects
- `map_df` returns data frames, etc.
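As a quick illustration (my own toy example, not in the original materials), compare two of the atomic-vector variants:
```{r}
# map_dbl returns a named numeric vector, map_lgl a named logical vector
map_dbl(mtcars, mean)
map_lgl(mtcars, is.numeric)
```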
Specifying the type makes it easier to wrangle different kinds of outputs. Suppose that we want a dataframe of the mean of each column in `mtcars`
```{r}
map_df(mtcars, mean, na.rm = T)
```
`map` can also be extended to deal with multiple input lists
- `map` applies a function over one list.
- `map2` applies a function over matched pairs of elements from two lists, in the form
- `map2(list1, list2, ~function(.x,.y), ...)`
```{r}
map2_chr(c('one','two','red','blue'), c('fish'), paste)
```
In this case, we are mapping `paste` over each pair of elements from these two vectors (the length-one `'fish'` vector gets recycled).
#### Programming with Functions
Notice here that `purrr` guessed what I was trying to do (paste the two elements together). That works for very simple functions, but a lot of the time it won't, and you'll need to be more specific about the function you're using and how the data you pass to it are used. In those cases, we'll want to specify our own functions for use in `purrr`.
`purrr` is designed to help with "functional programming", which you can take broadly as trying to use functions (preferably "pure" ones) to accomplish most of your complex and repetitive tasks (don't copy and paste more than 3 times - H. Wickham).
As a very quick reintroduction to functions:
Functions in R take any number of named inputs, do operations on them, and return the last thing produced inside the function (or whatever you specify using `return`)
```{r}
z <- 10
foo <- function(x,y) {
z <- x + y
return(z)
}
foo(x = 2,y = 3)
z
```
Notice that `z` wasn't affected by the `z` inside the function. Operations inside functions happen in the local environment of that function. "What happens in the function, stays in the function (except what you share via `return`)"
Note though that functions can "see" objects in the global environment
```{r}
a <- 10
foo <- function(x,y) {
z <- x + y + a
return(z)
}
foo(x = 2, y = 3)
```
I **strongly** recommend you avoid using global variables inside functions: it can very easily cause unintended and sneaky behavior (I've been burned before).
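Here's a quick sketch (mine, not from the original) of the kind of sneaky behavior I mean: `foo`'s answer silently changes when the global `a` changes, even though `foo` itself hasn't been touched.
```{r}
# foo depends on the global `a`, so its output shifts when `a` does
a <- 10
foo(x = 2, y = 3) # 15
a <- -10
foo(x = 2, y = 3) # -5, same call, different answer
a <- 10 # reset so later code isn't affected
```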
Functions can be as complicated as you want them to be, but a good rule is to try and make sure each function is good at doing *one* thing. That doesn't mean that that "one thing" can't be a complex thing (e.g. a population model), but the objective of the function should be to do that one thing well (produce trajectories of biomass, not biomass, diagnostics, and a knitted report all in one function).
You can also use "anonymous" functions in `purrr`. This is basically a shortcut for when you don't want to take up the space of writing and saving a whole function somewhere. You make anonymous functions with `~`
Say we want to be more specific about our call to `paste` from the above example. We could use `~` to write
```{r}
map2_chr(c('one','two','red','blue'), c('fish'), ~paste(.x,.y))
```
I've used `~` to define a one-sided formula on the fly, which is handy for simple things like this. We'll see later how to write and use longer functions here. Note that by default the first argument passed to `map` is identified by `.x`, and the second by `.y`.
We can also write custom functions for use with `map`. Say you want the coefficient of variation (standard deviation divided by the mean) of each of the variables in `mtcars`
```{r}
cvfoo <- function(x){
  sd(x) / mean(x)
}
```
This can be accomplished using
```{r}
map(mtcars, cvfoo)
```
### Multiple lists using `pmap`
Anything above two lists is handled by `pmap`. `pmap` allows you to pass an arbitrary number of objects to the function: you supply them as a (named) list, and the function is applied to the matching elements of each object, along the lines of
```
pmap(list(list1, list2, list3), function(list1, list2, list3) {...}, ...)
```
```{r}
dmonds <- diamonds %>%
slice(1:4)
pmap_foo <- function(list1, list2 , list3){
paste0("Diamond #", list1, " sold for $", list2," and was ", list3, " carats")
}
pmap(list(list1 = 1:nrow(dmonds), list2 = dmonds$price, list3 = dmonds$carat), pmap_foo)
```
To bring all this together, here are three ways of doing the exact same thing: checking to see whether both of the columns in a row are NA (note this is not the most efficient way to do this task; it's just to show how to use `map2` and `pmap`, anonymous functions, and custom functions)
```{r}
x <- c(1,1,NA,NA)
y <- c(1, NA, 1, NA)
z <- data_frame(x = x, y = y)
```
```{r}
z %>%
mutate(both_na = map2_lgl(x,y, ~ is.na(.x) & is.na(.y)))
```
```{r}
nafoo <- function(x,y){
  is.na(x) & is.na(y)
}
z %>%
mutate(both_na = map2_lgl(x,y,nafoo))
```
```{r}
z %>%
mutate(both_na = pmap_lgl(list(x = x,y = y), nafoo))
```
## Wrangling lists
Now that we have an idea of how we can use `purrr`, let's get back to actually using `purrr` in practice.
Lists are powerful objects that allow you to store all kinds of information in one place.
They can also be a pain to deal with, since we are no longer in the nice 2-D structure of a traditional dataframe, which is much closer to how most of us probably learned to deal with data.
`purrr` has lots of useful tools for helping you quickly and efficiently poke around inside lists. Let's start with the Game of Thrones database in the `repurrrsive` package (thanks again to [Jenny Bryan](https://github.com/jennybc/repurrrsive)). `got_chars` is a list containing a bunch of information on GoT characters with a "point of view" chapter in the first few books.
The `str` function is a great way to get a first glimpse at a list's structure
```{r}
str(got_chars, list.len = 3)
```
For those of you who prefer a more interactive approach, you can also use the `jsonedit` function in the `listviewer` package if you're working in a notebook or an html document
```{r}
listviewer::jsonedit(got_chars)
```
So, how do we start poking around in this database?
Suppose we wanted only the first 5 characters in the list
```{r}
got_chars[1:5] %>%
str(max.level = 1)
```
Now, suppose that we just want to look at the names of the first 5 characters. Who remembers how to do this in base R?
You might think that `got_chars[[1:5]]$name` would do the trick...
```{r, error = T}
got_chars[[1:5]]$name
```
Nope.
So we could do
```{r}
names <- vector(mode = "character",5)
for (i in 1:5){
names[i] <- got_chars[[i]]$name
}
names
```
That works, but it's certainly not ideal.
Enter `purrr`
```{r}
got_chars[1:5] %>%
map_chr('name')
```
Nice! `map` figures out that when you pass it a string like this, you're looking for the list entry with that name. I actually find some of these "helpers" a bit confusing when you're learning, since they play by slightly different rules than `purrr` functions usually do. Case in point: given the above example, how do we think we might get the 'name' and 'allegiances' entries?
```{r}
got_chars[1:5] %>%
map(c('name','allegiances'))
```
Huh, why didn't that work? Passing a character vector to `map` actually tells `purrr` to dive down into recursive layers of a list (we'll see this next). In this case, if I want to extract the "name" and "allegiances" variables, I can use `[`
```{r}
got_chars[1:5] %>%
map(`[`, c('name', 'allegiances'))
```
In this case, `[` tells `map` to use the single-bracket subsetting operator `[` as the function to apply.
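To make that concrete, here's a small illustration of my own (not from the original tutorial): `[` is just the usual single-bracket subsetting, used as a regular function.
```{r}
# these two lines do the same thing for the first character
got_chars[[1]][c('name', 'allegiances')]
`[`(got_chars[[1]], c('name', 'allegiances'))
```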
Let's say you've got a list that goes a little deeper. Suppose that we want to extract element `w` from the list `thing` as characters. We can spell out the path that we want `purrr` to follow using `c("z","w")`, which tells `purrr` to first go to `z`, then to element `w` (which is inside `z`), and return `w`.
```{r}
thing <- list(list(y = 2, z = list(w = 'hello')),
list(y = 2, z = list(w = 'world')))
map_chr(thing, c('z','w'))
```
If you're like me, the numeric indexing of each of the entries is currently driving you nuts: I'd rather have each element in the list be named by the character it refers to. You can use `set_names` to accomplish this
```{r}
got_chars[1:5] %>%
rlang::set_names(map_chr(.,'name')) %>%
listviewer::jsonedit()
```
Much better (remember that `.` refers to the object passed to the function through the pipe)!
Now, let's say that I want to get all the Lannisters, so I can see which people to root against (or for, if that's your jam).
This is where a lot of the power of `purrr` starts to come in, allowing you to easily apply functions across nested layers of a list
```{r}
got_chars %>%
set_names(map_chr(.,'name')) %>%
map(`[`,c('name','allegiances')) %>%
purrr::keep(~stringr::str_detect(.$name, 'Lannister')) %>%
listviewer::jsonedit()
```
Now, suppose that we want anyone who's allied with the Starks
```{r}
got_chars[1:4] %>%
set_names(map_chr(.,'name')) %>%
map(`[`,c('name','allegiances')) %>%
map(~str_detect(.$allegiances, 'Stark'))
```
Hmmm, that doesn't look good; what's up with Will? What happens if I try to use `keep` (the list equivalent of `filter`) here?
```{r, eval = F,error = T}
got_chars %>%
set_names(map_chr(.,'name')) %>%
map(`[`,c('name','allegiances')) %>%
keep(~str_detect(.$allegiances, 'Stark'))
```
Nope, that still doesn't work. What's going on? The problem here is that our friend Will has no allegiances, and worse yet, the allegiances entry doesn't say "none": it's just an empty array. Here's one way to solve this
```{r}
got_chars %>%
set_names(map_chr(.,'name')) %>%
map(`[`,c('name','allegiances')) %>%
keep(~ifelse(length(.$allegiances) > 0, str_detect(.$allegiances, 'Stark'),FALSE)) %>%
listviewer::jsonedit()
```
There's almost certainly a better way, but this shows that things get a little more complicated when you're trying to apply functions across list objects; things like dimensions, types, and NULLs can cause problems. If I'm trying something new, I'll usually develop the method on a subset of the list that I know is "ideal", make sure it works there, and then try the operation on progressively more complicated lists. That lets me separate errors in my functions from problems reading in "odd" data types.
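As a sketch of that workflow (my own example, not in the original): grab a subset you know is well-behaved, prototype the pipeline on it, and only then run it on the full, messier list.
```{r}
# prototype on a few characters that definitely have allegiances
well_behaved <- got_chars %>%
  keep(~ length(.$allegiances) > 0) %>%
  head(3)
well_behaved %>%
  map(`[`, c('name', 'allegiances')) %>%
  map_lgl(~ any(str_detect(.$allegiances, 'Stark')))
```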
As Cersei likes to remind us, anyone who's not a Lannister is an enemy to the Lannisters. Let's look at all the POV characters that aren't allied to the Lannisters
```{r}
got_chars %>%
set_names(map_chr(.,'name')) %>%
map(`[`,c('name','allegiances')) %>%
discard(~ifelse(length(.$allegiances) > 0, str_detect(.$allegiances, 'Lannister'),FALSE)) %>%
listviewer::jsonedit()
```
You can also use `map` together with your own custom functions. Suppose we wanted to figure out how many aliases and allegiances each character has, as well as where they were born. We can use `pmap` to apply a function over each of these attributes
```{r}
# one way: keep just the fields we want for each character
got_list <- got_chars %>%
  map(`[`, c('name','aliases','allegiances','born'))
# but pmap wants a list of parallel vectors (one per function argument), so build got_list this way instead
got_list <- got_chars %>% {
  list(
    name = map_chr(.,'name'),
    aliases = map(.,'aliases'),
    allegiances = map(.,'allegiances'),
    born = map_chr(.,'born')
  )
}
str(got_list, list.len = 3)
got_foo <- function(name, aliases, allegiances,born){
paste(name, 'has', length(aliases), 'aliases and', length(allegiances),
'allegiances, and was born in', born)
}
got_list %>%
pmap_chr(got_foo) %>%
head()
```
Things obviously get a lot more complicated than this, but hopefully that gives you an idea of how to manipulate lists using `purrr`
## Analysis with `purrr` and `modelr`
So far, `purrr` has basically helped us apply functions across and poke around in lists. That's nice, but its real power comes in helping with analysis. Let's look at the `gapminder` data set
```{r}
head(gapminder)
```
`gapminder` provides data on life expectancy, economics, and population for countries across the world.
```{r, echo = F}
gapminder::gapminder %>%
ggplot(aes(year, lifeExp, color = country)) +
geom_line(show.legend = F) +
facet_wrap(~continent) +
labs(title = "Life expectancy across continents")
```
Now, suppose we want to build up a model trying to predict life expectancy as a function of covariates, starting with a simple one: life expectancy as a function of population and per capita GDP
```{r}
gapminder <- gapminder %>%
set_names(colnames(.) %>% tolower())
life_mod <- lm(lifeexp ~ pop + gdppercap, data = gapminder)
```
```{r, results = 'asis', echo = F}
stargazer::stargazer(life_mod, type = 'html')
```
So now we have a *very* simple model, but how do we know if this is the model we want to use? Let's use AIC to compare a few different model structures (note, this is not an endorsement for AIC mining!)
```{r}
models <- list(
simple = 'lifeexp ~ pop + gdppercap',
medium = 'lifeexp ~ pop + gdppercap + continent + year',
more = 'lifeexp ~ pop + gdppercap + country + year',
woah = 'lifeexp ~ pop + gdppercap + year*country'
)
```
Now, since this is a simple four-model example, we could just use a loop, or even copy and paste a few times. But let's see how we can use `purrr` to help us do some diagnostics on these models.
Let's start by getting our models and data into a data frame, using list-columns
```{r}
model_frame <- data_frame(model = models) %>%
mutate(model_name = names(model))
```
Now, let's use `purrr` to convert each of these character strings into a formula
```{r}
model_frame <- model_frame %>%
mutate(model = map(model, as.formula))
model_frame
```
```{r, eval = F}
model_frame <- model_frame %>%
mutate(fit = lm(model, data = gapminder))
```
Hmmm, why didn't that work? `mutate` by itself doesn't know how to evaluate this, but `map` can help us out
```{r}
model_frame <- model_frame %>%
  mutate(fit = map(model, ~lm(., data = gapminder)))
model_frame
```
We're now going to start integrating some methods from the `modelr` package to diagnose our regression
```{r}
model_frame <- model_frame %>%
mutate(r2 = map_dbl(fit, ~modelr::rsquare(., data = gapminder)),
aic = map_dbl(fit, ~AIC(.))) %>%
arrange(aic)
model_frame
```
So, AIC tells us that our most complex model is still the preferred one (of the ones we've explored here). Let's dig into this a bit further by explicitly testing the out-of-sample predictive ability of each of the models. "Overfit" models are commonly really good at describing the data that they are fit to, but perform poorly out of sample.
We'll start by using the `modelr` package to create a bunch of training-test combination data sets using 10-fold cross validation.
```{r}
validate <- gapminder %>%
rsample::vfold_cv(10)
test_data <- list(test_training = list(validate), model_name = model_frame$model_name)
test_data <- cross_df(test_data) %>%
unnest(.id = "model_number") %>%
left_join(model_frame %>% select(model_name, model, fit), by = "model_name")
test_data
```
In a few lines of code, we now have a "tidy" cross-validation routine across multiple models. Not bad.
```{r}
test_data <- test_data %>%
mutate(fit = map2(model, splits, ~lm(.x, data = rsample::analysis(.y)))) %>%
mutate(root_mean_sq_error = map2_dbl(fit, splits, ~modelr::rmse(.x,rsample::assessment(.y))))
```
```{r}
test_data %>%
ggplot(aes(root_mean_sq_error, fill = model_name)) +
geom_density(alpha = 0.75) +
labs(x = "Root Mean Squared Error", title = "Cross-validated distribution of RMSE")
```
Judging by out-of-sample RMSE, the most complicated model (`woah`) is still our best choice. And just like that, in a few lines of code we've used `modelr` and `purrr` to easily compare a number of different model structures.
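To see that at a glance, here's a quick summary of my own (not in the original post) of the cross-validated RMSE by model:
```{r}
# average RMSE across folds, sorted from best to worst
test_data %>%
  group_by(model_name) %>%
  summarise(mean_rmse = mean(root_mean_sq_error)) %>%
  arrange(mean_rmse)
```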
Out-of-sample RMSE is a useful metric, but there are lots of other diagnostics we might want to run. Suppose that we want to examine the fitted vs. residuals plots for each model
```{r}
gen_fit_v_resid <- function(model){
  broom::augment(model)
}
test_data <- test_data %>%
  mutate(aug_model = map(fit, gen_fit_v_resid))
fit_plot <- test_data %>%
select(model_name,aug_model) %>%
unnest() %>%
ggplot(aes(.fitted, .resid)) +
geom_hline(aes(yintercept = 0), linetype = 2, color = "red") +
geom_point(alpha = 0.5) +
facet_wrap(~model_name, scales = "free_y")
fit_plot
```
<!-- Examining our "best" model more carefully though, we see we have some problems -->
<!-- ```{r} -->
<!-- qq_plot <- test_data %>% -->
<!-- filter(model_name == "woah") %>% -->
<!-- slice(1) %>% -->
<!-- { -->
<!-- .$fit[[1]] -->
<!-- } %>% -->
<!-- broom::augment(.) %>% -->
<!-- ggplot(aes(sample = .resid)) + -->
<!-- stat_qq() + -->
<!-- stat_qq_line(color = "red", linetype = 2) + -->
<!-- labs(y = "Deviance Residuals", title = "Normal QQ plot") -->
<!-- qq_plot -->
<!-- ``` -->
Hmmm, that doesn't look good (ideally the residuals would scatter evenly around the red dashed zero line, with no obvious pattern); we clearly need to spend more time on model specification (AKA this very simple model is, surprise surprise, not a good way to model life expectancy). But now we see how we can use `purrr` and `modelr` to easily construct and compare numerous new model hypotheses in our hunt for the best one.
## Parallel `purrr`
The nature of `purrr` really lends itself to parallel processing. At its core, `purrr` is doing a "split-apply-combine" routine, meaning that for most use cases you have a bunch of independent operations that you need your computer to run (i.e., the results of one step in a `map` call do not affect the next step). This means that if you want to speed things up, you can farm those processes out to different cores on your computer. For example, if you ran an operation in parallel on four cores, you could in theory run four tasks in about the time it takes to run one task normally (it's not quite as linear as that due to startup and maintenance costs).
**WARNING**: running things in parallel can get complicated across different platforms (especially moving from Linux/OS X to Windows). Be prepared to do some work on this.
As of now, `purrr` does not have built in parallel functionality (though it may be [in the works](https://github.com/tidyverse/purrr/issues/121)). There are a few options out there though.
One is to simply step outside of `purrr` for a moment: one of the great things about the open-source world is finding the right package for the right problem, and in this case there are other options that work great.
My preferred solution to date has been the `foreach` and `doParallel` packages. The nice thing is that once you've formatted your data and models to be run through `purrr` (i.e. made things tidy), you're already set up to use these tools
```{r, eval = F}
n_cores <- floor(parallel::detectCores()/2)
doParallel::registerDoParallel(cores = n_cores)
fits <- foreach::foreach(i = 1:nrow(test_data)) %dopar% {
  lm(test_data$model[[i]], data = rsample::analysis(test_data$splits[[i]]))
}
test_data$fit <- fits
```
There is also a new package called [`furrr`](https://davisvaughan.github.io/furrr/) which looks really promising. This allows you to run things in parallel by simply setting up a cluster and appending `future_` to your `map` call (leveraging the [`future`](https://github.com/HenrikBengtsson/future) package).
```{r}
library(furrr)
future::plan(multiprocess(workers = 1))
start <- Sys.time()
test_data <- test_data %>%
mutate(par_fits = future_map2(model, splits, ~lm(.x, data = rsample::analysis(.y)),.progress = T)
)
Sys.time() - start
```
## Miscellaneous `purrr`
That's a broad tour of the key features of `purrr`. Here are a few more examples of miscellaneous things you can do with `purrr`
### Debugging using `safely`
One annoying thing about using `map` (or `apply`) in place of loops is that it can make debugging much harder to deal with. With a loop, it's easy to see where exactly an error occurred and your loop failed (e.g. look at the index of the loop when the error occurred). With `map`, it can be much harder to figure out where the problem is, especially if you have a very large list that you're mapping over.
The `safely` function lets us solve this problem.
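Before the fuller example, here's a minimal sketch of my own of what `safely` gives you: a list with a `result` and an `error` element, exactly one of which is `NULL`.
```{r}
safe_log <- safely(log)
safe_log(10)  # $result holds the answer, $error is NULL
safe_log("a") # $result is NULL, $error holds the error that log() threw
```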
Suppose that you've got a bunch of csv's of fish lengths from a field site. The field techs are supposed to enter the length in one column, and the units in a second column. As tends to happen though, some techs put the units next to the length (e.g. 26cm), instead of in separate `lengths` and `units` columns. Suppose then that we want to pull in our lengths and log transform them, since we suspect that the lengths are log-normally distributed and we'd like to run an OLS regression on them.
To simulate our data...
```{r}
fish_foo <- function() {
bad_tech <- ifelse(runif(1, 0, 10) > 2, FALSE, TRUE)
if (bad_tech == F) {
lengths <- rnorm(10, 25, 5) %>% signif(3)
units <- "cm"
} else {
lengths <- paste0(rnorm(10, 25, 5) %>% signif(3), "cm")
units <- "what's this column for?"
}
out <- data_frame(lengths = lengths, units = units)
return(out)
}
length_data <- rerun(100, fish_foo()) %>%
set_names(paste0("tech", 1:100))
listviewer::jsonedit(length_data)
```
Our goal is to put all of these observations together, log transform the lengths, and plot. Using `map`, we know that we can use
```{r, eval = F}
length_data %>%
map(~log(.x$lengths))
```
Yep, that doesn't work, since `map` hit an error somewhere in there (it doesn't know how to take the log of `"25cm"`). Now, we could go through all 100 entries and see which ones are bad, or concatenate them earlier and look for NAs after conversion to numeric, but let's see how we can use `purrr` to deal with this.
```{r}
safe_log <- safely(log)
diagnose_length <- length_data %>%
map(~safe_log(.$lengths))
head(diagnose_length,2)
```
Great, now we at least have something to work with. `safely` gives us two objects per entry: the data if it worked, and a log of the error messages if it didn't.
How do we figure out which techs are the problem? We can use `map_lgl` to help us out here
```{r}
bad_lengths <- map_lgl(diagnose_length, ~is.null(.x$error) == F)
bad_techs <- diagnose_length %>%
keep(bad_lengths)
names(bad_techs)
```
I leave it to your imagination to think of how to resolve this problem, but at least we now know where the problem is. One strategy is to use the handy `possibly` function from `purrr`. This basically says try a function and return its value if it works, otherwise return something else.
```{r}
possibly_log <- possibly(log, otherwise = NA)
diagnose_length <- length_data %>%
map(~possibly_log(.$lengths))
listviewer::jsonedit(diagnose_length)
```
### Find and Convert Factors
Factors can creep into your data, which can sometimes cause problems. There are lots of ways to solve this, but you can use `purrr` to efficiently check for factors and convert them to characters in your data frame.
Let's take a look at the `gapminder` dataset, from the `gapminder` package.
```{r}
gapminder
```
Yep, look at that, country and continent are both factors. Useful for regression, but a little dangerous to have in your raw data.
We can use `purrr` to find all the factors in our data
```{r}
gapminder %>%
map_lgl(is.factor)
```
And to convert each column that is a factor into a character, we could try `map_if`, which applies a conditional statement to each element in the list, and applies the function if the test is passed
```{r}
gapminder %>%
map_if(is.factor, as.character) %>%
str(2)
```
Huh, well that worked, but something is weird: our nice `gapminder` dataframe is now a list. How can we do this and keep things as a dataframe? We can use the `purrrlyr` package, which has handy parallels of the functions in `purrr` that are designed to deal with and give back dataframes.
```{r}
gapminder %>%
purrrlyr::dmap_if(is.factor, as.character) %>%
head()
```
Much better, those pesky factors are now characters and we still have a dataframe.
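Depending on your version of `purrr`, `modify_if` may be another option worth knowing about (a sketch of my own, not from the original): the `modify` family always returns an object of the same type as its input, so the tibble stays a tibble.
```{r}
gapminder %>%
  purrr::modify_if(is.factor, as.character) %>%
  head()
```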
### Print All Plots to PDF
Suppose you've got a large project and want to save (or print) all the plots. This often leads to a lot of copy and pasting of save commands.
Here's another solution, using `walk`. Remember, the `walk` family works the same way as the `map` family, but is called for a function's "side effects" (e.g. saving objects to a pdf) rather than for what it returns.
I usually tag all my ggplot objects that I want to save with `_plot`
```{r}
life_v_money_plot <- gapminder %>%
ggplot(aes(gdppercap, lifeexp)) +
geom_abline(aes(slope = 1, intercept = 0)) +
geom_point() +
geom_smooth(method = 'lm')
life_v_money_plot
```
```{r}
life_v_time_plot <- gapminder %>%
ggplot(aes(year, lifeexp)) +
geom_point() +
geom_smooth(method = 'lm')
life_v_time_plot
```
What if I want to save both of these plots?
```{r}
plot_files <- ls()[ str_detect(ls(), '_plot')]
plot_foo <- function(x){
ggsave(paste0(x,'.pdf'), get(x), device = cairo_pdf)
}
walk(plot_files, plot_foo)
```
And just like that, I've saved PDFs of all of my plots.
### Partial
I just really like this one. Suppose you've got something that you are copy and pasting a lot, like getting interquartile range of something.
```{r}
gapminder %>%
summarise(
mean_gdp = mean(gdppercap),
lower_gdp = quantile(gdppercap, 0.25),
upper_gdp = quantile(gdppercap, 0.75),
mean_life = mean(lifeexp),
lower_life = quantile(lifeexp, 0.25),
upper_life = quantile(lifeexp, 0.75)
)
```
Works, and in this case not hard, but still annoying to retype!
```{r}
lower = partial(quantile, probs = 0.25)
upper = partial(quantile, probs = 0.75)
gapminder %>%
summarise(
mean_gdp = mean(gdppercap),
lower_gdp = lower(gdppercap),
upper_gdp = upper(gdppercap),
mean_life = mean(lifeexp),
lower_life = lower(lifeexp),
upper_life = upper(lifeexp)
)
```
And that's about it. Hopefully this helps you get started incorporating `purrr` into your programming.