Read flat files (csv, tsv, fwf) into R




The goal of readr is to provide a fast and friendly way to read rectangular data (like csv, tsv, and fwf). It is designed to flexibly parse many types of data found in the wild, while still cleanly failing when data unexpectedly changes. If you are new to readr, the best place to start is the data import chapter in R for data science.


Installation

# The easiest way to get readr is to install the whole tidyverse:
install.packages("tidyverse")

# Alternatively, install just readr:
install.packages("readr")

# Or the development version from GitHub:
# install.packages("devtools")
devtools::install_github("tidyverse/readr")



Usage

readr is part of the core tidyverse, so load it with:

library(tidyverse)
#> ── Attaching packages ────────────────────────────────── tidyverse 1.2.1 ──
#> ✔ ggplot2 2.2.1          ✔ purrr   0.2.4     
#> ✔ tibble  1.3.4          ✔ dplyr   0.7.4     
#> ✔ tidyr   0.7.2          ✔ stringr 1.2.0     
#> ✔ readr     ✔ forcats 0.2.0
#> ── Conflicts ───────────────────────────────────── tidyverse_conflicts() ──
#> ✖ dplyr::filter() masks stats::filter()
#> ✖ dplyr::lag()    masks stats::lag()

To accurately read a rectangular dataset with readr you combine two pieces: a function that parses the overall file, and a column specification. The column specification describes how each column should be converted from a character vector to the most appropriate data type, and in most cases it's not necessary because readr will guess it for you automatically.
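As a small illustration (assuming readr is attached), you can read a little inline CSV — readr treats a string containing a newline as literal data rather than a path — and inspect the guessed specification with spec():

```r
library(readr)

# A string containing a newline is treated as literal data, not a file path
df <- read_csv("x,y\n1,a\n2,b")

# spec() returns the column specification readr guessed while reading
spec(df)
```

Here x is guessed as a numeric column and y as character.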

readr supports seven file formats with seven read_ functions:

  • read_csv(): comma separated (CSV) files
  • read_csv2(): semicolon separated files (common in countries where , is used as the decimal place)
  • read_tsv(): tab separated files
  • read_delim(): general delimited files
  • read_fwf(): fixed width files
  • read_table(): tabular files where columns are separated by white-space
  • read_log(): web log files
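For delimiters without a dedicated function, read_delim() covers any single-character separator. A minimal sketch, with an inline string standing in for a file:

```r
library(readr)

# A pipe-delimited "file" supplied as a literal string
df <- read_delim("a|b\n1|2\n3|4", delim = "|")
```

read_csv() and read_tsv() are essentially read_delim() with the delimiter fixed in advance.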

In many cases, these functions will just work: you supply the path to a file and you get a tibble back. The following example loads a sample file bundled with readr:

mtcars <- read_csv(readr_example("mtcars.csv"))
#> Parsed with column specification:
#> cols(
#>   mpg = col_double(),
#>   cyl = col_double(),
#>   disp = col_double(),
#>   hp = col_double(),
#>   drat = col_double(),
#>   wt = col_double(),
#>   qsec = col_double(),
#>   vs = col_double(),
#>   am = col_double(),
#>   gear = col_double(),
#>   carb = col_double()
#> )

Note that readr prints the column specification. This is useful because it allows you to check that the columns have been read in as you expect, and if they haven't, you can easily copy and paste the specification into a new call:

mtcars <- read_csv(readr_example("mtcars.csv"), col_types =
  cols(
    mpg = col_double(),
    cyl = col_integer(),
    disp = col_double(),
    hp = col_integer(),
    drat = col_double(),
    vs = col_integer(),
    wt = col_double(),
    qsec = col_double(),
    am = col_integer(),
    gear = col_integer(),
    carb = col_integer()
  )
)
vignette("readr") gives more detail on how readr guesses the column types, how you can override the defaults, and provides some useful tools for debugging parsing problems.
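One of those debugging tools is problems(): when a value fails to parse, readr substitutes NA and records the failure instead of stopping. A small sketch:

```r
library(readr)

# Forcing x to integer makes the last value fail to parse;
# readr warns, stores NA, and records the failure
df <- read_csv("x\n1\n2\nbanana", col_types = cols(x = col_integer()))

# problems() returns a tibble with one row per parsing failure
problems(df)
```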


Alternatives

There are two main alternatives to readr: base R and data.table's fread(). The most important differences are discussed below.

Base R

Compared to the corresponding base functions, readr functions:

  • Use a consistent naming scheme for the parameters (e.g. col_names and col_types not header and colClasses).

  • Are much faster (up to 10x).

  • Leave strings as-is by default, and automatically parse common date/time formats.

  • Have a helpful progress bar if loading is going to take a while.

  • Work exactly the same way regardless of the current locale. To override the US-centric defaults, use locale().
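For example, locale() lets you parse European-style numbers and non-English dates (examples adapted from readr's documented locale arguments):

```r
library(readr)

# Comma as the decimal mark
parse_double("1,23", locale = locale(decimal_mark = ","))
#> [1] 1.23

# French month names via the date_names argument
parse_date("1 janvier 2015", "%d %B %Y", locale = locale("fr"))
#> [1] "2015-01-01"
```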

data.table and fread()

data.table has a function similar to read_csv() called fread(). Compared to fread(), readr functions:

  • Are slower (currently ~1.2-2x slower). If you want absolutely the best performance, use data.table::fread().

  • Use a slightly more sophisticated parser, recognising both doubled ("""") and backslash escapes ("\""), and can produce factors and date/times directly.

  • Force you to supply all parameters, whereas fread() saves you work by automatically guessing the delimiter, whether or not the file has a header, and how many lines to skip.

  • Are built on a different underlying infrastructure. readr functions are designed to be quite general, which makes it easier to add support for new rectangular data formats. fread() is designed to be as fast as possible.
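The "factors and date/times directly" point above can be sketched with an explicit column specification (inline data for illustration):

```r
library(readr)

df <- read_csv(
  "size,when\nS,2018-01-01\nM,2018-02-01",
  col_types = cols(
    size = col_factor(levels = c("S", "M", "L")),  # parsed straight to factor
    when = col_date(format = "%Y-%m-%d")           # parsed straight to Date
  )
)
```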


Acknowledgements

Thanks to:

  • Joe Cheng for showing me the beauty of deterministic finite automata for parsing, and for teaching me why I should write a tokenizer.

  • JJ Allaire for helping me come up with a design that makes very few copies, and is easy to extend.

  • Dirk Eddelbuettel for coming up with the name!

Please note that this project is released with a Contributor Code of Conduct. By participating in this project you agree to abide by its terms.