guess_max = Inf #588

Closed
Deleetdk opened this Issue Jan 26, 2017 · 6 comments


Deleetdk commented Jan 26, 2017

Almost always when I read text files I want it to use all the rows to infer types. However, setting guess_max to Inf, NA, or -1 causes errors:

> orpha_genes = read_csv(orpha_genes_disk_file, guess_max = Inf)
Error in .Call("readr_guess_types_", PACKAGE = "readr", sourceSpec, tokenizerSpec,  : 
  negative length vectors are not allowed
In addition: Warning message:
In guess_types_(datasource, tokenizer, locale, n = n) :
  NAs introduced by coercion to integer range
> orpha_genes = read_csv(orpha_genes_disk_file, guess_max = NA)
Error in .Call("readr_guess_types_", PACKAGE = "readr", sourceSpec, tokenizerSpec,  : 
  negative length vectors are not allowed
> orpha_genes = read_csv(orpha_genes_disk_file, guess_max = -1)
Error in .Call("readr_guess_types_", PACKAGE = "readr", sourceSpec, tokenizerSpec,  : 
  negative length vectors are not allowed
> orpha_genes = read_csv(orpha_genes_disk_file, guess_max = NULL)
Error in guess_types_(datasource, tokenizer, locale, n = n) : 
  expecting a single value

Note that the error message for NA is misleading: NA is not a negative-length vector.

Instead I do something like this:

orpha_genes = read_csv(orpha_genes_disk_file, guess_max = 999999)

One has to be careful about how many 9's one adds, because if one adds too many one gets:

> read_csv("data/affy_diseases.csv", guess_max = 999999999999999999999999999)
Error in .Call("readr_guess_types_", PACKAGE = "readr", sourceSpec, tokenizerSpec,  : 
  negative length vectors are not allowed
In addition: Warning message:
In guess_types_(datasource, tokenizer, locale, n = n) :
  NAs introduced by coercion to integer range

I tried reducing the number of 9's until I stopped getting that error. At some point, my entire system crashed instead! Unfortunately, I don't remember how many 9's I tried, so now I'm wary of trying again. I tried this on both my virtual Ubuntu 16.10 machine and on my Windows 8.1 host; both systems crashed.

In general, this parameter seems to be bug-prone.
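A safer workaround than guessing how many 9's to type is to count the file's rows once and pass that exact number. A minimal sketch (the path is the one from the example above; `readr::count_fields()` returns one element per row, so its length gives a row count):

```r
library(readr)

# Count rows once, then use that exact count for type guessing.
# Including the header row is harmless here: it just makes the
# upper bound for guess_max one larger than needed.
path <- "data/affy_diseases.csv"
n_rows <- length(count_fields(path, tokenizer_csv()))

affy <- read_csv(path, guess_max = n_rows)
```

This avoids both the integer-overflow error and the risk of allocating a guess buffer far larger than the file.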

@jimhester jimhester added the bug label Jan 31, 2017


Member

jimhester commented Jan 31, 2017

Using all the rows to infer types means reading all of your data twice; in that case you can just read everything in as character and then use type_convert().

library(readr)
x <- read_csv(readr_example("mtcars.csv"), col_types = cols(.default = col_character()))
type_convert(x)
#> Parsed with column specification:
#> cols(
#>   mpg = col_double(),
#>   cyl = col_integer(),
#>   disp = col_double(),
#>   hp = col_integer(),
#>   drat = col_double(),
#>   wt = col_double(),
#>   qsec = col_double(),
#>   vs = col_integer(),
#>   am = col_integer(),
#>   gear = col_integer(),
#>   carb = col_integer()
#> )
#> # A tibble: 32 × 11
#>      mpg   cyl  disp    hp  drat    wt  qsec    vs    am  gear  carb
#>    <dbl> <int> <dbl> <int> <dbl> <dbl> <dbl> <int> <int> <int> <int>
#> 1   21.0     6 160.0   110  3.90 2.620 16.46     0     1     4     4
#> 2   21.0     6 160.0   110  3.90 2.875 17.02     0     1     4     4
#> 3   22.8     4 108.0    93  3.85 2.320 18.61     1     1     4     1
#> 4   21.4     6 258.0   110  3.08 3.215 19.44     1     0     3     1
#> 5   18.7     8 360.0   175  3.15 3.440 17.02     0     0     3     2
#> 6   18.1     6 225.0   105  2.76 3.460 20.22     1     0     3     1
#> 7   14.3     8 360.0   245  3.21 3.570 15.84     0     0     3     4
#> 8   24.4     4 146.7    62  3.69 3.190 20.00     1     0     4     2
#> 9   22.8     4 140.8    95  3.92 3.150 22.90     1     0     4     2
#> 10  19.2     6 167.6   123  3.92 3.440 18.30     1     0     4     4
#> # ... with 22 more rows

jimhester added a commit to jimhester/readr that referenced this issue Jan 31, 2017

jimhester added a commit to jimhester/readr that referenced this issue Jan 31, 2017

@jimhester jimhester added this to DONE in jimhester Feb 2, 2017

jimhester added a commit to jimhester/readr that referenced this issue Feb 9, 2017

jimhester added a commit that referenced this issue Feb 9, 2017

Better error handling for guess_max (#590)
* Better error handling for guess_max

Fixes #588

* More robust checks for guess_max

* Line up arguments in example

* Simplify boolean

Deleetdk commented Feb 9, 2017

Thanks.

So, to get the same as base read.csv functionality, it seems that one has to do something like the following:

x = read_csv("file.csv", col_types = cols(.default = col_character())) %>%
  type_convert()

This is only necessary when the first 1000 rows don't provide sufficient material for guessing. A common use case in genetics involves reading large files (e.g. 1 million rows) of variant data (e.g. SNPs). Such data files are often sorted by chromosome, which means the sex chromosomes come last (e.g. the first X-chromosome row may not appear until around row 800k). If one relies on the default of the first 1000 rows to infer column types, the chromosome column gets parsed incorrectly as integer, which causes the X, Y, and sometimes MT (mitochondrial) data to be incorrect.
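The failure mode described above can be reproduced with a toy file (a sketch with inline data; the `chr` column looks numeric in the early rows and only becomes "X" later):

```r
library(readr)

# Toy file: 5 numeric-looking chromosome rows, then an "X" row.
csv <- paste(c("chr,pos", rep("1,100", 5), "X,200"), collapse = "\n")

# Guessing from only the first 5 rows types chr as numeric,
# so the "X" row fails to parse and shows up in problems():
bad <- read_csv(csv, guess_max = 5)
problems(bad)

# Guessing from all rows (or forcing chr to character) parses cleanly:
good <- read_csv(csv, guess_max = 6)
```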


Member

hadley commented Feb 9, 2017

A better approach is not to rely on automatic parsing, but to specify the col types correctly.


Member

jimhester commented Feb 9, 2017

As Hadley said, if you know the input format you shouldn't rely on guessing; explicitly read that column as character or factor.

x = read_csv("file.csv", col_types = cols(chr = col_character(), start = col_integer(), end = col_integer()))

Alternatively, use a specialized function for genomic data such as rtracklayer::import(), which will likely read the data faster than readr's general functions and uses data structures like GRanges that are useful for other genomic analyses in R.
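For example, a minimal sketch (the file name is hypothetical; rtracklayer::import() detects common genomic formats such as BED or GFF from the extension and returns a GRanges object, so chromosome names like "chrX" are never subject to column-type guessing):

```r
library(rtracklayer)

# Hypothetical BED file of variant positions.
variants <- import("variants.bed")
seqnames(variants)  # chromosome names, including sex chromosomes
```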


jabowery commented Nov 26, 2017

Deleetdk is, in effect, requesting an option to go ahead and do whatever it takes to infer types correctly, trusting that we'll read the caveats.

Use case:

Gapminder has 500+ global indicators, one (numeric) indicator per file. I'm creating a unified file that would be of great benefit to CRAN's collection of data sets, and I don't want to go through each file to figure out whether to force a column to be <int> or <double> -- but I would still like columns that really are <int> to remain <int> in the final product.

I finally figured out that I could set guess_max to some arbitrarily high number -- after running into a documentation failure for readr that I just posted. Very annoying.
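For the many-files case, the read-as-character approach from earlier in this thread scales without any guess_max tuning. A sketch (directory and file pattern are hypothetical; type_convert() infers from the complete column, so indicators that really are integers stay integers):

```r
library(readr)
library(purrr)

# Read every indicator file with no guessing, then let type_convert()
# infer each column's type from the full data.
files <- list.files("gapminder", pattern = "\\.csv$", full.names = TRUE)
indicators <- map(files, function(f) {
  type_convert(read_csv(f, col_types = cols(.default = col_character())))
})
```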


lock bot commented Sep 25, 2018

This old issue has been automatically locked. If you believe you have found a related problem, please file a new issue (with reprex) and link to this issue. https://reprex.tidyverse.org/

@lock lock bot locked and limited conversation to collaborators Sep 25, 2018
