
Don't error on empty files -- warn & skip #2898

Closed
MichaelChirico opened this issue May 22, 2018 · 3 comments
Comments

@MichaelChirico (Member) commented May 22, 2018

I'm trying to read files that Spark wrote out from Parquet to CSV.

In its infinite wisdom, Spark created some empty files, so this workflow failed:

read_f = list.files('path/to/csvs', pattern = 'csv$', full.names = TRUE)
DT = rbindlist(lapply(read_f, fread))

It's kind of a pain to have to single out empty files (basically adding the line read_f = read_f[file.info(read_f)$size > 0]) when the vast majority of the time this operation works as intended (it's rare for Spark to output empty files). Is there any reason fread can't just warn on such a file and skip it?
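For reference, a self-contained sketch of the workaround described above. The directory and file names are hypothetical; the empty file stands in for the zero-byte part files Spark can emit:

```r
library(data.table)

# Hypothetical demo: one real CSV and one zero-byte file,
# mimicking Spark's empty part files.
dir = file.path(tempdir(), 'csvs')
dir.create(dir, showWarnings = FALSE)
fwrite(data.table(x = 1:3, y = letters[1:3]), file.path(dir, 'part-0.csv'))
file.create(file.path(dir, 'part-1.csv'))  # empty file

read_f = list.files(dir, pattern = 'csv$', full.names = TRUE)
# the workaround: drop zero-byte files so fread() doesn't error on them
read_f = read_f[file.info(read_f)$size > 0L]
DT = rbindlist(lapply(read_f, fread))
```

With a warn-and-skip behaviour in fread itself, the file.info() filtering line would become unnecessary.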

@jangorecki (Member) commented May 22, 2018

+1. Also, please link the issue created (or found) in Spark's JIRA, or whatever tracker they use, so we can reference it in the documentation for this change.

@MichaelChirico (Member, Author) commented May 22, 2018

Not sure I follow -- are you suggesting I file an issue with them? This didn't come from any particular issue.

As I understand it, the root cause is that I'm reading from a Parquet directory with a bunch of empty constituent files, so when I coalesce there's still a partition with no actual data in it. Not sure how to overcome this, so I'm stuck with empty files for now. (That is to say, I don't think this is a bug on Spark's part, per se.)

@jangorecki (Member) commented May 22, 2018

Even if it's not a bug but just a limitation in Spark, it's good to have a reference to it.
