Avoid loading whole file into memory with load_operator for schema detection #805
Conversation
…tection

We're loading the whole file into memory and converting it into a byte stream for schema auto-detection. As a result, jobs with large files, e.g. 5 GB and higher, are getting killed. Opening this for discussion: do we really need to load the whole file into memory? If not, we can just return the stream so that the pandas dataframe fetches only the initial rows of the file for schema auto-detection.
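For context, the pattern being described looks roughly like the sketch below. This is an illustration rather than the operator's actual code; the function name and the `nrows=100` sample size are assumptions:

```python
import io

import pandas as pd
import smart_open


def infer_schema_eagerly(path: str) -> pd.DataFrame:
    # Current (problematic) pattern: the whole remote object is read into
    # memory before pandas sees it, so a 5 GB file needs roughly 5 GB of RAM,
    # even though only the first rows are needed for schema auto-detection.
    with smart_open.open(path, "rb") as source:
        payload = source.read()           # loads the entire file into memory
    buffer = io.BytesIO(payload)          # a second in-memory copy as a byte stream
    return pd.read_csv(buffer, nrows=100)  # only a small sample is actually used
```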
Codecov Report
Base: 93.10% // Head: 93.12% // Increases project coverage by +0.01%.
Additional details and impacted files
@@ Coverage Diff @@
## main #805 +/- ##
==========================================
+ Coverage 93.10% 93.12% +0.01%
==========================================
Files 47 47
Lines 2046 2051 +5
Branches 256 257 +1
==========================================
+ Hits 1905 1910 +5
Misses 109 109
Partials 32 32
I think inferring a schema will be an issue for binary files and large JSON files, since we can only make sense of the file once the entire file has been downloaded to a worker node, and only then can we use it to create the schema. But text-format files can be read partially, so there is no need to download them entirely, which saves a lot of time. WDYT? Also, any suggestions to speed things up for binary files?
We don't infer schema for them and ask users to pass it explicitly 🤷 -- loading an entire file into memory is a recipe for disaster. Alternatively, you could get the size of the file and decide based on that, but explaining that in the docs becomes a little difficult.
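A rough sketch of that size-based idea, purely for discussion; the threshold, the helper name, and how the object size would be obtained are all assumptions:

```python
import pandas as pd
import smart_open

# Illustrative threshold: only auto-detect schema for reasonably small files.
MAX_AUTODETECT_BYTES = 100 * 1024 * 1024  # 100 MB, hypothetical value


def maybe_infer_schema(path: str, size_in_bytes: int) -> pd.DataFrame:
    # `size_in_bytes` would come from object metadata (e.g. an S3 HEAD request);
    # fetching it is outside the scope of this sketch.
    if size_in_bytes > MAX_AUTODETECT_BYTES:
        raise ValueError(
            "File too large for schema auto-detection; please pass a schema explicitly."
        )
    with smart_open.open(path, "r") as stream:
        return pd.read_csv(stream, nrows=100)  # sample only the initial rows
```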
Not having this change degrades the performance for BigQuery, Snowflake, and Redshift; we will fix it if there is any performance degradation for Postgres.
Publish Redshift benchmark results for the native and default approaches. The existing dataset failed with a schema mismatch error while inserting rows: the pandas auto-detection created a schema with columns as `varchar(256)`, limiting values to 256 bytes, but some rows contained values larger than 256 bytes, which then failed with a `value too long` error. Hence, we have created a new fake dataset with data of the required various sizes, kept it in S3 for benchmarking purposes, and updated `datasets.md` with the details of these new files. blocked by: #805 closes: #748 Co-authored-by: Utkarsh Sharma <utkarsharma2@gmail.com>
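For reference, explicitly sized columns are one way around the auto-detected `varchar(256)` limit. The snippet below uses plain SQLAlchemy types; how they are wired into the load operator (e.g. via a `columns` argument) is an assumption that depends on the SDK version:

```python
from sqlalchemy import Column, Integer, String

# Explicit column definitions avoid the varchar(256) ceiling produced by
# pandas auto-detection; the 2000-byte width is an illustrative choice.
explicit_columns = [
    Column("id", Integer),
    Column("description", String(2000)),
]
```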
Description
What is the current behavior?
We're loading the whole file into memory and converting it into a byte stream for schema auto-detection. As a result, jobs with large files, e.g. 5 GB and higher, are getting killed.
Opening this for discussion: do we really need to load the whole file into memory? If not, we can just return the stream so that the pandas dataframe fetches only the initial rows of the file for schema auto-detection.
closes: #803
closes: #826
What is the new behavior?
Do not load the whole file into memory; instead, return the smart_open stream to be used by the dataframe.
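A minimal sketch of what the new behavior amounts to, assuming a CSV-like source; the function name and the `nrows=100` sample size are illustrative rather than taken from the PR:

```python
from typing import Optional

import pandas as pd
import smart_open


def read_sample_for_schema(path: str, transport_params: Optional[dict] = None) -> pd.DataFrame:
    # Hand the open stream straight to pandas so that only the initial rows
    # are pulled for schema auto-detection, instead of materialising the
    # whole object in memory first.
    stream = smart_open.open(path, "r", transport_params=transport_params)
    return pd.read_csv(stream, nrows=100)
```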
Does this introduce a breaking change?
No. Tested with Postgres & Redshift.