
Avoid loading whole file into memory with load_operator for schema detection #805

Merged
utkarsharma2 merged 17 commits into main from 803-schema-detection-optimisation on Sep 14, 2022

Conversation

pankajkoti
Contributor

@pankajkoti pankajkoti commented Sep 8, 2022

Description

What is the current behavior?

We're loading the whole file into memory and converting it into a byte stream for schema auto-detection. As a result, jobs with large files (e.g. 5 GB and above) are getting killed.
Opening this for discussion: do we really need to load the whole file into memory? If not, we can just return the stream so that the pandas dataframe fetches only the initial rows of the file for schema auto-detection.

closes: #803
closes: #826

What is the new behavior?

Do not load the whole file into memory; just return the smart_open stream to be used by the dataframe.
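The idea can be sketched as follows. Note this is an illustrative sketch, not the SDK's actual API: `infer_schema_from_stream`, the 100-row default, and the `StringIO` stand-in are all hypothetical; in the PR the stream would come from `smart_open.open(<URI>)`, which lazily streams bytes from S3/GCS/local paths.

```python
import io

import pandas as pd


def infer_schema_from_stream(stream, nrows: int = 100) -> dict:
    # pandas stops consuming the stream once it has `nrows` rows,
    # so a large file is never fully materialised in memory.
    df = pd.read_csv(stream, nrows=nrows)
    return {col: str(dtype) for col, dtype in df.dtypes.items()}


# A StringIO stands in for the smart_open stream here.
sample = io.StringIO("id,name\n1,alpha\n2,beta\n")
schema = infer_schema_from_stream(sample)  # {'id': 'int64', 'name': 'object'}
```

The key point is that the caller receives a stream, not bytes: whoever needs only a sample (like schema auto-detection) reads only that sample.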

Does this introduce a breaking change?

No. Tested with Postgres & Redshift.

pankajkoti and others added 3 commits September 8, 2022 20:01
…tection

We're loading the whole file into memory and converting it into a byte stream
for schema auto-detection. As a result, jobs with large files (e.g. 5 GB and
above) are getting killed.
Opening this for discussion: do we really need to load the whole file into
memory? If not, just return the stream so that the pandas dataframe can fetch
only the initial rows of the file for schema auto-detection.
@pankajkoti force-pushed the 803-schema-detection-optimisation branch from 5930a2e to dbb4dd9 on September 8, 2022, 14:31
@codecov

codecov bot commented Sep 8, 2022

Codecov Report

Merging this PR increases project coverage by +0.01% (base: 93.10%, head: 93.12%) 🎉

Coverage data is based on head (7fe5806) compared to base (abea062).
Patch coverage: 100.00% of modified lines in the pull request are covered.

Additional details and impacted files
@@            Coverage Diff             @@
##             main     #805      +/-   ##
==========================================
+ Coverage   93.10%   93.12%   +0.01%     
==========================================
  Files          47       47              
  Lines        2046     2051       +5     
  Branches      256      257       +1     
==========================================
+ Hits         1905     1910       +5     
  Misses        109      109              
  Partials       32       32              
Impacted Files Coverage Δ
python-sdk/src/astro/databases/google/bigquery.py 94.41% <ø> (ø)
python-sdk/src/astro/files/base.py 94.02% <100.00%> (-0.50%) ⬇️
python-sdk/src/astro/files/types/ndjson.py 100.00% <100.00%> (ø)
python-sdk/src/astro/files/types/parquet.py 100.00% <100.00%> (ø)


Review thread on python-sdk/src/astro/files/base.py (resolved)
@utkarsharma2
Collaborator

I think inferring the schema will be an issue for binary files and large JSON files, since we can only make sense of such a file once it has been entirely downloaded to a worker node, and only then can we use it to create the schema. But text-format files can be read partially, so there is no need to download them entirely, which saves a lot of time. WDYT?

Also, any suggestions to speed things up for binary files?

cc: @kaxil @dimberman @pankajkoti

@kaxil
Collaborator

kaxil commented Sep 9, 2022

I think inferring the schema will be an issue for binary files and large JSON files, since we can only make sense of such a file once it has been entirely downloaded to a worker node, and only then can we use it to create the schema. But text-format files can be read partially, so there is no need to download them entirely, which saves a lot of time. WDYT?

Also, any suggestions to speed things up for binary files?

cc: @kaxil @dimberman @pankajkoti

We don't infer the schema for binary files and instead ask users to pass it explicitly 🤷. Loading an entire file into memory is a recipe for disaster. Alternatively, you could get the file's size and decide based on that, but explaining that in the docs becomes a little difficult.
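The size-based alternative mentioned above could look something like this. The function name and the 500 MB threshold are entirely hypothetical; the comment only floats the idea of deciding by file size without naming a cut-off, which is exactly why it would be awkward to document:

```python
import os

# Hypothetical cut-off; the discussion suggests deciding by file size
# but does not name a threshold.
MAX_INFER_BYTES = 500 * 1024 * 1024  # 500 MB


def should_infer_schema(path: str) -> bool:
    # For files above the threshold, skip auto-detection and require
    # the user to pass an explicit schema instead.
    return os.path.getsize(path) <= MAX_INFER_BYTES
```

For streamed remote objects this check would also need the object store's metadata (e.g. a HEAD request) rather than `os.path.getsize`, adding further complexity.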

tatiana added a commit that referenced this pull request Sep 13, 2022
@utkarsharma2 utkarsharma2 dismissed dimberman’s stale review September 14, 2022 10:27

Not having this change degrades performance for BigQuery, Snowflake, and Redshift; we will fix it if there is any performance degradation for Postgres.

@utkarsharma2 utkarsharma2 merged commit 80deceb into main Sep 14, 2022
@utkarsharma2 utkarsharma2 deleted the 803-schema-detection-optimisation branch September 14, 2022 18:12
utkarsharma2 added a commit that referenced this pull request Sep 14, 2022
Publish Redshift benchmark results for the native and default approaches.

The existing dataset failed with a schema-mismatch error while inserting rows.
The pandas auto-detection created a schema with columns typed as `varchar(256)`,
limiting values to 256 bytes. However, some rows contained values longer than
256 bytes, and inserting them failed with a `value too long` error.
Hence, we have created a new fake dataset, kept in S3, with data of the
various sizes required for benchmarking, and have also updated
`datasets.md` to document these new files.

blocked by: #805
closes: #748

Co-authored-by: Utkarsh Sharma <utkarsharma2@gmail.com>
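The commit above fixed the `value too long` failure by regenerating the dataset, but the general remedy for an auto-detected `varchar(256)` that is too narrow is an explicit type override. A hedged sketch using pandas' `to_sql` with its `dtype` parameter; the table name, column, and `VARCHAR(4096)` length are illustrative, and an in-memory SQLite database stands in for Redshift (SQLite ignores the declared length, Redshift enforces it):

```python
import pandas as pd
import sqlalchemy
from sqlalchemy.types import VARCHAR

engine = sqlalchemy.create_engine("sqlite://")
df = pd.DataFrame({"payload": ["x" * 1000]})

# Override the inferred column type so values longer than 256 bytes
# don't trip a varchar(256) limit on the target warehouse.
df.to_sql("events", engine, index=False, dtype={"payload": VARCHAR(4096)})

roundtrip = pd.read_sql("SELECT payload FROM events", engine)
```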
6 participants