chore: change parquet_fast_read_bytes setting from 0 to 16MB #15212
I hereby agree to the terms of the CLA available at: https://docs.databend.com/dev/policies/cla/
Summary
When copying from a Parquet stage, we previously read the file metadata every time. For small Parquet files (less than 16MB), this is inefficient because of the extra cost of the separate metadata read.
This PR changes the default value of
parquet_fast_read_bytes
from 0 to 16MB: if a Parquet file is smaller than 16MB, we skip the separate metadata read and read the entire file instead.
Related code:
1. databend/src/query/storages/parquet/src/parquet_rs/parquet_table/partition.rs, lines 79 to 92 (at c2e3ee6)
2. databend/src/query/storages/parquet/src/parquet_rs/parquet_table/partition.rs, lines 136 to 155 (at c2e3ee6)
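The threshold logic described above can be sketched as follows. This is a minimal, hypothetical illustration of the decision rule, not Databend's actual implementation; the names `ReadPlan` and `plan_read` are invented for this example, and only the 16MB default mirrors the setting changed by this PR.

```rust
// Hypothetical sketch of the `parquet_fast_read_bytes` decision:
// files smaller than the threshold are read whole, skipping the
// separate metadata read; larger files read metadata first so that
// row groups can be pruned before fetching data.

const PARQUET_FAST_READ_BYTES: u64 = 16 * 1024 * 1024; // new default: 16MB

#[derive(Debug, PartialEq)]
enum ReadPlan {
    /// Small file: fetch the entire file in one request.
    WholeFile,
    /// Large file: read the footer/metadata first, then prune.
    MetadataFirst,
}

fn plan_read(file_size: u64, fast_read_bytes: u64) -> ReadPlan {
    if file_size < fast_read_bytes {
        ReadPlan::WholeFile
    } else {
        ReadPlan::MetadataFirst
    }
}

fn main() {
    // A 1KB file falls under the threshold: read it whole.
    assert_eq!(plan_read(1024, PARQUET_FAST_READ_BYTES), ReadPlan::WholeFile);
    // A 64MB file exceeds the threshold: read metadata first.
    assert_eq!(
        plan_read(64 * 1024 * 1024, PARQUET_FAST_READ_BYTES),
        ReadPlan::MetadataFirst
    );
    println!("ok");
}
```

With the previous default of 0, no file size could fall under the threshold, so every file took the metadata-first path.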
Tests
Type of change
This change is