Permit parallel fetching of column chunks in ParquetRecordBatchStream
#2110
Comments
I think prior to fetching in parallel I would suggest the following: [...]
Why would you fall back to fetching the entire row group (or file) instead of fetching in parallel? For object storage, fetching the entire file would probably be implemented as parallel range requests anyway.
Mainly because there is a monetary cost associated with each S3 range request. I'm also not sure that fetching in parallel will actually be faster, as I would expect a single request to be able to saturate IO easily; fetching in parallel may just add complexity, cost, and potential contention.
Right, there is definitely a cost, but it's typically well worth it. In our workloads we parallelize object GETs into multiple 8-16MB range requests, which is good for a 1-3x improvement in download speeds under real-world conditions.
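For illustration, here is a minimal sketch (not from this discussion) of the chunked-parallel download pattern described above, using the `futures` crate. `fetch_range` is a hypothetical stand-in for whatever issues the actual HTTP range request, and the 8MB chunk size and concurrency limit are illustrative knobs, not measured values:

```rust
use bytes::Bytes;
use futures::stream::{self, StreamExt, TryStreamExt};

/// Stand-in for a single HTTP range request against object storage.
async fn fetch_range(start: usize, end: usize) -> std::io::Result<Bytes> {
    // In a real reader this would be a GET with a `Range: bytes=...` header.
    unimplemented!("issue a range request for bytes {start}..{end}")
}

/// Download `object_len` bytes as fixed-size chunks fetched concurrently,
/// then stitch them back together in order.
async fn parallel_get(object_len: usize) -> std::io::Result<Vec<u8>> {
    const CHUNK: usize = 8 * 1024 * 1024; // 8MB ranges, per the comment above
    const MAX_CONCURRENCY: usize = 8; // tunable, as requested below

    let ranges: Vec<(usize, usize)> = (0..object_len)
        .step_by(CHUNK)
        .map(|start| (start, (start + CHUNK).min(object_len)))
        .collect();

    // `buffered` preserves chunk order while keeping up to
    // MAX_CONCURRENCY requests in flight at once.
    let chunks: Vec<Bytes> = stream::iter(ranges)
        .map(|(start, end)| fetch_range(start, end))
        .buffered(MAX_CONCURRENCY)
        .try_collect()
        .await?;

    let mut out = Vec::with_capacity(object_len);
    for chunk in chunks {
        out.extend_from_slice(&chunk);
    }
    Ok(out)
}
```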
Provided the maximum concurrency is configurable, sounds good to me. I look forward to experimenting with it and measuring its impact.
@tustvold Ok, I understand why you were worried about contention. Since we're dealing with [...], what if we instead add a `get_byte_ranges` method with a default implementation that falls back to serial fetching?
This makes things quite straightforward in `ParquetRecordBatchStream`.
Related DataFusion PR: apache/datafusion#2946
I think adding a `get_byte_ranges` method with a default implementation that falls back to serial fetching sounds good to me. Then downstreams can override it if they wish to do so, and we don't introduce a breaking API change 👍
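A minimal sketch of what such a trait method could look like, assuming a simplified `AsyncFileReader` with a single-range `get_bytes` primitive. The trait and method names follow the discussion here, but the exact signatures (and the use of `std::io::Result`) are assumptions, not the final API:

```rust
use std::ops::Range;

use bytes::Bytes;
use futures::future::BoxFuture;
use futures::FutureExt;

/// Simplified stand-in for the async reader the stream reads through.
pub trait AsyncFileReader: Send {
    /// Fetch a single byte range from the underlying storage.
    fn get_bytes(&mut self, range: Range<usize>) -> BoxFuture<'_, std::io::Result<Bytes>>;

    /// Fetch several byte ranges at once. The default implementation falls
    /// back to fetching each range serially, so existing implementations
    /// keep compiling unchanged; object-store-backed readers can override
    /// this to issue the requests concurrently or to coalesce them.
    fn get_byte_ranges(
        &mut self,
        ranges: Vec<Range<usize>>,
    ) -> BoxFuture<'_, std::io::Result<Vec<Bytes>>> {
        async move {
            let mut out = Vec::with_capacity(ranges.len());
            for range in ranges {
                out.push(self.get_bytes(range).await?);
            }
            Ok(out)
        }
        .boxed()
    }
}
```

Because the default is serial, adding the method wouldn't be a breaking change; a reader backed by object storage could override it with a concurrent or coalescing strategy.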
Sorry for being late to the party. Thanks for bringing this topic up @thinkharderdev.
I think this is a great idea (and I think @thinkharderdev has already done it here): apache/datafusion#2946
Given that the optimal prefetch strategy is likely to vary from project to project and from object store to object store, I feel like it will be challenging to put logic that works everywhere in DataFusion. Ideally, in my mind, DataFusion would provide enough information to allow downstream consumers to efficiently implement whatever prefetch strategy they want. For example, perhaps we could implement something like [...]. If adding [...]
Yeah, I think from the point of view of [...]
**Is your feature request related to a problem or challenge? Please describe what you are trying to do.**
We recently rebased our project onto DataFusion's latest and we've seen a pretty big performance degradation. The issue is that we've lost the ability to prefetch entire files from object storage with the new `ObjectStore` interface. The buffered prefetch has been moved into `ParquetRecordBatchStream`, but in a way that doesn't work particularly well for object storage (at least in our case). The main issues that we've seen are: [...]
What we found was that (at least with parquet files on the order of 100-200MB) it was much more efficient to just fetch the entire object into memory. All else being equal, it is of course better to read less from object storage, but if we can't do it in one shot (or maybe two) the cost of the extra GET requests is going to significantly outweigh the benefit of fetching less data.
**Describe the solution you'd like**
I think there are a couple of things we can do:

1. Pass `metadata_size_hint` in as a param to allow users to provide information about the expected size; maybe it defaults to 64k as it was before.
2. Fetch the column chunks in parallel. This could be done in `ParquetRecordBatchStream`, or maybe added into the `ObjectStore` trait by adding a `get_ranges(&self, location: &Path, ranges: Vec<Range<usize>>) -> Result<Bytes>` method (a sketch of one possible implementation follows below).
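To make the trade-off concrete, here is a hedged sketch of how such a `get_ranges` method might behave for object storage, combining the two concerns raised above: coalescing nearby ranges to keep the GET count (and cost) down, and fetching the merged ranges concurrently. `get_range`, the gap threshold, and the error type are illustrative assumptions, not part of any existing API, and the sketch returns one buffer per requested range rather than a single `Bytes`:

```rust
use std::ops::Range;

use bytes::Bytes;
use futures::future::try_join_all;

/// Stand-in for a single range GET against object storage.
async fn get_range(range: Range<usize>) -> std::io::Result<Bytes> {
    unimplemented!("GET bytes {}..{}", range.start, range.end)
}

/// Fetch many byte ranges: coalesce ranges separated by small gaps into one
/// request (trading some over-read for fewer billable GETs), then issue the
/// merged requests concurrently and slice the results back apart.
async fn get_ranges(ranges: Vec<Range<usize>>) -> std::io::Result<Vec<Bytes>> {
    const COALESCE_GAP: usize = 1024 * 1024; // merge ranges < 1MB apart (illustrative)

    let mut sorted = ranges.clone();
    sorted.sort_by_key(|r| r.start);

    let mut merged: Vec<Range<usize>> = Vec::new();
    for r in &sorted {
        match merged.last_mut() {
            Some(m) if r.start <= m.end + COALESCE_GAP => m.end = m.end.max(r.end),
            _ => merged.push(r.clone()),
        }
    }

    // One concurrent GET per merged range. A real implementation would
    // probably cap the number of requests in flight.
    let fetched = try_join_all(merged.iter().cloned().map(get_range)).await?;

    // Slice each originally requested range out of the merged buffer that
    // contains it, preserving the caller's ordering.
    Ok(ranges
        .iter()
        .map(|r| {
            let (i, m) = merged
                .iter()
                .enumerate()
                .find(|(_, m)| m.start <= r.start && r.end <= m.end)
                .expect("every range falls inside some merged range");
            fetched[i].slice(r.start - m.start..r.end - m.start)
        })
        .collect())
}
```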
**Describe alternatives you've considered**
We could leave things as they are.
**Additional context**