[SPARK-17636][SPARK-25557][SQL] Parquet and ORC predicate pushdown in nested fields #27155
Conversation
Thank you for making a PR, @emaynardigs
Yep, the failure is related. I was mistakenly testing only against the v2 ORC code path locally, but Jenkins failed on the v1.
ok to test
Test build #116456 has finished for PR 27155 at commit
Retest this please.
Test build #116479 has finished for PR 27155 at commit
Test build #116501 has finished for PR 27155 at commit
First of all, you must add @dbtsai's authorship by adding a commit under his authorship.
The following is not a standard way to preserve authorship.
Firstly, much of this PR is a rebase of #22535, much thanks to @dbtsai for his work.
Second, you need to address all the existing comments in the original PR. In the PR description, could you explain what the improvement here is over the original PR? If there is nothing new here, we had better close this one and ask @dbtsai to update his original PR.
Hi @dongjoon-hyun, thanks for the feedback! Actually, when viewing the original PR I was unaware that @dbtsai is a member and, in fact, your colleague. Your concern definitely makes sense. Firstly, I should say that there are actually no commits here under the original author's ownership; the code has diverged to the point now where, while I took some ideas & code from the original PR, it was easier to do all of this manually. I called this a "rebase", but it is a rebase only in the abstract sense that a lot of code was copied and updated for the latest master and not in the sense of any source control. Secondly, I'll try to elaborate on why I opened a new PR...
As you pointed out, I have addressed the comments in the original PR. I've extended the functionality to ORC as well as Parquet, tested the functionality myself, and have written more unit tests (largely copied from the original PR) and am currently writing more pending the approval of the basic code here. I would not say there is nothing new here.
@emaynardigs, in this case we usually close the second PR (yours) because the original is still alive. You can retry this after the first one is closed.
I'll leave this PR to @dbtsai.
Yes, surely we should close one PR. The other one is inactive, fails tests, and doesn't merge cleanly. This one has none of those issues and has more functionality. I don't mind closing this one out if the other PR can get us to the same place just as quickly, but that seems like it would take more work at this point? Either way, let's make sure there is an active PR for this issue which can be merged in. As I have no control over any other PR, this is my submission towards that end.
@dongjoon-hyun I see this has changes requested; do I need to make any changes here, or is this just pending review?
Hello @emaynardigs, thank you for your contribution, and I do value your work a lot. In fact, at Apple, we are still using an updated version of #22535, which is critical to our production jobs. As far as I know, Databricks's runtime also has an implementation with a similar approach to tackle this issue. The reason why I am inactive on my previous PR is that I feel adding nested support to the current filter API is a short-term solution, since the design doesn't consider these complex use-cases. For a better long-term solution, I would like to create a new set of FilterV2 APIs in the DSv2 framework that makes nested columns first-class. + @cloud-fan @rdblue @viirya for feedback on this. I already started to work on the FilterV2 API, and here is the WIP code: https://github.com/dbtsai/spark/pull/10/files . The change is bigger than I thought, and now I am debating whether we actually need a new FilterV2 framework. Feedback and ideas are welcome. Thanks.
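To make the "nested columns as first-class support" idea concrete, here is a hedged sketch in plain Scala. All names here (`FieldRef`, `GreaterThan`, `toParquetPath`) are hypothetical stand-ins, not the actual WIP FilterV2 API: the point is only that the filter carries a multi-part field reference instead of a dotted string, so each data source can render the path however its format requires.

```scala
// Hypothetical sketch of a FilterV2-style design: the column reference is a
// path of name parts, never a single dotted string that must be re-parsed.
case class FieldRef(path: Seq[String]) // e.g. FieldRef(Seq("person", "age"))

sealed trait FilterV2Sketch
case class GreaterThan(ref: FieldRef, value: Any) extends FilterV2Sketch
case class EqualTo(ref: FieldRef, value: Any) extends FilterV2Sketch

// A source that wants a dotted representation can build one unambiguously,
// because the part boundaries are explicit in the path:
def toParquetPath(ref: FieldRef): String = ref.path.mkString(".")

val f = GreaterThan(FieldRef(Seq("person", "age")), 21)
println(toParquetPath(f.ref)) // person.age
```

The design choice illustrated: parsing happens once (or never) at the API boundary, and no downstream consumer has to guess whether a dot is a separator or part of a column name.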
Hey @dbtsai, no worries; actually I suspected the silence was because you had moved this into a fork and were running with it :) Actually, I think the core approach you took here is sufficient for most cases, right? The crux of my change was only porting it to the new APIs and looking at the schema itself to unpack nested columns instead of looking at the column name (I needed this for ORC anyway). Then it was pretty easy to add ORC support, as we use a fork of ORC internally while you guys use Parquet. What complex cases do you think break under this PR?
@dbtsai checking in again -- is there an edge case that you think doesn't work here? It would be nice to have updated filter APIs, but seeing as you yourself are running code very much like this in a fork, wouldn't the right thing be to merge it upstream?
@dongjoon-hyun @dbtsai pinging again for review; it doesn't seem there is any progress on another PR, and as @dbtsai pointed out, these performance improvements can be very helpful for production workloads in their current state.
@emaynardigs I have been distracted by other work, and finally I found some time to continue this work. The other approach mentioned above will take longer, so I'm thinking of submitting a PR based on our internal version (a modified version of #22535), which is proven to be stable and has been in production for a while. I need some time to do some cleanup, and I'll submit a PR so we can collaborate. I'll add you as an author for the collaboration. WDYT? BTW, are you using this internally at your company? How does it perform? Thanks.
My main reservation is that #22535 relies on a dot in the name of the field, and so cannot support ORC. The key difference in this PR is that it actually inspects the type of the field and extends the same functionality to Parquet and ORC. It's also already rebased for the current master and so merges cleanly. I also added more tests. If we merge #22535 we'll need another PR to undo this logic and implement the same for ORC again. No, I intended to cherry-pick this back after it merges, but if it doesn't get merged we'll probably end up using it on our fork, much like you've done.
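The schema-inspection approach described above can be sketched in plain Scala. This is a hedged, simplified model, not Spark's actual implementation: `StructType`, `StructField`, and `resolve` here are minimal stand-ins for Spark's classes, shown only to illustrate resolving a multi-part field reference by walking the schema instead of splitting a column name on dots.

```scala
// Minimal stand-ins for Spark's schema types (not the real classes).
sealed trait DataType
case object IntegerType extends DataType
case object StringType extends DataType
case class StructField(name: String, dataType: DataType)
case class StructType(fields: Seq[StructField]) extends DataType

// Resolve a multi-part field reference (e.g. Seq("person", "age")) by
// descending through struct fields; returns None if any part is missing
// or an intermediate part is not a struct.
def resolve(schema: StructType, path: Seq[String]): Option[DataType] =
  path match {
    case Seq()     => None
    case Seq(last) => schema.fields.find(_.name == last).map(_.dataType)
    case head +: tail =>
      schema.fields.find(_.name == head).map(_.dataType) match {
        case Some(s: StructType) => resolve(s, tail)
        case _                   => None
      }
  }

val schema = StructType(Seq(
  StructField("id", IntegerType),
  StructField("person", StructType(Seq(
    StructField("name", StringType),
    StructField("age", IntegerType))))))

println(resolve(schema, Seq("person", "age")))   // Some(IntegerType)
println(resolve(schema, Seq("person", "email"))) // None
```

Because resolution is driven by the schema's structure rather than by string parsing, the same walk works for any format whose reader exposes a struct schema, which is what makes extending the approach from Parquet to ORC straightforward.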
A new PR is submitted: #27728. Can you take a look? We can add the ORC implementation on top of that once the PR is merged.
@emaynardigs #27728 is merged. Are you interested in rebasing this PR on top of that? It should not be hard to support ORC now that we have a proper framework for nested predicate pushdown.
@dbtsai That PR still relies on a dot in the column name, as I called out above. Not sure why you don't just parse the schema, like I was already doing in this PR? I may rebase, but as we're on a 2.x fork, any dependency on the v2 filters shipping in 3.x isn't really useful.
In this PR, you also use […]. The implementation of each data source can be different. I choose to use the key as a string containing […]
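The dot-in-name concern raised in this exchange can be illustrated with a tiny, self-contained example (plain Scala, not Spark code; the values are made up for illustration): with a dotted-string key, a top-level column literally named `a.b` encodes to exactly the same string as field `b` nested inside struct `a`, so the consumer can no longer tell them apart.

```scala
// Field b nested inside struct a, encoded as a dotted string:
val nestedField = Seq("a", "b").mkString(".")

// A top-level column whose name happens to contain a dot:
val flatColumn = "a.b"

// The two distinct references collapse to the same key:
println(nestedField == flatColumn) // true

// Splitting the key back apart cannot recover the distinction either;
// the part boundaries were lost at encoding time:
println(flatColumn.split('.').toList) // List(a, b)
```

This is why resolving references against the schema (where struct nesting is explicit) is more robust than encoding nesting into the column-name string.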
What changes were proposed in this pull request?
Firstly, much of this PR is a rebase of #22535, much thanks to @dbtsai for his work.
Spark can now push down predicates on struct columns when reading Parquet and ORC tables.
Why are the changes needed?
There are significant performance gains to be had from pushing predicates down to the file-format reader, since row groups (Parquet) or stripes (ORC) whose statistics rule out a match can be skipped without being read or decoded.
Does this PR introduce any user-facing change?
No
How was this patch tested?
Existing unit tests were extended to cover the new functionality.
Sanity check tests: note the significant performance improvement and the inclusion of the filter in PushedFilters in both cases.