
project_column_with_filters_that_cant_pushed_down_always_true only passes if plan is optimized twice #3283

Closed
andygrove opened this issue Aug 29, 2022 · 4 comments · Fixed by #3439
Labels: bug (Something isn't working)

Comments


andygrove commented Aug 29, 2022

Describe the bug
In execute_to_batches, we create the physical plan from an already-optimized logical plan. Since create_physical_plan runs the logical optimizer itself, the plan ends up being optimized twice. Fixing this causes a regression in project_column_with_filters_that_cant_pushed_down_always_true:

ArrowError(InvalidArgumentError("must either specify a row count or at least one column")) at Executing physical plan for 'select * from (select 1 as a) f where f.a=1;'
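For context, that error comes from arrow-rs, which rejects a RecordBatch with zero columns unless an explicit row count is supplied. A minimal sketch reproducing the error in isolation, assuming a recent arrow-rs (this is my reading of the error text, not necessarily the exact code path the plan hits):

```rust
use std::sync::Arc;

use arrow::datatypes::Schema;
use arrow::record_batch::{RecordBatch, RecordBatchOptions};

fn main() {
    // A RecordBatch with zero columns has no way to infer its row count,
    // so arrow-rs rejects it with the error quoted above.
    let schema = Arc::new(Schema::empty());
    let err = RecordBatch::try_new(Arc::clone(&schema), vec![]).unwrap_err();
    println!("{err}"); // "... must either specify a row count or at least one column"

    // Supplying an explicit row count makes a zero-column batch valid.
    let options = RecordBatchOptions::new().with_row_count(Some(1));
    let batch = RecordBatch::try_new_with_options(schema, vec![], &options).unwrap();
    assert_eq!(batch.num_rows(), 1);
}
```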

To Reproduce
Modify execute_to_batches to pass the unoptimized logical plan to create_physical_plan.
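A hedged sketch of that change, assuming the DataFusion API of the time; the method names here (to_logical_plan, to_unoptimized_plan) are assumptions based on the issue text, not the exact test-harness code:

```rust
use datafusion::arrow::record_batch::RecordBatch;
use datafusion::error::Result;
use datafusion::physical_plan::collect;
use datafusion::prelude::*;

async fn execute_to_batches(ctx: &SessionContext, sql: &str) -> Result<Vec<RecordBatch>> {
    let df = ctx.sql(sql).await?;

    // Current behavior: the plan returned here is already optimized, and
    // create_physical_plan optimizes it again (the "optimized twice" path).
    // let plan = df.to_logical_plan()?;

    // Reproduction: hand create_physical_plan the unoptimized plan so the
    // optimizer runs exactly once; the test in question then fails.
    // (`to_unoptimized_plan` is an assumed method name.)
    let plan = df.to_unoptimized_plan();

    let physical = ctx.create_physical_plan(&plan).await?;
    collect(physical, ctx.task_ctx()).await
}
```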

Expected behavior
Test should pass

Additional context
None

andygrove added the bug label Aug 29, 2022
@isidentical (Contributor)

This is interesting. If you haven't already started, @andygrove, I'd like to take a look at it (by first fixing the underlying bug and then fixing the test logic).

@avantgardnerio (Contributor)

This is a really interesting aspect of optimizer rules that I've encountered before and am not sure how best to address:

  1. We can run them in a fixed order. This is brittle: changing the order, or running the rules twice, can break things, and over time the rules become hard-coded to rely on that order.
  2. We could run them iteratively until we detect no change to the plan (a fixed point, sketched below), or until we reach a "max depth" where we give up and just run the plan as we have it.
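A minimal sketch of option 2, using stand-in types rather than DataFusion's actual optimizer driver:

```rust
// `P` is any plan representation; a rule is a pure rewrite of the plan.
type Rule<P> = Box<dyn Fn(&P) -> P>;

fn optimize_to_fixed_point<P: Clone + PartialEq>(
    mut plan: P,
    rules: &[Rule<P>],
    max_passes: usize,
) -> P {
    for _ in 0..max_passes {
        let before = plan.clone();
        // One full pass: apply every rule once, in order.
        for rule in rules {
            plan = rule(&plan);
        }
        // Fixed point: a whole pass changed nothing, so (for deterministic
        // rules) further passes won't change anything either.
        if plan == before {
            return plan;
        }
    }
    // Pass budget exhausted: give up and run the plan as-is (the "max depth" case).
    plan
}
```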

I'm curious how other optimizers handle this.

@isidentical (Contributor)

Aside from the point @avantgardnerio shared (how new optimizations can be uncovered by additional passes, which I think deserves its own issue), the problem here might be a little different.

One thing I noticed: when datafusion.execution.coalesce_batches is set to false, this issue does not appear (the plan works perfectly fine). So this feels like a bug in the CoalesceBatchesExec implementation that was uncovered by the new optimization. I'll try to make a PR with just the bug fix (plus the fix in the test harness).
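For anyone reproducing this, a hedged sketch of that toggle, assuming SessionConfig's with_coalesce_batches builder method maps to the datafusion.execution.coalesce_batches setting (an assumption about the exact method name):

```rust
use datafusion::prelude::*;

// Build a context with batch coalescing disabled so CoalesceBatchesExec is
// not inserted into the physical plan. (`with_coalesce_batches` is assumed
// to correspond to `datafusion.execution.coalesce_batches`.)
fn context_without_coalescing() -> SessionContext {
    let config = SessionConfig::new().with_coalesce_batches(false);
    SessionContext::with_config(config)
}
```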


alamb commented Sep 12, 2022

> This is a really interesting aspect of optimizer rules that I've encountered before and am not sure how best to address:

Yes, we should file a ticket to discuss this -- I'll file one if no one else beats me to it.

> I'm curious how other optimizers handle this.

I have worked on systems in the past where the order was hard-coded. Someone I know from the Spark team says its optimizer runs until it reaches a "fixed point" (i.e. until running it doesn't change the plan). I think this is also an interesting question in the context of constant evaluation / expression simplification.
