[SPARK-32196][SQL] Extract In convertible part if it is not convertible #29013

Closed
wants to merge 6 commits

Conversation

ulysses-you (Contributor) commented Jul 6, 2020

What changes were proposed in this pull request?

Modify OptimizeIn to extract the convertible (all-literal) part of an In predicate when the predicate as a whole is not convertible.
Also add a new config, spark.sql.optimizer.inExtractLiteralPart, to control whether the literal part of In should be extracted.
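
A minimal sketch of toggling the proposed flag at runtime (the config key comes from this PR's description; this snippet is illustrative, not part of the patch):

```scala
// Hypothetical: enable the extraction proposed in this PR for the session.
// The config key is taken from the PR description above.
spark.conf.set("spark.sql.optimizer.inExtractLiteralPart", "true")
```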

Why are the changes needed?

This lets us optimize more predicates.
First, split the In list into two parts: a convertible part (literals only) and a non-convertible part (non-literal expressions). Then the convertible part can be optimized as usual.

Given a table created with:

```sql
create table t1 (c1 int, c2 int) using parquet
```

the rewrite works like this:

```sql
select * from t1 where c1 in (1, 2, c2)
-- becomes
select * from t1 where c1 in (1, 2) or c1 in (c2)

select * from t1 where c1 in (1, c2)
-- becomes
select * from t1 where c1 = 1 or c1 in (c2)

select * from t1 where c1 in (1, 2, ..., c2)
-- becomes (once the literal list exceeds the InSet conversion threshold)
select * from t1 where c1 inset (1, 2, ...) or c1 in (c2)
```

(Here `inset` denotes the optimized InSet expression in the plan, not SQL syntax.)
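
A minimal Catalyst-style sketch of this extraction (the rule name and exact guards are illustrative, not the actual patch):

```scala
import org.apache.spark.sql.catalyst.expressions.{In, Or}
import org.apache.spark.sql.catalyst.plans.logical.LogicalPlan
import org.apache.spark.sql.catalyst.rules.Rule

// Illustrative only: split an In list into its foldable (literal) part and
// its non-foldable part, so the literal part can later be optimized by
// OptimizeIn (e.g. converted to InSet).
object ExtractInLiteralPart extends Rule[LogicalPlan] {
  override def apply(plan: LogicalPlan): LogicalPlan = plan transformAllExpressions {
    case In(value, list) if list.exists(_.foldable) && !list.forall(_.foldable) =>
      val (foldable, nonFoldable) = list.partition(_.foldable)
      Or(In(value, foldable), In(value, nonFoldable))
  }
}
```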

Does this PR introduce any user-facing change?

No.

How was this patch tested?

Added unit tests.

SparkQA commented Jul 6, 2020

Test build #125092 has finished for PR 29013 at commit bbf63c9.

  • This patch fails to generate documentation.
  • This patch merges cleanly.
  • This patch adds no public classes.

@@ -91,21 +91,6 @@ class OptimizeInSuite extends PlanTest {
comparePlans(optimized, correctAnswer)
}

test("OptimizedIn test: In clause not optimized in case filter has attributes") {
ulysses-you (Contributor, author) commented:

Removed this test since we now support converting part of the list, and the new test covers this case.
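
A hedged sketch of what the replacement test could look like, following OptimizeInSuite's existing style (the attribute names and expected plan are assumptions based on this thread, not the actual diff):

```scala
// Illustrative only: the literal part is extracted into its own In,
// OR-ed with the non-literal remainder.
test("OptimizedIn test: extract the convertible part of In") {
  val originalQuery = testRelation
    .where(In(UnresolvedAttribute("a"),
      Seq(Literal(1), Literal(2), UnresolvedAttribute("b"))))
    .analyze
  val optimized = Optimize.execute(originalQuery)
  val correctAnswer = testRelation
    .where(Or(
      In(UnresolvedAttribute("a"), Seq(Literal(1), Literal(2))),
      In(UnresolvedAttribute("a"), Seq(UnresolvedAttribute("b")))))
    .analyze
  comparePlans(optimized, correctAnswer)
}
```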

maropu (Member) commented Jul 7, 2020

I haven't looked into the implementation (I've just read the PR description), but did you mean this case? Or am I missing something?

```
scala> sql("select * from t1").show()
+---+---+
| c1| c2|
+---+---+
|  1|  3|
|  3|  3|
+---+---+


scala> sql("select * from t1 where c1 in (1, 2, c2)").show()
+---+---+
| c1| c2|
+---+---+
|  1|  3|
|  3|  3|
+---+---+


scala> sql("select * from (select * from t1 where c1 in (1, 2)) t2(c1, c2) where t2.c1 in (1, 2, t2.c2)").show()
+---+---+
| c1| c2|
+---+---+
|  1|  3|
+---+---+
```

ulysses-you (Contributor, author) commented:

Ah, morning @maropu. For your case, after this PR it changes to:

```sql
select * from t1 where c1 in (1, 2, c2)
-- becomes
select * from t1 where c1 in (1, 2) and c1 in (c2)
```

maropu (Member) commented Jul 7, 2020

But I think `select * from t1 where c1 in (1, 2, c2)` is equivalent to `select * from t1 where c1 = 1 or c1 = 2 or c1 = c2`. Why can we transform it into `where c1 in (1, 2) and c1 in (c2)`?

ulysses-you (Contributor, author) commented:

I made a mistake; it should be:

```sql
select * from t1 where c1 in (1, 2, c2)
-- becomes
select * from t1 where c1 in (1, 2) or c1 in (c2)
```

This is sound because `c1 in (1, 2, c2)` expands to `c1 = 1 or c1 = 2 or c1 = c2`, which regroups as `(c1 in (1, 2)) or (c1 in (c2))`.

SparkQA commented Jul 7, 2020

Test build #125158 has finished for PR 29013 at commit f45d6c3.

  • This patch fails Spark unit tests.
  • This patch merges cleanly.
  • This patch adds no public classes.

SparkQA commented Jul 7, 2020

Test build #125148 has finished for PR 29013 at commit 0d982d2.

  • This patch fails Spark unit tests.
  • This patch merges cleanly.
  • This patch adds no public classes.

SparkQA commented Jul 8, 2020

Test build #125299 has finished for PR 29013 at commit 9bf23cc.

  • This patch fails due to an unknown error code, -9.
  • This patch merges cleanly.
  • This patch adds no public classes.

maropu (Member) commented Jul 8, 2020

Why is `where c1 in (1, 2) or c1 in (c2)` an optimized form? Do you mean it is faster than `where c1 in (1, 2, c2)`? At the least, I think you need to explain this more in the PR description.

ulysses-you (Contributor, author) commented:

@maropu Once we have `where c1 in (1, 2) or c1 in (c2)`, we can optimize the `c1 in (1, 2)` part with OptimizeIn, e.g.:

```sql
c1 in (1, c2)          =>  c1 = 1 or c1 in (c2)
c1 in (1, 2, ..., c2)  =>  c1 inset (1, 2, ...) or c1 in (c2)
```
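
For context, a simplified view of what OptimizeIn already does once the remaining list is all literals (illustrative, not the rule's exact code; `optimizerInSetConversionThreshold` backs `spark.sql.optimizer.inSetConversionThreshold`):

```scala
import org.apache.spark.sql.catalyst.expressions.{Expression, In, InSet, Literal}
import org.apache.spark.sql.internal.SQLConf

// Illustrative only: an all-literal In list longer than the conversion
// threshold can be evaluated as a hash-set membership test (InSet).
def toInSetIfLarge(in: In): Expression = in match {
  case In(v, list)
      if list.forall(_.isInstanceOf[Literal]) &&
        list.size > SQLConf.get.optimizerInSetConversionThreshold =>
    InSet(v, list.map(_.eval()).toSet)
  case other => other
}
```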

dongjoon-hyun added a commit that referenced this pull request Jul 8, 2020
### What changes were proposed in this pull request?

This PR aims to disable SBT `unidoc` generation testing in the Jenkins environment, because it is flaky there and is not used for the official documentation generation. GitHub Actions already provides the correct test coverage for the official documentation generation.

- #28848 (comment) (amp-jenkins-worker-06)
- #28926 (comment) (amp-jenkins-worker-06)
- #28969 (comment) (amp-jenkins-worker-06)
- #28975 (comment) (amp-jenkins-worker-05)
- #28986 (comment)  (amp-jenkins-worker-05)
- #28992 (comment) (amp-jenkins-worker-06)
- #28993 (comment) (amp-jenkins-worker-05)
- #28999 (comment) (amp-jenkins-worker-04)
- #29010 (comment) (amp-jenkins-worker-03)
- #29013 (comment) (amp-jenkins-worker-04)
- #29016 (comment) (amp-jenkins-worker-05)
- #29025 (comment) (amp-jenkins-worker-04)
- #29042 (comment) (amp-jenkins-worker-03)

### Why are the changes needed?

Apache Spark's `release-build.sh` generates the official documentation using the following command.
- https://github.com/apache/spark/blob/master/dev/create-release/release-build.sh#L341

```bash
PRODUCTION=1 RELEASE_VERSION="$SPARK_VERSION" jekyll build
```

This in turn executes the following `unidoc` command for the Scala/Java API docs.
- https://github.com/apache/spark/blob/master/docs/_plugins/copy_api_dirs.rb#L30

```ruby
system("build/sbt -Pkinesis-asl clean compile unidoc") || raise("Unidoc generation failed")
```

However, the PR builder disables the `jekyll build` step and instead has different test coverage.
```python
# determine if docs were changed and if we're inside the amplab environment
# note - the below commented out until *all* Jenkins workers can get `jekyll` installed
# if "DOCS" in changed_modules and test_env == "amplab_jenkins":
#    build_spark_documentation()
```

The Jenkins `unidoc` test step being disabled looks like this:

```
Building Unidoc API Documentation
========================================================================
[info] Building Spark unidoc using SBT with these arguments:
-Phadoop-3.2 -Phive-2.3 -Pspark-ganglia-lgpl -Pkubernetes -Pmesos
-Phadoop-cloud -Phive -Phive-thriftserver -Pkinesis-asl -Pyarn unidoc
```

### Does this PR introduce _any_ user-facing change?

No. (This is used only for testing and not used in the official doc generation.)

### How was this patch tested?

Passed Jenkins without the doc-generation invocation.

Closes #29017 from dongjoon-hyun/SPARK-DOC-GEN.

Authored-by: Dongjoon Hyun <dongjoon@apache.org>
Signed-off-by: Dongjoon Hyun <dongjoon@apache.org>
SparkQA commented Jul 12, 2020

Test build #125700 has finished for PR 29013 at commit 21c5262.

  • This patch passes all tests.
  • This patch merges cleanly.
  • This patch adds no public classes.

github-actions (bot) commented:

We're closing this PR because it hasn't been updated in a while. This isn't a judgement on the merit of the PR in any way. It's just a way of keeping the PR queue manageable.
If you'd like to revive this PR, please reopen it and ask a committer to remove the Stale tag!

@github-actions github-actions bot added the Stale label Oct 21, 2020
@github-actions github-actions bot closed this Oct 22, 2020