[SPARK-27083][SQL][FOLLOW-UP] Rename spark.sql.subquery.reuse to spark.sql.execution.subquery.reuse.enabled #27346

Closed

Conversation

@gatorsmile (Member) commented on Jan 23, 2020

What changes were proposed in this pull request?

This PR renames spark.sql.subquery.reuse to spark.sql.execution.subquery.reuse.enabled.

Why are the changes needed?

To make the config name consistent with other SQL config names and clear.

Does this PR introduce any user-facing change?

N/A. This is a new conf added in Spark 3.0, so renaming it is not a user-facing change.

How was this patch tested?

N/A
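
For reference, toggling the renamed flag at runtime would look roughly like the sketch below. This usage is an assumption based on how SQL confs are generally set on an existing SparkSession (spark), not something shown in this PR.

    // Hypothetical usage sketch: set the renamed conf on an existing SparkSession.
    // Whether end users should touch this flag depends on whether it stays internal.
    spark.conf.set("spark.sql.execution.subquery.reuse.enabled", "false")
    // Equivalent SQL form:
    spark.sql("SET spark.sql.execution.subquery.reuse.enabled=false")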

@gatorsmile (Member Author) commented:

cc @cloud-fan @maropu

@dongjoon-hyun (Member) left a comment:

+1, LGTM.

@gatorsmile (Member Author) commented:

Thanks! Merged to master.

@SparkQA commented on Jan 23, 2020:

Test build #117320 has finished for PR 27346 at commit cebb770.

  • This patch passes all tests.
  • This patch merges cleanly.
  • This patch adds no public classes.

@@ -1053,7 +1053,7 @@ object SQLConf {
     .booleanConf
     .createWithDefault(true)

-  val SUBQUERY_REUSE_ENABLED = buildConf("spark.sql.subquery.reuse")
+  val SUBQUERY_REUSE_ENABLED = buildConf("spark.sql.execution.subquery.reuse.enabled")
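
For context, the full declaration around this changed line presumably looks something like the sketch below; the doc string and the internal()/default choices are assumptions for illustration, not copied from the Spark source.

    // Sketch only: builder options other than the renamed key are assumptions.
    val SUBQUERY_REUSE_ENABLED = buildConf("spark.sql.execution.subquery.reuse.enabled")
      .internal()   // assumed: not part of the public conf docs
      .doc("When true, the planner tries to find duplicated subqueries and reuse them.")
      .booleanConf
      .createWithDefault(true)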
A Contributor commented on the diff:
Now this is inconsistent with the exchange reuse conf: spark.sql.exchange.reuse.

cc @rxin @maryannxue

A Member commented:

It seems we currently have three naming patterns for boolean configs:

  • spark.sql.xxx.enabled, e.g., spark.sql.optimizer.dynamicPartitionPruning.enabled
  • spark.sql.enableXXX, e.g., spark.sql.inMemoryColumnarStorage.enableVectorizedReader
  • the names without enabled, e.g., spark.sql.exchange.reuse

Should we settle on a consistent naming rule for these boolean configs?
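
To make the three styles concrete, here is a rough sketch of how each pattern reads as a SQLConf declaration; the val names and defaults are placeholders rather than the real definitions.

    // Pattern 1: ".enabled" suffix (placeholder val name and default)
    val DYNAMIC_PARTITION_PRUNING = buildConf("spark.sql.optimizer.dynamicPartitionPruning.enabled")
      .booleanConf
      .createWithDefault(true)

    // Pattern 2: "enableXXX" embedded in the key (placeholder val name and default)
    val ENABLE_VECTORIZED_READER = buildConf("spark.sql.inMemoryColumnarStorage.enableVectorizedReader")
      .booleanConf
      .createWithDefault(true)

    // Pattern 3: no "enabled" marker at all (placeholder val name and default)
    val EXCHANGE_REUSE = buildConf("spark.sql.exchange.reuse")
      .booleanConf
      .createWithDefault(true)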
