Refine kyuubi extension config docs #638

Closed
wants to merge 1 commit into from
@@ -24,16 +24,16 @@ object KyuubiSQLConf {

   val INSERT_REPARTITION_BEFORE_WRITE =
     buildConf("spark.sql.optimizer.insertRepartitionBeforeWrite.enabled")
-      .doc("Add repartition node at the top of plan. A approach of merging small files.")
+      .doc("Add repartition node at the top of query plan. An approach of merging small files.")
       .version("1.2.0")
       .booleanConf
       .createWithDefault(true)

   val INSERT_REPARTITION_NUM =
     buildConf("spark.sql.optimizer.insertRepartitionNum")
       .doc(s"The partition number if ${INSERT_REPARTITION_BEFORE_WRITE.key} is enabled. " +
-        s"If AQE is disabled, the default value is ${SQLConf.SHUFFLE_PARTITIONS}. " +
-        s"If AQE is enabled, the default value is none that means depend on AQE.")
+        s"If AQE is disabled, the default value is ${SQLConf.SHUFFLE_PARTITIONS.key}. " +
+        "If AQE is enabled, the default value is none that means depend on AQE.")
       .version("1.2.0")
       .intConf
       .createOptional
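
As context for the two configs in this hunk, a hedged sketch of how a user might set them from a Spark session. The config keys come from the diff above; the session bootstrap and the value `200` are illustrative assumptions, not part of the PR:

```scala
import org.apache.spark.sql.SparkSession

// Illustrative session; in a Kyuubi deployment the engine creates this for you.
val spark = SparkSession.builder()
  .appName("kyuubi-extension-config-demo")
  .getOrCreate()

// Insert a repartition node at the top of the plan before writes,
// so small output files get merged.
spark.conf.set("spark.sql.optimizer.insertRepartitionBeforeWrite.enabled", "true")

// Optional: pin the repartition number (200 is an arbitrary example value).
// Per the doc string, when AQE is enabled this can be left unset and the
// partition number is left to AQE.
spark.conf.set("spark.sql.optimizer.insertRepartitionNum", "200")
```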
@@ -42,25 +42,25 @@ object KyuubiSQLConf {
     buildConf("spark.sql.optimizer.dynamicPartitionInsertionRepartitionNum")
       .doc(s"The partition number of each dynamic partition if " +
         s"${INSERT_REPARTITION_BEFORE_WRITE.key} is enabled. " +
-        s"We will repartition by dynamic partition columns to reduce the small file but that " +
-        s"can cause data skew. This config is to extend the partition of dynamic " +
-        s"partition column to avoid skew but may generate some small files.")
+        "We will repartition by dynamic partition columns to reduce the small file but that " +
+        "can cause data skew. This config is to extend the partition of dynamic " +
+        "partition column to avoid skew but may generate some small files.")
       .version("1.2.0")
       .intConf
       .createWithDefault(100)

   val FORCE_SHUFFLE_BEFORE_JOIN =
     buildConf("spark.sql.optimizer.forceShuffleBeforeJoin.enabled")
       .doc("Ensure shuffle node exists before shuffled join (shj and smj) to make AQE " +
-        "`OptimizeSkewedJoin` works (extra shuffle, multi table join).")
+        "`OptimizeSkewedJoin` works (complex scenario join, multi table join).")
       .version("1.2.0")
       .booleanConf
       .createWithDefault(false)

   val FINAL_STAGE_CONFIG_ISOLATION =
     buildConf("spark.sql.optimizer.finalStageConfigIsolation.enabled")
-      .doc("If true, the final stage support use different config with previous stage. The final " +
-        "stage config key prefix should be `spark.sql.finalStage.`." +
+      .doc("If true, the final stage support use different config with previous stage. " +
+        "The prefix of final stage config key should be `spark.sql.finalStage.`." +
         "For example, the raw spark config: `spark.sql.adaptive.advisoryPartitionSizeInBytes`, " +
         "then the final stage config should be: " +
         "`spark.sql.finalStage.adaptive.advisoryPartitionSizeInBytes`.")
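
The final stage config isolation rule touched in this hunk can be illustrated with a hedged sketch. The key names follow the doc string's own example (`spark.sql.finalStage.` prefix replacing the `spark.sql.` prefix); the session bootstrap and the size values are arbitrary assumptions for illustration:

```scala
import org.apache.spark.sql.SparkSession

// Illustrative session; in a Kyuubi deployment the engine creates this for you.
val spark = SparkSession.builder()
  .appName("final-stage-config-isolation-demo")
  .getOrCreate()

// Let the final stage use different configs from the previous stages.
spark.conf.set("spark.sql.optimizer.finalStageConfigIsolation.enabled", "true")

// Advisory partition size used by all stages before the final one
// (64MB is an example value).
spark.conf.set("spark.sql.adaptive.advisoryPartitionSizeInBytes", "64MB")

// Override for the final (write) stage only: same key with the
// `spark.sql.finalStage.` prefix (256MB is an example value).
spark.conf.set("spark.sql.finalStage.adaptive.advisoryPartitionSizeInBytes", "256MB")
```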