[SPARK-32651][CORE] Decommission switch configuration should have the highest hierarchy #29466
Conversation
@cloud-fan @holdenk @agrawaldevesh Please take a look, thanks!
Test build #127581 has finished for PR 29466 at commit
Test build #127582 has finished for PR 29466 at commit
Test build #127583 has finished for PR 29466 at commit
Any other configs to move to the top level?
.doc("When decommission enabled, Spark will try its best to shutdown the executor " +
  s"gracefully. Spark will try to migrate all the RDD blocks (controlled by " +
  s"${STORAGE_DECOMMISSION_RDD_BLOCKS_ENABLED.key}) and shuffle blocks (controlled by " +
  s"${STORAGE_DECOMMISSION_SHUFFLE_BLOCKS_ENABLED.key}) from the decommissioning " +
nit: there seem to be 2 spaces here ... before "from"
it's only one space?
Yeah. Sorry. Fonts are misleading :-)
core/src/main/scala/org/apache/spark/internal/config/package.scala
I think no... I'll double check tomorrow.
I can't find any other config either. LGTM.
Test build #127591 has finished for PR 29466 at commit
thanks, merging to master!
[SPARK-32651][CORE] Decommission switch configuration should have the highest hierarchy

Rename spark.worker.decommission.enabled to spark.decommission.enabled and move it from org.apache.spark.internal.config.Worker to org.apache.spark.internal.config.package. Decommission is already supported in Standalone and k8s and may be supported in Yarn (apache#27636) in the future. Therefore, the switch configuration should sit at the highest hierarchy rather than belong to Standalone's Worker. In other words, it should be independent of the cluster managers.

No user-facing change, as the decommission feature hasn't been released.

Passed existing tests.

Closes apache#29466 from Ngone51/fix-decom-conf.

Authored-by: yi.wu <yi.wu@databricks.com>
Signed-off-by: Wenchen Fan <wenchen@databricks.com>
What changes were proposed in this pull request?
Rename spark.worker.decommission.enabled to spark.decommission.enabled and move it from org.apache.spark.internal.config.Worker to org.apache.spark.internal.config.package.

Why are the changes needed?
Decommission is already supported in Standalone and k8s, and may be supported in Yarn (#27636) in the future. Therefore, the switch configuration should sit at the highest hierarchy rather than belong to Standalone's Worker. In other words, it should be independent of the cluster managers.
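A minimal sketch of what the relocated definition might look like in org.apache.spark.internal.config.package. The key name spark.decommission.enabled comes from this PR; the val name, doc text, version string, and builder chain here are assumptions based on the snippet quoted in review, not the exact merged code:

```scala
// Hypothetical sketch of the relocated config entry in
// core/src/main/scala/org/apache/spark/internal/config/package.scala.
// Everything except the key "spark.decommission.enabled" is assumed.
private[spark] val DECOMMISSION_ENABLED =
  ConfigBuilder("spark.decommission.enabled")
    .doc("When decommission enabled, Spark will try its best to shutdown the " +
      "executor gracefully, migrating RDD blocks and shuffle blocks away from " +
      "the decommissioning executor.")
    .version("3.1.0")
    .booleanConf
    .createWithDefault(false)
```

Defining it in the shared config package (rather than Worker) is what makes the switch cluster-manager-agnostic: Standalone, k8s, and a future Yarn integration can all read the same entry.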
Does this PR introduce any user-facing change?
No, as the decommission feature hasn't been released.
How was this patch tested?
Passed existing tests.
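Since the key is renamed rather than aliased, any deployment that set the old Worker-scoped key would need to switch to the new top-level one. A usage sketch, assuming a standard SparkConf setup (the app name is illustrative):

```scala
import org.apache.spark.SparkConf

// Enable graceful decommissioning via the new top-level key.
// The old key, spark.worker.decommission.enabled, is gone after this PR.
val conf = new SparkConf()
  .setAppName("decommission-demo") // hypothetical app name
  .set("spark.decommission.enabled", "true")
```

The same key can equally be passed on the command line, e.g. `--conf spark.decommission.enabled=true` with spark-submit, regardless of which cluster manager is in use.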