
[SPARK-32651][CORE] Decommission switch configuration should have the highest hierarchy #29466

Closed

wants to merge 5 commits

Conversation

@Ngone51 (Member) commented Aug 18, 2020

What changes were proposed in this pull request?

Rename spark.worker.decommission.enabled to spark.decommission.enabled and move it from org.apache.spark.internal.config.Worker to org.apache.spark.internal.config.package.
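
For illustration, the moved definition would presumably follow Spark's internal `ConfigBuilder` DSL in `org.apache.spark.internal.config.package`. This is a sketch only, not the exact patch; the constant name, doc text (abridged from the snippet quoted in the review), version string, and default value are assumptions:

```scala
// Sketch (assumed) of the renamed entry in org.apache.spark.internal.config.package.
private[spark] val DECOMMISSION_ENABLED =
  ConfigBuilder("spark.decommission.enabled")
    .doc("When decommission enabled, Spark will try its best to shut down " +
      "the executor gracefully.")
    .version("3.1.0") // assumed release version
    .booleanConf
    .createWithDefault(false)
```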

Why are the changes needed?

Decommissioning is already supported in Standalone and k8s, and may be supported on Yarn (#27636) in the future. Therefore, the switch configuration should sit at the highest hierarchy rather than belong to Standalone's Worker. In other words, it should be independent of the cluster managers.

Does this PR introduce any user-facing change?

No, as the decommission feature hasn't been released.
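
Once the feature ships, a user would presumably flip the renamed switch like any other Spark conf. A hypothetical usage sketch (the app name is illustrative):

```scala
// Hypothetical usage: enabling the renamed flag on a SparkConf.
import org.apache.spark.SparkConf

val conf = new SparkConf()
  .setAppName("decommission-demo") // illustrative
  .set("spark.decommission.enabled", "true") // formerly spark.worker.decommission.enabled
```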

How was this patch tested?

Passes existing tests.

@Ngone51 (Member Author) commented Aug 18, 2020

@cloud-fan @holdenk @agrawaldevesh Please take a look, thanks!

@SparkQA commented Aug 18, 2020

Test build #127581 has finished for PR 29466 at commit 7b40f39.

  • This patch fails to build.
  • This patch merges cleanly.
  • This patch adds no public classes.

@SparkQA commented Aug 18, 2020

Test build #127582 has finished for PR 29466 at commit d6bbfb1.

  • This patch fails to build.
  • This patch merges cleanly.
  • This patch adds no public classes.

@SparkQA commented Aug 18, 2020

Test build #127583 has finished for PR 29466 at commit 3fa4ce5.

  • This patch fails to build.
  • This patch merges cleanly.
  • This patch adds no public classes.

@agrawaldevesh (Contributor) left a comment

Any other configs to move to the top level?

.doc("When decommission enabled, Spark will try its best to shutdown the executor " +
s"gracefully. Spark will try to migrate all the RDD blocks (controlled by " +
s"${STORAGE_DECOMMISSION_RDD_BLOCKS_ENABLED.key}) and shuffle blocks (controlled by " +
s"${STORAGE_DECOMMISSION_SHUFFLE_BLOCKS_ENABLED.key}) from the decommissioning " +
Contributor: nit: some 2 spaces here ... before `from`

Member Author: it's only one space?

Contributor: Yeah. Sorry. Fonts are misleading :-)

@Ngone51 (Member Author) commented Aug 18, 2020

> Any other configs to move to the top level?

I think no. I'll double check tomorrow.

@agrawaldevesh (Contributor) commented

> Any other configs to move to the top level?
>
> I think no. I'll double check tomorrow.

I can't find any other config either. LGTM.

@SparkQA commented Aug 18, 2020

Test build #127591 has finished for PR 29466 at commit c4c0ef4.

  • This patch passes all tests.
  • This patch merges cleanly.
  • This patch adds no public classes.

@cloud-fan (Contributor) commented

thanks, merging to master!

@cloud-fan cloud-fan closed this in 3092527 Aug 19, 2020
holdenk pushed a commit to holdenk/spark that referenced this pull request Oct 27, 2020
… highest hierarchy

Rename `spark.worker.decommission.enabled` to `spark.decommission.enabled` and move it from `org.apache.spark.internal.config.Worker` to `org.apache.spark.internal.config.package`.

Decommissioning is already supported in Standalone and k8s, and may be supported on Yarn (apache#27636) in the future. Therefore, the switch configuration should sit at the highest hierarchy rather than belong to Standalone's Worker. In other words, it should be independent of the cluster managers.

No, as the decommission feature hasn't been released.

Passes existing tests.

Closes apache#29466 from Ngone51/fix-decom-conf.

Authored-by: yi.wu <yi.wu@databricks.com>
Signed-off-by: Wenchen Fan <wenchen@databricks.com>