[SPARK-30821][K8S] Handle executor failure with multiple containers #29924
Conversation
Added a spark property spark.kubernetes.executor.checkAllContainers, with the default being false. When it's true, the executor snapshot will take all containers in the executor pod into consideration when deciding whether the executor is in the "Running" state, if the pod restart policy is "Never". Also added the new spark property to the docs.
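For illustration only, a minimal sketch of how a user could turn the new flag on from application code once this lands; the app name below is a placeholder, and the same property can equally be passed via `spark-submit --conf`:

```scala
import org.apache.spark.sql.SparkSession

// Hypothetical app: the only line that matters here is the checkAllContainers setting.
val spark = SparkSession.builder()
  .appName("sidecar-aware-job") // placeholder name
  .config("spark.kubernetes.executor.checkAllContainers", "true")
  .getOrCreate()
```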
@holdenk Could you review this PR, please?
Jenkins ok to test. Sorry for the delay. Tentative LGTM pending CI/Jenkins.
...rnetes/core/src/main/scala/org/apache/spark/scheduler/cluster/k8s/ExecutorPodsSnapshot.scala
@@ -59,11 +65,19 @@ object ExecutorPodsSnapshot extends Logging {
      case "pending" =>
        PodPending(pod)
      case "running" =>
        PodRunning(pod)
        if (shouldCheckAllContainers &&
            "Never" == pod.getSpec.getRestartPolicy &&
Good addition 👍
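For readers skimming the truncated diff above, here is a rough, hedged sketch of the check it appears to introduce, assuming the fabric8 `Pod` model this module already uses; the helper name `isPodFailed` is illustrative rather than the exact upstream code:

```scala
import io.fabric8.kubernetes.api.model.Pod
import scala.collection.JavaConverters._

// Only when the flag is enabled and the pod will never restart its own containers:
// the pod counts as failed if any container has terminated with a non-zero exit code.
def isPodFailed(pod: Pod, shouldCheckAllContainers: Boolean): Boolean = {
  shouldCheckAllContainers &&
    "Never" == pod.getSpec.getRestartPolicy &&
    pod.getStatus.getContainerStatuses.asScala.exists { status =>
      val terminated = status.getState.getTerminated
      terminated != null && terminated.getExitCode != 0
    }
}
```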
@holdenk Made the fix about the indentation. Please take a look again. Thanks!
Also is Jenkins triggered? Looks like the bot didn't make any comment to this PR.
Jenkins OK to test
huh weird. Jenkins test this please.
cc @SparkQA
Filed a ticket to @shaneknapp ( https://issues.apache.org/jira/browse/SPARK-33151 ) since it seems like Jenkins might just be stuck.
@holdenk Thank you for the help. I do see that the Jenkins status is at "Asked to test", but it does not make any progress.
Test build #129762 has finished for PR 29924 at commit
Kubernetes integration test starting
Kubernetes integration test status success
LGTM. If no one has concerns I'll try and merge tomorrow. Thanks for working on this @huskysun :)
Thank you @holdenk!
@holdenk Thank you for the review. Please merge it if you get time.
@holdenk Please help us merge this change if you have time. Thank you.
Hi @holdenk, sorry to keep bugging you. Any updates on this? Is there any further work needed behind the curtain (like release cadence or something) that prevents this from being merged? Thanks!
Retest this please.
Test build #130215 has finished for PR 29924 at commit
Kubernetes integration test starting
Kubernetes integration test status success
Merged and backported as a fix to branch-3.
@huskysun what is your JIRA account handle?
Handle executor failure with multiple containers

Added a spark property spark.kubernetes.executor.checkAllContainers, with default being false. When it's true, the executor snapshot will take all containers in the executor into consideration when deciding whether the executor is in "Running" state, if the pod restart policy is "Never". Also, added the new spark property to the doc.

### What changes were proposed in this pull request?
Checking of all containers in the executor pod when reporting executor status, if the `spark.kubernetes.executor.checkAllContainers` property is set to true.

### Why are the changes needed?
Currently, a pod remains "running" as long as there is at least one running container. This prevents Spark from noticing when a container has failed in an executor pod with multiple containers. With this change, user can configure the behavior to be different. Namely, if any container in the executor pod has failed, either the executor process or one of its sidecars, the pod is considered to be failed, and it will be rescheduled.

### Does this PR introduce _any_ user-facing change?
Yes, new spark property added. User is now able to choose whether to turn on this feature using the `spark.kubernetes.executor.checkAllContainers` property.

### How was this patch tested?
Unit test was added and all passed. I tried to run integration test by following the instruction [here](https://spark.apache.org/developer-tools.html) (section "Testing K8S") and also [here](https://github.com/apache/spark/blob/master/resource-managers/kubernetes/integration-tests/README.md), but I wasn't able to run it smoothly as it fails to talk with minikube cluster. Maybe it's because my minikube version is too new (I'm using v1.13.1)...? Since I've been trying it for two days and still can't make it work, I decided to submit this PR and hopefully the Jenkins test will pass.

Closes #29924 from huskysun/exec-sidecar-failure.

Authored-by: Shiqi Sun <s.sun@salesforce.com>
Signed-off-by: Holden Karau <hkarau@apple.com>
(cherry picked from commit f659527)
Signed-off-by: Holden Karau <hkarau@apple.com>
Thanks for merging it in! I just created a JIRA account, and my username is
"huskysun" and full name is "Shiqi Sun". Let me know if these work, thanks.
…rs` by default

### What changes were proposed in this pull request?
This PR aims to enable `spark.kubernetes.executor.checkAllContainers` by default from Apache Spark 4.0.0.

### Why are the changes needed?
Since Apache Spark 3.1.0, `spark.kubernetes.executor.checkAllContainers` is supported and useful because the [sidecar pattern](https://kubernetes.io/docs/concepts/workloads/pods/sidecar-containers/) is used in many cases. Also, it prevents user mistakes which forget and ignore the sidecars' failures by always reporting sidecar failures via executor status.
- #29924

### Does this PR introduce _any_ user-facing change?
- This configuration is a no-op when there is no other container.
- This will report user containers' errors correctly when there exist other containers which are provided by the users.

### How was this patch tested?
Both `true` and `false` are covered by our CI test coverage since Apache Spark 3.1.0.

### Was this patch authored or co-authored using generative AI tooling?
No.

Closes #47337 from dongjoon-hyun/SPARK-48887.

Authored-by: Dongjoon Hyun <dhyun@apple.com>
Signed-off-by: Dongjoon Hyun <dhyun@apple.com>
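Since that follow-up flips the default to true in Spark 4.0.0, a user who prefers the old single-container behaviour has to opt out explicitly; a minimal sketch, assuming the property name stays the same:

```scala
import org.apache.spark.SparkConf

// Explicitly restore the pre-4.0 behaviour: only the executor container's state is checked.
val conf = new SparkConf()
  .set("spark.kubernetes.executor.checkAllContainers", "false")
```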