[SPARK-44050][K8S] Add retry config when creating Kubernetes resources #45911
Conversation
Thank you for making a PR, but I'm not sure this is the right layer to do it. To me, it sounds like you are hitting a K8s cluster issue or a K8s client library issue. Could you elaborate on your environment and the error message, @liangyouze?
When creating Kubernetes resources, we occasionally encounter situations where resources such as the ConfigMap cannot be successfully created, leaving the driver pod stuck in the 'ContainerCreating' state. Therefore, it is necessary to add a verification mechanism after creating the other resources to ensure that they were actually created.
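To illustrate the kind of check being proposed, here is a minimal sketch assuming the fabric8 KubernetesClient that Spark uses; the helper name, retry parameters, and error handling are illustrative and not the actual patch:

```scala
import io.fabric8.kubernetes.api.model.HasMetadata
import io.fabric8.kubernetes.client.{KubernetesClient, KubernetesClientException}

// Illustrative only: create a resource, then read it back from the API server
// and retry until it is visible or the attempts run out.
def createAndVerify(
    client: KubernetesClient,
    resource: HasMetadata,
    namespace: String,
    maxAttempts: Int = 3,
    backoffMillis: Long = 1000L): Unit = {
  var attempt = 0
  var visible = false
  while (!visible && attempt < maxAttempts) {
    attempt += 1
    try {
      client.resource(resource).inNamespace(namespace).create()
    } catch {
      // The object may already exist from a previous attempt; a real
      // implementation would distinguish 409 conflicts from other failures.
      case _: KubernetesClientException =>
    }
    visible = client.resource(resource).inNamespace(namespace).get() != null
    if (!visible) {
      Thread.sleep(backoffMillis * attempt)
    }
  }
  if (!visible) {
    throw new IllegalStateException(
      s"${resource.getKind}/${resource.getMetadata.getName} was not visible " +
        s"after $maxAttempts attempts")
  }
}
```

The read-back against the API server is what distinguishes this from simply trusting that the create call returned without throwing.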
It's the same as described in SPARK-44050; I've encountered the same issue. When creating resources such as ConfigMaps, this situation occasionally occurs: the code does not throw any exceptions, but the ConfigMap is not actually created, so the driver pod remains in the ContainerCreating state and cannot proceed to the next step. This may be a Kubernetes bug or a feature (as far as I know, Kubernetes has some rate-limiting policies that may cause certain requests to be dropped, but I'm not sure whether that is related), but in any case, Spark should not get stuck because of it.
It seems you hit a K8s bug or a usage issue.
Same problem when using the Spark operator; it's strange that the code does not throw anything when the ConfigMap is not created.
Hi @liangyouze, I met the same issue. I'd like to know whether the problem occurs when using the Spark operator or when using spark-submit directly. Is there anything in the console output?
When using spark-submit, there is no error output in the console, and the client shows the driver pod stuck in the ContainerCreating state indefinitely.
Maybe I met a different issue. I use spark-submit in the Spark operator pod with k8s mode, but sometimes the driver pod gets stuck in the ContainerCreating state due to the missing ConfigMap, and the console output shows 'Killed'. I added some logging in KubernetesClientApplication.scala like this:
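(The exact snippet is not preserved in the thread; judging from the follow-up reply that quotes the message 'after other resource', it was roughly of this shape, with the existing creation call elided:)

```scala
// Rough reconstruction, not the original snippet: log lines bracketing the
// step that creates the driver's pre-resources (ConfigMap, service, ...).
logInfo("before creating other resources")
// ... existing code that submits the pre-resources to the API server ...
logInfo("after other resource")
```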
and the log did not show up.
"It seems that the reason 'after other resource' is not printed out is due to the client being OOM and exiting. Have you tried increasing the memory of the client pod to a sufficient size, so as to prevent the OOMKilled phenomenon?" |
We saw the same phenomenon of the ConfigMap not being created, but possibly for a different reason.
We're closing this PR because it hasn't been updated in a while. This isn't a judgement on the merit of the PR in any way. It's just a way of keeping the PR queue manageable.
What changes were proposed in this pull request?
Add a retry config when creating Kubernetes resources.
Why are the changes needed?
When creating Kubernetes resources, we occasionally encounter situations where resources such as the ConfigMap cannot be successfully created, leaving the driver pod stuck in the 'ContainerCreating' state. Therefore, it is necessary to add a verification mechanism after creating the other resources to ensure that they were actually created.
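As a rough illustration of how such a retry could be made configurable (the config name, default, and placement below are hypothetical, not necessarily what this patch adds), a Spark config entry could drive the number of attempts:

```scala
import org.apache.spark.internal.config.ConfigBuilder

// Hypothetical config entry; the key and default value are illustrative only.
val KUBERNETES_RESOURCE_CREATION_MAX_ATTEMPTS =
  ConfigBuilder("spark.kubernetes.resource.creation.maxAttempts")
    .doc("Maximum number of attempts when creating driver pre-resources " +
      "such as ConfigMaps before giving up.")
    .intConf
    .createWithDefault(3)
```

A value read from this entry could then bound the create-and-verify loop sketched earlier in the conversation.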
Does this PR introduce any user-facing change?
No
How was this patch tested?
add new tests
Was this patch authored or co-authored using generative AI tooling?
No