Move away from the webhook for driver/executor pod configurations #1176
Comments
@liyinan926 - Do you have an example somewhere of how to use a pod template in a SparkApplication? Would be really cool. Thanks!
Hi, I started working on this issue and came up with a minimal working implementation in #1296. My approach builds on Spark's [pod template](https://spark.apache.org/docs/latest/running-on-kubernetes.html#pod-template) support. I didn't include the regenerated CRDs in the PR. There are further items to decide before moving forward.
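For context, the pod template feature linked above takes a plain Kubernetes Pod manifest; Spark merges its own settings (image, resource requests, etc.) on top of it. A minimal sketch of what such a template might look like (the label and toleration below are illustrative, not taken from the PR):

```yaml
# pod_template.yaml -- a plain Kubernetes Pod manifest.
# Spark overlays its own generated configuration on top of this template.
apiVersion: v1
kind: Pod
metadata:
  labels:
    team: data-platform        # illustrative label, not from the PR
spec:
  tolerations:                 # illustrative toleration, not from the PR
    - key: "spark-only"
      operator: "Exists"
      effect: "NoSchedule"
```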
@liyinan926 could you take a look?
For anyone else who stumbles upon this issue, here is how I was able to do it:
```yaml
sparkConf:
  spark.kubernetes.driver.podTemplateFile: "/etc/templates/pod_template.yaml"
```
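A minimal sketch of how that setting might fit into a full `SparkApplication` spec (the application name and template path here are hypothetical):

```yaml
# Sketch only -- names and paths are assumptions, not from this thread.
apiVersion: sparkoperator.k8s.io/v1beta2
kind: SparkApplication
metadata:
  name: my-app
spec:
  sparkConf:
    spark.kubernetes.driver.podTemplateFile: "/etc/templates/pod_template.yaml"
```

Note that, per the Spark docs, the template file is read by `spark-submit` on the client side, so `/etc/templates/pod_template.yaml` must exist inside the pod that runs the submission (the operator pod), not inside the driver pod.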
@elihschiff does the spark-operator helm chart have any support for mounting a ConfigMap to the operator pod? I don't see anything in the values or documentation, and I definitely don't want to have to manually edit the deployment in k8s.
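If the chart doesn't expose this, one workaround is to patch the operator Deployment directly to mount the ConfigMap holding the template. A sketch, assuming a ConfigMap named `spark-pod-templates` and a container named `spark-operator` (both names are hypothetical):

```yaml
# Deployment patch fragment -- ConfigMap and container names are assumptions.
spec:
  template:
    spec:
      containers:
        - name: spark-operator
          volumeMounts:
            - name: pod-templates
              mountPath: /etc/templates   # matches the podTemplateFile path
              readOnly: true
      volumes:
        - name: pod-templates
          configMap:
            name: spark-pod-templates
```

This could be applied with `kubectl patch deployment <operator-deployment> --patch-file patch.yaml`, at the cost of drifting from the helm-managed state.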
Sorry, this was 2 years ago and I don't remember exactly what I did, but I wouldn't be surprised if I had modified the helm chart to get it working.
Given a number of occurrences of issues with the webhook, which stops working after some time due to certificate issues, I'm thinking that the right direction in the long term is to move away from it. For anyone who's already on Spark 3.0, the pod template support for driver/executor pods may be the right way to go. The operator should be able to translate driver and executor configs in `SparkApplication`s into driver and executor pod templates and use those templates when submitting applications. Creating this issue to track the work.