Not able to launch a spark operator using standard docker #27
Comments
It needs to run on a cluster with alpha features enabled to be able to use the initializer. See https://kubernetes.io/docs/admin/extensible-admission-controllers/#enable-initializers-alpha-feature.
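Per the Kubernetes doc linked above, enabling the alpha Initializers feature means turning on both the admission plugin and the alpha API group on the apiserver. A rough sketch (the exact flag set and plugin list depend on your Kubernetes version and existing configuration):

```shell
# Sketch: kube-apiserver settings for the alpha Initializers feature.
# The plugin list here is illustrative; keep whatever plugins your
# cluster already enables and add Initializers to them.
kube-apiserver \
  --admission-control=Initializers,NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota \
  --runtime-config=admissionregistration.k8s.io/v1alpha1=true
```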
Li, appreciate the quick response. Is there any other way we can run it if the alpha feature is not turned on?
I can add a flag to allow disabling the initializer so you can run on a non-alpha cluster.
Perfect, thanks. Appreciate it.
Ref: #28.
Thanks for the quick turn-around. One more question: how can we run an application when the application jars are not packaged into the driver and/or executor images? E.g., with the spark-k8s project I can specify, In submission.go I can see a note to the effect that "// Note that when the controller submits the application, it expects that all dependencies are local". What is the recommended approach here? E.g., a resource staging server, or a persistent volume claim with the jars on it (if so, how would it be mounted at runtime)? Appreciate the help!
What we often do is stage jars/files to, e.g., a GCS bucket, (optionally) make them public, and list the https URLs in the
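The staging step described above might look like the following sketch; the bucket name and jar path are placeholders, not values from this project:

```shell
# Upload the application jar to a GCS bucket (names are placeholders).
gsutil cp target/spark-app.jar gs://my-bucket/jars/spark-app.jar

# Optionally make it world-readable so drivers/executors can fetch it over https.
gsutil acl ch -u AllUsers:R gs://my-bucket/jars/spark-app.jar

# The resulting https URL would then be listed among the application's
# remote dependencies:
#   https://storage.googleapis.com/my-bucket/jars/spark-app.jar
```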
BTW: I actually plan to add a command-line tool, say,
Ref: #15 tracks work for supporting non-container-local dependencies.
@devtagare Can this be closed now? |
Yes. Thanks for the quick turnaround. |
Here's the full stack trace:
I0117 19:27:35.500409 1 main.go:71] Checking the kube-dns add-on
I0117 19:27:35.523940 1 main.go:76] Starting the Spark operator
I0117 19:27:35.524650 1 controller.go:136] Starting the SparkApplication controller
I0117 19:27:35.524664 1 controller.go:138] Creating the CustomResourceDefinition sparkapplications.sparkoperator.k8s.io
W0117 19:27:35.539429 1 crd.go:69] CustomResourceDefinition sparkapplications.sparkoperator.k8s.io already exists
I0117 19:27:35.539489 1 controller.go:144] Starting the SparkApplication informer
I0117 19:27:35.639770 1 controller.go:151] Starting the workers of the SparkApplication controller
I0117 19:27:35.640005 1 spark_pod_monitor.go:109] Starting the Spark Pod monitor
I0117 19:27:35.640012 1 spark_pod_monitor.go:112] Starting the Pod informer of the Spark Pod monitor
I0117 19:27:35.640065 1 initializer.go:112] Starting the Spark Pod initializer
I0117 19:27:35.640073 1 initializer.go:164] Adding the InitializerConfiguration spark-pod-initializer-config
I0117 19:27:35.640194 1 submission_runner.go:58] Starting the spark-submit runner
F0117 19:27:35.645496 1 main.go:96] failed to create InitializerConfiguration spark-pod-initializer-config: the server could not find the requested resource (post initializerconfigurations.admissionregistration.k8s.io)