Not able to launch a spark operator using standard docker #27

Closed
devtagare opened this issue Jan 17, 2018 · 11 comments

@devtagare

Here's the full stack trace,

I0117 19:27:35.500409 1 main.go:71] Checking the kube-dns add-on
I0117 19:27:35.523940 1 main.go:76] Starting the Spark operator
I0117 19:27:35.524650 1 controller.go:136] Starting the SparkApplication controller
I0117 19:27:35.524664 1 controller.go:138] Creating the CustomResourceDefinition sparkapplications.sparkoperator.k8s.io
W0117 19:27:35.539429 1 crd.go:69] CustomResourceDefinition sparkapplications.sparkoperator.k8s.io already exists
I0117 19:27:35.539489 1 controller.go:144] Starting the SparkApplication informer
I0117 19:27:35.639770 1 controller.go:151] Starting the workers of the SparkApplication controller
I0117 19:27:35.640005 1 spark_pod_monitor.go:109] Starting the Spark Pod monitor
I0117 19:27:35.640012 1 spark_pod_monitor.go:112] Starting the Pod informer of the Spark Pod monitor
I0117 19:27:35.640065 1 initializer.go:112] Starting the Spark Pod initializer
I0117 19:27:35.640073 1 initializer.go:164] Adding the InitializerConfiguration spark-pod-initializer-config
I0117 19:27:35.640194 1 submission_runner.go:58] Starting the spark-submit runner
F0117 19:27:35.645496 1 main.go:96] failed to create InitializerConfiguration spark-pod-initializer-config: the server could not find the requested resource (post initializerconfigurations.admissionregistration.k8s.io)

@liyinan926
Collaborator

It needs to run on a cluster with alpha features enabled to be able to use the initializer. See https://kubernetes.io/docs/admin/extensible-admission-controllers/#enable-initializers-alpha-feature.
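For context, the linked page describes enabling the alpha feature roughly as follows. This is a sketch of the kube-apiserver configuration as documented for Kubernetes of that era; verify the exact flag names against your cluster version:

```shell
# Enable the Initializers admission plugin and the alpha
# admissionregistration.k8s.io/v1alpha1 API on the API server.
kube-apiserver \
  --admission-control=...,Initializers \
  --runtime-config=admissionregistration.k8s.io/v1alpha1
```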

@devtagare
Author

Li, I appreciate the quick response. Is there any other way we can run it if the alpha feature is not turned on?

@liyinan926
Collaborator

I can add a flag to allow disabling the initializer so you can run on a non-alpha cluster.

@devtagare
Author

Perfect, thanks. Appreciate it.

@liyinan926
Collaborator

Ref: #28. enable-initializer is true by default; you can set it to false in spark-operator.yaml.
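For anyone following along, a minimal sketch of how that flag might be passed in spark-operator.yaml. The surrounding Deployment layout here is an assumption for illustration; only the -enable-initializer=false argument comes from the comment above:

```yaml
# Hypothetical excerpt of the operator Deployment in spark-operator.yaml.
spec:
  template:
    spec:
      containers:
      - name: sparkoperator
        args:
        - -enable-initializer=false   # skip creating the InitializerConfiguration
```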

@devtagare
Author

Thanks for the quick turn-around.

One more question: how can we run an application when the application jars are not packaged into the driver and/or executor images?

E.g., with the spark-k8s project I can specify:
init-container:
image: "path to my init container image"

In submission.go I can see a note to the effect that "// Note that when the controller submits the application, it expects that all dependencies are local"

What is the recommended approach here? E.g., a resource staging server, or a persistent volume claim with the jars on it (if so, how would it be mounted at runtime?).

Appreciate the help!

@liyinan926
Collaborator

What we often do is stage jars/files in, e.g., a GCS bucket, (optionally) make them public, and list the https URLs in the deps.jars and deps.files sections. If you don't want to make the dependencies on GCS public, you will need a custom init-container image that downloads them from GCS. See https://gist.github.com/liyinan926/f9e81f7b54d94c05171a663345eb58bf for an example init-container image.
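To illustrate, a SparkApplication spec with remote dependencies could look like the sketch below. The deps.jars and deps.files fields follow the comment above; the apiVersion, class name, bucket, and file names are hypothetical:

```yaml
# Hypothetical SparkApplication using https URLs for dependencies.
apiVersion: sparkoperator.k8s.io/v1alpha1   # assumed version for this era of the operator
kind: SparkApplication
metadata:
  name: spark-example
spec:
  mainClass: com.example.SparkExample       # hypothetical
  mainApplicationFile: https://storage.googleapis.com/my-bucket/spark-example.jar
  deps:
    jars:
    - https://storage.googleapis.com/my-bucket/dep.jar
    files:
    - https://storage.googleapis.com/my-bucket/conf.properties
```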

@liyinan926
Collaborator

BTW: I actually plan to add a command-line tool, say sparkctl, that supports staging local dependencies to, for example, the resource staging server.

@liyinan926
Collaborator

Ref: #15, which tracks work for supporting non-container-local dependencies.

@liyinan926
Collaborator

@devtagare Can this be closed now?

@devtagare
Author

Yes. Thanks for the quick turnaround.

ringtail added a commit to ringtail/spark-on-k8s-operator that referenced this issue on Aug 12, 2021: "change default status update policy" (…us-enhancemenet)