Job instead of deployment for ephemeral Spark Clusters #238

Open
4n4nd opened this issue Jul 10, 2018 · 2 comments
4n4nd commented Jul 10, 2018

I think a Job would be a better object type than a Deployment for ephemeral Spark clusters on OpenShift: once the application finishes, the Job terminates and stops occupying resources, and it is also easier to schedule runs as CronJobs.
Currently, if I create a Job it does not create an ephemeral Spark cluster; instead it creates a shared cluster which is not deleted when the Job finishes. A rough sketch of the kind of Job I have in mind is below.
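
For illustration only, a minimal sketch of such a Job (the name and image are placeholders, not something this project's tooling generates):

```sh
# Hypothetical sketch: run the Spark application as a Kubernetes Job so it
# terminates and frees its resources once the driver finishes.
oc create -f - <<EOF
apiVersion: batch/v1
kind: Job
metadata:
  name: spark-app-run              # placeholder name
spec:
  backoffLimit: 1
  template:
    spec:
      containers:
      - name: driver
        image: my-spark-app:latest # placeholder image containing the driver
      restartPolicy: Never         # do not restart once the application exits
EOF
```

The same pod template could be wrapped in a CronJob to get scheduled runs.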

elmiko (Contributor) commented Jul 10, 2018

I think this is a good request, although we will probably need to support both styles of application: Deployment and Job.

I wonder if there is some way we can get the tooling to pick up either a Deployment or a Job for the ephemeral cluster management?

Also, I imagine this will become much easier to manage depending on how the CRD-based implementations work out.

4n4nd (Author) commented Jul 10, 2018

A simple workaround for this, as suggested by elmiko: https://github.com/4n4nd/oc_train_pipeline/blob/master/delete_spark.sh
Run this script after you are done with your Spark cluster (I run it after sc.stop()); it forcefully deletes the cluster, so name your cluster carefully if you are naming it manually. A rough sketch of what it does is below.
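
The gist of the workaround (illustrative sketch only; the real script is at the link above, and the label selector and cluster name here are placeholder assumptions):

```sh
#!/bin/bash
# Illustrative sketch only -- see delete_spark.sh (linked above) for the actual script.
# Assumes the cluster's objects can be selected by a label carrying the cluster name.
CLUSTER_NAME=${1:-my-spark-cluster}   # placeholder; pass the name you gave your cluster

# Force-delete everything belonging to that cluster, which is why the name matters.
oc delete dc,svc,pod -l cluster="${CLUSTER_NAME}" --ignore-not-found
```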
