Include run-spark-pi-local.sh to demonstrate questions
DO NOT RUN THIS ON A PRODUCTION SYSTEM. Use, for example, docker.app or minikube with a clean local k8s server.

Add examples/run-spark-pi-local.sh, which runs the spark-pi example on a local k8s cluster from a novice user's perspective. The script installs Helm and the spark-operator, runs spark-pi, displays its status, then tears down the spark-operator and Helm. Ideally a proper script would use synchronization primitives rather than calls to sleep.

The script was created to ask the following questions:

- How can you launch a Spark application and then reliably wait for it to finish? This needs to be race free.
- For a restart=Never application, which application states indicate completion: just COMPLETED or FAILED? Is there documentation about the application states?
- How can you know whether the application succeeded or failed? Does COMPLETED imply success, as the driver pod exit code should?
- Why does the SparkPi example show all executor states as FAILED? I've heard that this happens if sys.exit(0) is not called, which supposedly should be avoided. Why doesn't spark.stop() cause the executors to exit cleanly?
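The first question could be sketched roughly as below. This is only an assumption-laden illustration, not the script's actual logic: it presumes `.status.applicationState.state` is the field to watch and that COMPLETED and FAILED are the only terminal states for a restart=Never application, which is exactly one of the open questions. `wait_for_terminal_state` and `get_state` are hypothetical names.

```shell
# Hypothetical sketch of a wait loop for a SparkApplication.
# In a real script, the state-reading command passed as $1 would be
# something like:
#   kubectl get sparkapplication/spark-pi \
#     -o jsonpath='{.status.applicationState.state}'
wait_for_terminal_state() {
  # $1: a command that prints the current application state
  _get_state=$1
  while :; do
    _state=$("$_get_state")
    case "$_state" in
      COMPLETED) return 0 ;;   # assumed terminal success state
      FAILED)    return 1 ;;   # assumed terminal failure state
      *)         sleep 5 ;;    # not terminal yet; poll again
    esac
  done
}
```

Note that this still polls with sleep, which the commit message argues against. If recent kubectl is available, `kubectl wait --for=jsonpath='{.status.applicationState.state}'=COMPLETED sparkapplication/spark-pi --timeout=10m` (jsonpath support landed in kubectl 1.23) uses a watch instead of polling, though it can only wait for one target state at a time, so FAILED would still need separate handling.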