Docker images to:
- Set up a standalone Apache Spark cluster running one Spark master and multiple Spark workers
- Build Spark applications in Java, Scala or Python to run on a Spark cluster
Currently supported versions:
- Spark 1.5.1 for Hadoop 2.6 and later
- Spark 1.6.2 for Hadoop 2.6 and later
- Spark 2.0.0 for Hadoop 2.7+ with Hive support and OpenJDK 7
- Spark 2.0.0 for Hadoop 2.7+ with Hive support and OpenJDK 8
- Spark 2.0.1 for Hadoop 2.7+ with OpenJDK 8
- Spark 2.0.2 for Hadoop 2.7+ with OpenJDK 8
- Spark 2.1.0 for Hadoop 2.7+ with OpenJDK 8
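Each supported version corresponds to an image tag. As a sketch, assuming the tags follow the `<spark-version>-hadoop<hadoop-version>` pattern used in the examples below, pulling a specific version looks like:

```sh
# Pull a specific Spark master/worker image by tag
# (tag names assumed from the compose and docker run examples below)
docker pull bde2020/spark-master:2.1.0-hadoop2.7
docker pull bde2020/spark-worker:2.1.0-hadoop2.7
```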
## Using Docker Compose
Add the following services to your `docker-compose.yml` to integrate a Spark master and Spark worker in your BDE pipeline:
```yaml
master:
  image: bde2020/spark-master:1.6.2-hadoop2.6
  hostname: spark-master
  environment:
    INIT_DAEMON_STEP: setup_spark
worker:
  image: bde2020/spark-worker:1.6.2-hadoop2.6
  links:
    - "master:spark-master"
```
Make sure to fill in the `INIT_DAEMON_STEP` as configured in your pipeline.
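With those services in place, bringing the cluster up is the usual Compose workflow (a minimal sketch; the service names are the ones from the snippet above):

```sh
# Start the master and worker defined above in the background
docker-compose up -d

# Optionally scale out to additional workers
docker-compose scale worker=3
```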
## Running Docker containers without the init daemon
To start a Spark master:

```sh
docker run --name spark-master -h spark-master -e ENABLE_INIT_DAEMON=false -d bde2020/spark-master:1.6.2-hadoop2.6
```
To start a Spark worker:

```sh
docker run --name spark-worker-1 --link spark-master:spark-master -e ENABLE_INIT_DAEMON=false -d bde2020/spark-worker:1.6.2-hadoop2.6
```
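Additional workers can be started the same way; as a sketch (the container name `spark-worker-2` is just an example), each one only needs a unique name and a link to the master:

```sh
# Start a second worker attached to the same master
docker run --name spark-worker-2 --link spark-master:spark-master -e ENABLE_INIT_DAEMON=false -d bde2020/spark-worker:1.6.2-hadoop2.6
```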
## Launch a Spark application
Building and running your Spark application on top of the Spark cluster is as simple as extending a template Docker image. Check the template's README for further documentation.
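As an illustration only (the template image name, tag, and environment variable names are assumptions; the template's README is authoritative), a Dockerfile for a Java application extending such a template could look like this:

```dockerfile
# Hypothetical example: extend a Spark application template image
# (image name, tag, and ENV variable names are assumptions; check the template's README)
FROM bde2020/spark-java-template:1.6.2-hadoop2.6

# Name of the application jar produced by your build, and the main class to run
ENV SPARK_APPLICATION_JAR_NAME my-app-1.0-SNAPSHOT
ENV SPARK_APPLICATION_MAIN_CLASS com.example.MyApp

# Arguments passed to the application on spark-submit
ENV SPARK_APPLICATION_ARGS "arg1 arg2"
```

Building this image and running it against the cluster started above then submits the application to the Spark master.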