diff --git a/README.md b/README.md
index eb26379..7ea1927 100644
--- a/README.md
+++ b/README.md
@@ -65,15 +65,29 @@ Docker containers provide a portable and repeatable method for deploying the clu
 ## Usage
 
 ### Option 1. Mesos-mastered Spark Jobs
-0. Install Mesos with Docker Containerizer: Install a Mesos cluster configured to use the Docker containerizer, which enables the Mesos slaves to execute Spark tasks within a Docker container. The following script uses the Python library [Fabric](http://www.fabfile.org/) to install and configure a cluster according to [How To Configure a Production-Ready Mesosphere Cluster on Ubuntu 14.04](https://www.digitalocean.com/community/tutorials/how-to-configure-a-production-ready-mesosphere-cluster-on-ubuntu-14-04):
- - Update IP Addresses of Mesos nodes in ```mesos/fabfile.py```
+1. Install Mesos with Docker Containerizer and Docker Images: Install a Mesos cluster configured to use the Docker containerizer, which enables the Mesos slaves to execute Spark tasks within a Docker container.
+
+ A. End-to-end Installation: The script ```mesos/1-setup-mesos-cluster.sh``` uses the Python library [Fabric](http://www.fabfile.org/) to install and configure a cluster according to [How To Configure a Production-Ready Mesosphere Cluster on Ubuntu 14.04](https://www.digitalocean.com/community/tutorials/how-to-configure-a-production-ready-mesosphere-cluster-on-ubuntu-14-04). After installation, it also pulls the Docker images that will execute Spark tasks. To use:
+ - Update IP Addresses of Mesos nodes in ```mesos/fabfile.py```. Find instances to change with:
+        grep 'ip-address' mesos/fabfile.py
- - Install/configure the cluster
-        mesos/1-setup-mesos-cluster.sh
+ - Install/configure the cluster:
+        ./mesos/1-setup-mesos-cluster.sh
 Optional: ```./1-build.sh``` if you prefer instead to build the docker images from scratch (rather than the script pulling from Docker Hub)
-1. Run the client container on a client host (replace 'username-for-sparkjobs' and 'mesos-master-fqdn' below): ./5-run-spark-mesos-dockerworker-ipython.sh username-for-sparkjobs mesos://mesos-master-fqdn:5050
+ B. Manual Installation: Follow the general steps in ```mesos/1-setup-mesos-cluster.sh``` to manually install:
+
+ - Install mesosphere on masters
+ - Install mesos on slaves
+ - Configure zookeeper on all nodes
+ - Configure and start masters
+ - Configure and start slaves
+ - Load docker images:
+        docker pull lab41/spark-mesos-dockerworker-ipython
+        docker pull lab41/spark-mesos-mesosworker-ipython
+
+
+2. Run the client container on a client host (replace 'username-for-sparkjobs' and 'mesos-master-fqdn' below):
+
+        ./5-run-spark-mesos-dockerworker-ipython.sh username-for-sparkjobs mesos://mesos-master-fqdn:5050
 *Note: the client container will create username-for-sparkjobs when started, providing the ability to submit Spark jobs as a specific user and/or deploy different IPython servers for different users.
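As a quick sanity check of the commands the README hunk above adds, the two image pulls and the step-2 launch command can be composed in Python. This is a sketch only: the image names, script name, and `mesos://` URL format come from the diff, while `pull_commands` and `client_run_command` are illustrative helpers written here, not functions in the repository.

```python
# Sketch: compose the shell commands described in the README hunk.
# Image names, script name, and URL format come from the diff; these
# helper functions are illustrative, not part of the repository.

WORKER_IMAGES = [
    "lab41/spark-mesos-dockerworker-ipython",
    "lab41/spark-mesos-mesosworker-ipython",
]

def pull_commands(images):
    """One `docker pull` command per worker image (manual-install step)."""
    return ["docker pull {}".format(image) for image in images]

def client_run_command(user, master_fqdn):
    """The client-container launch command from step 2."""
    return ("./5-run-spark-mesos-dockerworker-ipython.sh "
            "{} mesos://{}:5050".format(user, master_fqdn))

if __name__ == "__main__":
    for cmd in pull_commands(WORKER_IMAGES):
        print(cmd)
    print(client_run_command("username-for-sparkjobs", "mesos-master-fqdn"))
```

Running the module prints the exact commands a manual install would execute, which makes it easy to eyeball the Mesos master port (5050) and image names before touching the cluster.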
diff --git a/mesos/1-setup-mesos-cluster.sh b/mesos/1-setup-mesos-cluster.sh
index ed954e9..d5a755f 100644
--- a/mesos/1-setup-mesos-cluster.sh
+++ b/mesos/1-setup-mesos-cluster.sh
@@ -24,3 +24,8 @@ fab --parallel \
 fab --parallel \
     --roles=slaves \
     configure_and_start_slaves
+
+# load docker images
+fab --parallel \
+    --roles=slaves \
+    pull_docker_images
diff --git a/mesos/fabfile.py b/mesos/fabfile.py
index cc0d706..1a57e6b 100644
--- a/mesos/fabfile.py
+++ b/mesos/fabfile.py
@@ -102,6 +102,8 @@ def configure_and_start_masters():
 def configure_and_start_slaves():
     execute(start_slaves)
+
+def pull_docker_images():
     execute(docker_pull_containers)
 
 
 def docker_restart():
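The fabfile hunk above moves `execute(docker_pull_containers)` out of `configure_and_start_slaves` and into a new `pull_docker_images` task, so image pulls can be re-run without restarting the slaves. The body of `docker_pull_containers` is not shown in this diff; below is a minimal, hypothetical sketch of what such a helper could do, using plain `subprocess` locally where the real fabfile would run the commands on each slave through Fabric.

```python
import subprocess

# Image names come from the README hunk; everything else in this sketch is
# hypothetical and stands in for the fabfile's real docker_pull_containers.
WORKER_IMAGES = [
    "lab41/spark-mesos-dockerworker-ipython",
    "lab41/spark-mesos-mesosworker-ipython",
]

def docker_pull_commands(images=WORKER_IMAGES):
    """Build one `docker pull` argv per worker image."""
    return [["docker", "pull", image] for image in images]

def docker_pull_containers(dry_run=True):
    """Pull every worker image; dry_run=True only prints the commands."""
    for argv in docker_pull_commands():
        if dry_run:
            print(" ".join(argv))
        else:
            # In the actual fabfile this would be Fabric's run() on a slave.
            subprocess.check_call(argv)
```

Factoring the task out this way also matches the new step in `1-setup-mesos-cluster.sh`, which invokes `pull_docker_images` across the `slaves` role after the slaves are started.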