
Docker for AWS

Docker for AWS is a CloudFormation template that configures Docker in swarm mode, running on EC2 instances backed by custom AMIs. This allows you to create a cluster of Docker Engines in swarm mode with a single click. This workshop will take the multi-container application and deploy it on multiple hosts.


What is required for creating this CloudFormation template?

  1. Permissions

  2. SSH key in AWS in the region where you want to deploy (required to access the completed Docker install). Amazon EC2 Key Pair explains how to add an SSH key to your account

  3. AWS account that supports EC2-VPC

Create cluster

Launch Stack to create the CloudFormation template.

docker aws 1
Figure 1. Select template

Click on Next

docker aws 2
Figure 2. Swarm size

Select the number of Swarm manager (3) and worker (5) nodes. This will create an 8-node cluster. Each node will initialize a new EC2 instance. Feel free to alter the number of manager and worker nodes. For example, a more reasonable number for testing may be 1 manager and 3 worker nodes.

Select the SSH key that will be used to access the cluster.

By default, the template is configured to redirect all log statements to CloudWatch. Until #30691 is fixed, the logs will only be available using CloudWatch. Alternatively, you may choose not to redirect logs to CloudWatch. In that case, the usual docker service logs command will work.

Scroll down to select manager and worker properties.

docker aws 3
Figure 3. Swarm manager/worker properties

m4.large (2 vCPU and 8 GB memory) is a good start for manager nodes. m4.xlarge (4 vCPU and 16 GB memory) is a good start for worker nodes. Feel free to choose m3.medium (1 vCPU and 3.75 GB memory) for managers and m3.large (2 vCPU and 7.5 GB memory) for workers in a smaller cluster. Make sure the EC2 instance size is chosen to accommodate the processing and memory needs of the containers that will run there.

Click on Next

docker aws 4
Figure 4. Swarm options

Take the defaults for all the options and click on Next.

docker aws 5
Figure 5. Swarm review
docker aws 6
Figure 6. Swarm IAM accept

Check the box to allow CloudFormation to create IAM resources. Click on Create to create the Swarm cluster.

It will take a few minutes for the CloudFormation template to complete. The output will look like:

docker aws 7
Figure 7. Swarm CloudFormation complete

The EC2 console will show the EC2 instances for the manager and worker nodes.

docker aws 8
Figure 8. EC2 console

Select one of the manager nodes and copy its public IP address:

docker aws 9
Figure 9. Swarm manager

Create an SSH tunnel to the manager using the command below, substituting your key file and the manager's public IP address (docker is the SSH user on these AMIs):

ssh -i ~/.ssh/arun-us-east1.pem -o StrictHostKeyChecking=no -NL localhost:2374:/var/run/docker.sock docker@<manager-public-ip> &
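As a convenience, the -H localhost:2374 flag used in the commands below can be avoided by exporting DOCKER_HOST once the tunnel is up; a minimal sketch:

```shell
# Convenience (assumes the SSH tunnel above is running): point the local
# Docker CLI at the tunnel so each command does not need -H localhost:2374.
export DOCKER_HOST=localhost:2374
# Subsequent commands such as `docker info` or `docker node ls` now go
# through the tunnel to the Swarm manager.
```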

Get more details about the cluster using the command docker -H localhost:2374 info. This shows the output:

Containers: 5
 Running: 4
 Paused: 0
 Stopped: 1
Images: 5
Server Version: 17.09.0-ce
Storage Driver: overlay2
 Backing Filesystem: extfs
 Supports d_type: true
 Native Overlay Diff: true
Logging Driver: awslogs
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: bridge host ipvlan macvlan null overlay
 Log: awslogs fluentd gcplogs gelf journald json-file logentries splunk syslog
Swarm: active
 NodeID: rb6rju2eln0bn80z7lqocjkuy
 Is Manager: true
 ClusterID: t38bbbex5w3bpfmnogalxn5k1
 Managers: 3
 Nodes: 8
  Task History Retention Limit: 5
  Snapshot Interval: 10000
  Number of Old Snapshots to Retain: 0
  Heartbeat Tick: 1
  Election Tick: 3
  Heartbeat Period: 5 seconds
 CA Configuration:
  Expiry Duration: 3 months
  Force Rotate: 0
 Autolock Managers: false
 Root Rotation In Progress: false
 Node Address:
 Manager Addresses:
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 06b9cb35161009dcb7123345749fef02f7cea8e0
runc version: 3f2f8b84a77f73d38244dd690525642a72156c64
init version: 949e6fa
Security Options:
 seccomp
  Profile: default
Kernel Version: 4.9.49-moby
Operating System: Alpine Linux v3.5
OSType: linux
Architecture: x86_64
CPUs: 2
Total Memory: 7.785GiB
Name: ip-172-31-46-94.ec2.internal
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): true
 File Descriptors: 299
 Goroutines: 399
 System Time: 2017-10-07T01:04:00.971903882Z
 EventsListeners: 0
Experimental: true
Insecure Registries:
Live Restore Enabled: false

The list of nodes in the cluster can be seen using docker -H localhost:2374 node ls:

ID                            HOSTNAME                        STATUS              AVAILABILITY        MANAGER STATUS
xdhwdiglfs5wsvkcl0j65wl04     ip-172-31-4-89.ec2.internal     Ready               Active
xbrejk2g7mk9v15hg9xzu3syq     ip-172-31-8-136.ec2.internal    Ready               Active              Leader
bhwc67r78cfqtquri82qdwtnk     ip-172-31-13-38.ec2.internal    Ready               Active
ygxdfloly3x203x9p5wbpk34d     ip-172-31-17-74.ec2.internal    Ready               Active
toyfec889wuqdix6z618mlj85     ip-172-31-26-163.ec2.internal   Ready               Active              Reachable
37lzvgrtlnnq0lnr3cip0fwhw     ip-172-31-28-204.ec2.internal   Ready               Active
k2aprr08b3q28nvze9uv26821     ip-172-31-39-252.ec2.internal   Ready               Active
rb6rju2eln0bn80z7lqocjkuy *   ip-172-31-46-94.ec2.internal    Ready               Active              Reachable
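The MANAGER STATUS column identifies the manager nodes (Leader or Reachable); the rows with an empty value are workers. As an illustrative offline check, the manager count can be extracted from a captured listing like the one above (a local sketch using sample data, not a live cluster):

```shell
# Count manager nodes (MANAGER STATUS of Leader or Reachable) from a
# captured `docker node ls` listing; sample rows taken from the output above.
cat > nodes.txt <<'EOF'
xdhwdiglfs5wsvkcl0j65wl04     ip-172-31-4-89.ec2.internal     Ready   Active
xbrejk2g7mk9v15hg9xzu3syq     ip-172-31-8-136.ec2.internal    Ready   Active   Leader
toyfec889wuqdix6z618mlj85     ip-172-31-26-163.ec2.internal   Ready   Active   Reachable
rb6rju2eln0bn80z7lqocjkuy     ip-172-31-46-94.ec2.internal    Ready   Active   Reachable
EOF
awk '/Leader|Reachable/ { n++ } END { print n }' nodes.txt   # prints 3
```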

Multi-container application to multi-host

Use the Compose file to deploy the multi-container application to this Docker cluster. This will spread the application across multiple hosts.

Create a new directory and cd to it:

mkdir webapp
cd webapp

Create a new Compose definition docker-compose.yml.
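The exact Compose file ships with the workshop materials; based on the service names, images, and ports shown in the outputs below, a minimal sketch might look like the following (the MYSQL_* environment value is an illustrative assumption, not the workshop's actual configuration):

```shell
# Write a minimal docker-compose.yml; image names and ports match the
# service listing shown later, the environment value is illustrative only.
cat > docker-compose.yml <<'EOF'
version: '3'
services:
  web:
    image: arungupta/docker-javaee:dockerconeu17
    ports:
      - "8080:8080"
      - "9990:9990"
  db:
    image: mysql:8
    ports:
      - "3306:3306"
    environment:
      MYSQL_ROOT_PASSWORD: supersecret   # illustrative value
EOF
```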

The command is:

docker -H localhost:2374 stack deploy --compose-file=docker-compose.yml webapp

The output is:

Ignoring deprecated options:

container_name: Setting the container name is not supported.

Creating network webapp_default
Creating service webapp_web
Creating service webapp_db

WildFly Swarm and MySQL services are started on this cluster. Each service has a single container. A new overlay network is created. This allows multiple containers on different hosts to communicate with each other.

Verify service and containers in application

Verify that the WildFly and MySQL services are running using docker -H localhost:2374 service ls:

ID                  NAME                MODE                REPLICAS            IMAGE                                   PORTS
q4d578ime45e        webapp_db           replicated          1/1                 mysql:8                                 *:3306->3306/tcp
qt5qrzp1jpyq        webapp_web          replicated          1/1                 arungupta/docker-javaee:dockerconeu17   *:8080->8080/tcp,*:9990->9990/tcp

The REPLICAS column shows that, for each service, one of one requested replicas is running. It might take a few minutes for the services to reach this state, as the image needs to be downloaded on the host where the container is started.

Let’s find out on which nodes the services are running. Do this for the web application first:

docker -H localhost:2374 service ps webapp_web
ID                  NAME                IMAGE                                   NODE                            DESIRED STATE       CURRENT STATE         ERROR               PORTS
npmunk4ll9f4        webapp_web.1        arungupta/docker-javaee:dockerconeu17   ip-172-31-39-252.ec2.internal   Running             Running 2 hours ago

The NODE column shows the internal IP address of the node where this service is running.

Now, do this for the database:

docker -H localhost:2374 service ps webapp_db
ID                  NAME                IMAGE               NODE                           DESIRED STATE       CURRENT STATE         ERROR               PORTS
vzaji4xdi2qh        webapp_db.1         mysql:8             ip-172-31-17-74.ec2.internal   Running             Running 2 hours ago

The NODE column for this service shows that the service is running on a different node.

More details about the service can be obtained using docker -H localhost:2374 service inspect webapp_web:

[
    {
        "ID": "qt5qrzp1jpyq1ur7qhg55ijf1",
        "Version": {
            "Index": 58
        },
        "CreatedAt": "2017-10-07T01:09:32.519975146Z",
        "UpdatedAt": "2017-10-07T01:09:32.535587602Z",
        "Spec": {
            "Name": "webapp_web",
            "Labels": {
                "com.docker.stack.image": "arungupta/docker-javaee:dockerconeu17",
                "com.docker.stack.namespace": "webapp"
            },
            "TaskTemplate": {
                "ContainerSpec": {
                    "Image": "arungupta/docker-javaee:dockerconeu17@sha256:6a403c35d2ab4442f029849207068eadd8180c67e2166478bc3294adbf578251",
                    "Labels": {
                        "com.docker.stack.namespace": "webapp"
                    },
                    "Privileges": {
                        "CredentialSpec": null,
                        "SELinuxContext": null
                    },
                    "StopGracePeriod": 10000000000,
                    "DNSConfig": {}
                },
                "Resources": {},
                "RestartPolicy": {
                    "Condition": "any",
                    "Delay": 5000000000,
                    "MaxAttempts": 0
                },
                "Placement": {
                    "Platforms": [
                        {
                            "Architecture": "amd64",
                            "OS": "linux"
                        }
                    ]
                },
                "Networks": [
                    {
                        "Target": "b0ig9m1qsjax95tp9m1i2m4yo",
                        "Aliases": [
                            "web"
                        ]
                    }
                ],
                "ForceUpdate": 0,
                "Runtime": "container"
            },
            "Mode": {
                "Replicated": {
                    "Replicas": 1
                }
            },
            "UpdateConfig": {
                "Parallelism": 1,
                "FailureAction": "pause",
                "Monitor": 5000000000,
                "MaxFailureRatio": 0,
                "Order": "stop-first"
            },
            "RollbackConfig": {
                "Parallelism": 1,
                "FailureAction": "pause",
                "Monitor": 5000000000,
                "MaxFailureRatio": 0,
                "Order": "stop-first"
            },
            "EndpointSpec": {
                "Mode": "vip",
                "Ports": [
                    {
                        "Protocol": "tcp",
                        "TargetPort": 8080,
                        "PublishedPort": 8080,
                        "PublishMode": "ingress"
                    },
                    {
                        "Protocol": "tcp",
                        "TargetPort": 9990,
                        "PublishedPort": 9990,
                        "PublishMode": "ingress"
                    }
                ]
            }
        },
        "Endpoint": {
            "Spec": {
                "Mode": "vip",
                "Ports": [
                    {
                        "Protocol": "tcp",
                        "TargetPort": 8080,
                        "PublishedPort": 8080,
                        "PublishMode": "ingress"
                    },
                    {
                        "Protocol": "tcp",
                        "TargetPort": 9990,
                        "PublishedPort": 9990,
                        "PublishMode": "ingress"
                    }
                ]
            },
            "Ports": [
                {
                    "Protocol": "tcp",
                    "TargetPort": 8080,
                    "PublishedPort": 8080,
                    "PublishMode": "ingress"
                },
                {
                    "Protocol": "tcp",
                    "TargetPort": 9990,
                    "PublishedPort": 9990,
                    "PublishMode": "ingress"
                }
            ],
            "VirtualIPs": [
                {
                    "NetworkID": "i41xh4kmuwl5vc47h536l3mxs",
                    "Addr": ""
                },
                {
                    "NetworkID": "b0ig9m1qsjax95tp9m1i2m4yo",
                    "Addr": ""
                }
            ]
        }
    }
]

Logs for the service are redirected to CloudWatch and thus cannot be seen using docker service logs. This will be fixed with #30691. Let’s view the logs using CloudWatch Logs.

docker aws 10
Figure 10. CloudWatch log group

Select the log group:

docker aws 11
Figure 11. CloudWatch log stream

Pick a log stream to see the log statements from WildFly Swarm:

docker aws 12
Figure 12. CloudWatch application log stream

Access application

The application is accessed using the manager’s public IP address on port 8080. By default, port 8080 is not open.

In the EC2 console, select an EC2 instance with the name Docker-Manager, then click on Docker-Managerxxx under Security groups. Click on Inbound, Edit, Add Rule, and create a rule to enable TCP traffic on port 8080.

docker aws 13
Figure 13. Open port 8080 in Docker manager

Click on Save to save the rules.

Now, the application is accessible using curl -v http://<manager-public-ip>:8080/resources/employees (substitute the manager’s public IP address) and shows output:

*   Trying
* Connected to ( port 8080 (#0)
> GET /resources/employees HTTP/1.1
> Host:
> User-Agent: curl/7.51.0
> Accept: */*
< HTTP/1.1 200 OK
< Connection: keep-alive
< Content-Type: application/xml
< Content-Length: 478
< Date: Sat, 07 Oct 2017 02:53:11 GMT
* Curl_http_done: called premature == 0
* Connection #0 to host left intact
<?xml version="1.0" encoding="UTF-8" standalone="yes"?><collection><employee><id>1</id><name>Penny</name></employee><employee><id>2</id><name>Sheldon</name></employee><employee><id>3</id><name>Amy</name></employee><employee><id>4</id><name>Leonard</name></employee><employee><id>5</id><name>Bernadette</name></employee><employee><id>6</id><name>Raj</name></employee><employee><id>7</id><name>Howard</name></employee><employee><id>8</id><name>Priya</name></employee></collection>
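The returned XML payload can be spot-checked offline; for example, a small sketch that extracts the employee names from a locally saved copy of the response above:

```shell
# Extract employee names from the captured response; sample payload taken
# from the curl output above.
cat > employees.xml <<'EOF'
<?xml version="1.0" encoding="UTF-8" standalone="yes"?><collection><employee><id>1</id><name>Penny</name></employee><employee><id>2</id><name>Sheldon</name></employee><employee><id>3</id><name>Amy</name></employee><employee><id>4</id><name>Leonard</name></employee><employee><id>5</id><name>Bernadette</name></employee><employee><id>6</id><name>Raj</name></employee><employee><id>7</id><name>Howard</name></employee><employee><id>8</id><name>Priya</name></employee></collection>
EOF
# Pull out each <name> element and strip the tags; prints the eight names,
# Penny through Priya, one per line.
grep -o '<name>[^<]*</name>' employees.xml | sed 's/<[^>]*>//g'
```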

Shutdown application

Shutdown the application using the command docker -H localhost:2374 stack rm webapp:

Removing service webapp_db
Removing service webapp_web
Removing network webapp_default

This stops the containers in each service and removes the services. It also deletes any networks that were created as part of this application.

Shutdown cluster

The Docker cluster can be shut down by deleting the stack created by CloudFormation:

docker aws 14
Figure 14. Delete CloudFormation template