An example of using Puppet Enterprise to drive Consul and Docker, and of using Consul to drive Puppet. Specifically, this example:
- Sets up a 4-node Puppet Enterprise cluster
- Installs Docker Swarm on the agent nodes
- Installs a Consul master and a Swarm manager on the `first` node
- Installs Docker and runs several application containers on the `second` and `third` nodes
- Runs an nginx proxy on the `first` node that load balances between the application containers
- If you add or remove application containers from Consul they should be automatically added or removed from the proxy. This is done via a Consul watcher triggering a Puppet run on `first` whenever the service list changes
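You can exercise that add/remove behaviour by hand against Consul's HTTP API. A sketch, assuming the Consul address used in this demo and a hypothetical service id `application-app1` (ids follow the `application-${title}` pattern used in the manifests below):

```shell
# Deregister one application instance from Consul; the proxy on 'first'
# should drop it on the next watch-triggered Puppet run.
# 'application-app1' is a hypothetical service id for illustration.
curl -fs --max-time 3 -X PUT \
  http://10.20.2.2:8500/v1/agent/service/deregister/application-app1 \
  || echo "Consul not reachable (are the VMs up?)"
```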
This was put together to act as material for a talk at Loadays entitled Service discovery and configuration management.
You'll need to tell Oscar about Puppet Enterprise. First download the tarball and run:

```shell
vagrant pe-build copy puppet-enterprise-3.7.2-ubuntu-14.04-amd64.tar.gz
```
First, let's launch the PE master:

```shell
vagrant up master --provider virtualbox
```
You'll want to provide a classification to bootstrap the other nodes. Create the following two node groups from the Classification tab. The first:

- Rules: `name is first`
- Matching nodes: `first`
- Classes: `roles::master`

And the second:

- Rules: `name is not master` and `name is not first`
- Matching nodes: `second`, `third`
- Classes: `roles::app`
You should then be able to launch the 3 remaining virtual machines using:

```shell
vagrant up --provider virtualbox
```
You should be able to access the Consul dashboard at 10.20.2.2:8500 and the load-balanced containers at 10.20.2.2, and see something like the following:
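Both endpoints can also be checked from the host with a small smoke-test loop (addresses as above; `/v1/status/leader` is Consul's leader-status endpoint):

```shell
# Probe the Consul API and the nginx proxy; prints OK or DOWN per endpoint.
for url in http://10.20.2.2:8500/v1/status/leader http://10.20.2.2/; do
  if curl -fs --max-time 3 "$url" > /dev/null 2>&1; then
    echo "OK   $url"
  else
    echo "DOWN $url (are the VMs up?)"
  fi
done
```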
This is partly an experiment in using service discovery tools (in this case Consul) alongside Puppet. One of the obvious benefits is the cross-node communication it makes possible when things change.
The interesting bits of the code are in the profile classes, in particular in `modules/profiles/manifests`, which contains this defined type bringing together Docker and Consul:
```puppet
define application(
  $port,
) {
  ::docker::run { $title:
    image   => 'nginx',
    ports   => ["${port}:80"],
    volumes => "/var/www/${title}:/usr/share/nginx/html:ro",
  }
  file { "/var/www/${title}":
    ensure => directory,
  }
  file { "/var/www/${title}/index.html":
    ensure  => present,
    content => "${title} running on ${::hostname}",
  }
  ::consul::service { "application-${title}":
    service_name => 'application',
    port         => $port,
  }
}
```

And in `modules/profiles/manifests/webserver.pp`, which watches for changes in the above registered services and runs Puppet:
```puppet
::consul::watch { 'detect_backend_changes':
  type        => 'service',
  handler     => '/usr/bin/runpuppet.sh',
  service     => 'application',
  passingonly => true,
  require     => File['/usr/bin/runpuppet.sh'],
}
```

Important parts of the puzzle are built using:
- Oscar to create a Puppet Enterprise sandbox
- `garethr-docker` to install Docker and run the containers
- `KyleAnderson-consul` which installs Consul and allows for registering services and watchers
- `lynxman-hiera_consul` which acts as a Hiera backend for Consul
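The watch's handler, `/usr/bin/runpuppet.sh`, only needs to trigger an agent run. The repo ships its own version; a minimal sketch might look like the following (written to the current directory here for illustration):

```shell
# Write a minimal Consul watch handler. Consul pipes the current service
# list as JSON on stdin; we only use it as a trigger, so it's discarded.
cat > runpuppet.sh <<'EOF'
#!/bin/sh
cat > /dev/null            # drain the JSON payload from the watch
exec puppet agent --onetime --no-daemonize
EOF
chmod +x runpuppet.sh
```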
The following notes are included for reference if you're experimenting with Puppet Enterprise as well.
You should be able to access the main dashboard using `admin` and `puppetlabs` as the username and password. It's possible the address will be different, at which point `vagrant ssh master` and `ifconfig` should let you find the correct IP.
The inventory service is running on port 8140 (forwarded to 18140 on the host). Locally you can access it with:

```shell
curl -k https://localhost:18140/production/facts/master
```
Note that this currently needs an update to the auth.conf file at `/etc/puppetlabs/puppet/auth.conf` to add:

```
path /facts
auth any
method find, search
allow *
```
More details are available in the Puppet documentation on auth.conf and on the inventory service.
PuppetDB is running and exposed on all interfaces. Access the dashboard or try out the API:

```shell
curl http://localhost:18080/v2/nodes
```

