Ansible playbook to create a Nomad cluster with Consul, Vault and Traefik, using LXD and https://nip.io/
The cluster contains the following nodes:
- 3 Consul nodes
- 3 Vault nodes
- 3 Nomad server nodes
- 5 Nomad client nodes (3 "apps" nodes, 2 "infra" nodes)
- 1 NFS server node
- 1 Load balancer node
Consul is used to bootstrap the Nomad cluster, for service discovery and for the service mesh.
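As a rough illustration of the Consul bootstrap, a Nomad server agent config might look like the sketch below. The addresses and values are assumptions for illustration, not taken from this playbook:

```hcl
# Sketch of a Nomad server agent config (e.g. nomad.hcl); values are
# illustrative, not the playbook's actual settings.
server {
  enabled          = true
  bootstrap_expect = 3 # matches the 3 Nomad server nodes
}

consul {
  address = "127.0.0.1:8500" # local Consul agent

  # server_auto_join and client_auto_join default to true, so Nomad
  # agents discover the servers through Consul instead of static IPs.
}
```

With this in place, servers and clients join the cluster by querying Consul rather than being configured with each other's addresses.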
The Nomad client infra nodes are the entrypoints of the cluster. They run Traefik and use the Consul service catalog to expose the applications.
The load balancer node maps ports 80 and 443 to the host, which also has the IP 10.99.0.1 as part of the cluster.
The proxy configuration exposes services at {{ service name }}.apps.10.99.0.1.nip.io, so when you deploy the hello.nomad job, its hello-world service is exposed at hello-world.apps.10.99.0.1.nip.io
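A hypothetical service block for such a job could look like the following; the tag names assume Traefik's Consul catalog provider, and are a sketch rather than the playbook's exact configuration:

```hcl
# Hypothetical service stanza (e.g. in hello.nomad) showing how
# Traefik tags in the Consul catalog drive the routing rule.
service {
  name = "hello-world"
  port = "http"

  tags = [
    "traefik.enable=true",
    # Route requests for the nip.io hostname to this service.
    "traefik.http.routers.hello-world.rule=Host(`hello-world.apps.10.99.0.1.nip.io`)",
  ]
}
```

Traefik watches the Consul catalog, so registering the service with these tags is enough for the route to appear; no proxy restart is needed.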
The Consul, Vault and Nomad UIs can be accessed at https://consul.10.99.0.1.nip.io, https://vault.10.99.0.1.nip.io and https://nomad.10.99.0.1.nip.io, respectively.
Root tokens can be found in the .tmp directory.
For storage on the NFS node, the RocketDuck NFS CSI plugin is configured. There are also examples of other CSI plugins.
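A volume backed by the NFS CSI plugin might be registered with a spec along these lines; the plugin ID, server address and share path here are assumptions, not values from this playbook:

```hcl
# Sketch of a volume spec for `nomad volume register`; the plugin_id,
# server and share values are hypothetical.
type      = "csi"
id        = "example-data"
name      = "example-data"
plugin_id = "nfs"

capability {
  # NFS allows the same volume to be mounted read-write on many nodes.
  access_mode     = "multi-node-multi-writer"
  attachment_mode = "file-system"
}

context {
  server = "10.99.0.10" # hypothetical NFS server address
  share  = "/srv/nfs"   # hypothetical exported path
}
```

Jobs can then claim the volume with a volume block referencing the same id and mount it into their tasks.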
There are 3 example jobs:
- hello.nomad, a simple hello world
- countdash.nomad, shows the usage of Consul Connect
- nfs, shows how to set up volumes using the NFS CSI plugin