This is a test project to get familiar with Nomad and Consul using the Podman driver. After a successful startup, one Nomad server and three Nomad clients will be up and running.
```shell
./prepare.sh
podman-compose up -d
```
The services can then be reached as follows:
- Nomad: http://localhost:4646
- Consul: http://localhost:8500
Hints:
- Since the configured default token in Consul only allows limited actions, it is recommended to log in to its web frontend with the token created by `prepare.sh` at `shared/consul-agents.token`.
- Although Nomad is deployed with ACLs enabled, it uses a permissive anonymous policy. This would need to be changed for production environments.
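For orientation, a permissive anonymous policy typically grants broad write access. Tightening it for production could look roughly like the following sketch; the policy scopes shown here are assumptions, not the exact contents of this repository:

```hcl
# Hypothetical hardened anonymous policy: restrict unauthenticated
# requests to read-only access across all namespaces.
namespace "*" {
  policy = "read"
}

node {
  policy = "read"
}

agent {
  policy = "read"
}
```

Such a policy could be applied with `nomad acl policy apply anonymous <file>` using a management token.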
Example taken from https://learn.hashicorp.com/tutorials/nomad/load-balancing-traefik
Note: In this example, a webapp running on an arbitrary number of nodes is exposed via Traefik instances running on all Nomad client nodes. In a real-world scenario, an external load balancer would then forward network traffic to Traefik. However, since this setup only uses containers, an HAProxy process running on the Nomad server plays the role of such an external load balancer. After executing the commands below, two endpoints are therefore reachable on the host machine:
- http://localhost:8080/myapp (demo webapp)
- http://localhost:8081/ (Traefik dashboard)
```shell
podman exec -ti nomad-podman_client_1 /bin/bash
> nomad job run /examples/traefik.nomad
> nomad job run /examples/demo-webapp.nomad
```
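The routing between Traefik and the webapp is driven by Consul service tags. In the tutorial this example is taken from, the webapp's `service` stanza tags itself for Traefik roughly as follows; the exact names and tags live in `/examples/demo-webapp.nomad` and may differ here:

```hcl
# Sketch of the service registration that makes Traefik route
# /myapp to the webapp (names assumed from the linked tutorial).
service {
  name = "demo-webapp"
  port = "http"

  tags = [
    "traefik.enable=true",
    "traefik.http.routers.http.rule=Path(`/myapp`)",
  ]
}
```

Traefik's Consul Catalog provider picks up these tags and creates the corresponding router automatically.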
Example taken from https://www.hashicorp.com/blog/consul-connect-native-tasks-in-hashicorp-nomad-0-12
```shell
podman exec -ti nomad-podman_client_1 /bin/bash
> consul config write /examples/connect/intention-config.hcl
> nomad job run /examples/connect/native.nomad
> # interact with webservice via curl
> nomad status connect-native
> nomad alloc status <allocation id of frontend as returned above>
> curl http://<ip:port for http port allocation as returned above>
```
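The intention written in the first step controls whether the frontend may talk to the backend over Consul Connect. Since it is written via `consul config write`, it has the shape of a `service-intentions` config entry; a sketch with hypothetical service names (the real names are defined in `/examples/connect/intention-config.hcl`):

```hcl
# Hypothetical service-intentions entry: allow the frontend
# service to connect to the backend service.
Kind = "service-intentions"
Name = "backend-service" # destination (name assumed)

Sources = [
  {
    Name   = "frontend-service" # source (name assumed)
    Action = "allow"
  }
]
```

Without an allowing intention, Connect denies the connection when the default intention policy is deny.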