RE-344 HA Data cluster #29

Merged 1 commit on Oct 25, 2019
7 changes: 7 additions & 0 deletions terraform/examples/multi-node-cluster/README.md
@@ -0,0 +1,7 @@
# Multi-Node HA Cluster Example

This directory shows an example Terraform configuration that uses all of the [modules](../../modules) to deploy NS1 in a more complex, multi-node pattern with 3 HA Data containers. This layout provides the foundation of a scalable, hub-and-spoke topology as well as high availability of the data nodes. The HA Data nodes are expected to be on 3 separate hosts, labeled `data01`, `data02` and `data03`. The "hub" is referred to in this configuration as the `control01` node and the "spoke" is referred to as the `edge01` node.

In this example, the Docker images have already been loaded on the `control` node's Docker daemon, while the `edge` node will download the images from a Docker registry. *Note* that in this example Docker Hub is used as the registry, but this cannot be used in production as the images are not publicly available on Docker Hub.

This example will also create dedicated `ns1` Docker networks on all of the `data` Docker hosts as well as the `control` and `edge` Docker hosts.
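
For reference, a minimal `terraform.tfvars` for this example might look like the sketch below. The host addresses and registry credentials are placeholders; the variable names are defined in [variables.tf](./variables.tf).

```hcl
# Placeholder Docker host addresses. Note that var.docker_protocol
# ("ssh://" by default) is prepended to each of these in main.tf.
data01_host    = "deploy@data01.mycompany.net"
data02_host    = "deploy@data02.mycompany.net"
data03_host    = "deploy@data03.mycompany.net"
control01_host = "deploy@control01.mycompany.net"
edge01_host    = "deploy@edge01.mycompany.net"

# Placeholder registry that the edge host pulls images from.
docker_registry_address  = "https://registry.mycompany.net"
docker_registry_username = "registry-user"
docker_registry_password = "registry-password"
```

With the variables in place, running `terraform init` and `terraform apply` from this directory deploys the cluster.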
262 changes: 262 additions & 0 deletions terraform/examples/multi-node-cluster/main.tf
@@ -0,0 +1,262 @@
# ---------------------------------------------------------------------------------------------------------------------
# DEPLOY NS1 eDDI CLUSTER ON MULTIPLE DOCKER HOSTS
# This configuration deploys NS1 containers across five Docker hosts. The Data cluster is deployed on 3 separate hosts.
# The Core and XFR containers are deployed together on the "control" host, while the DHCP, DNS and Dist containers
# are deployed on the "edge" host. In a "hub and spoke" pattern, "control" can be thought of as the hub, while "edge"
# would be a single spoke.
# ---------------------------------------------------------------------------------------------------------------------

# ----------------------------------------------------------------------------------------------------------------------
# REQUIRE A SPECIFIC TERRAFORM VERSION OR HIGHER
# This module has been updated with 0.12 syntax, which means it is no longer compatible with any versions below 0.12.
# ----------------------------------------------------------------------------------------------------------------------
terraform {
required_version = ">= 0.12"
}

# ----------------------------------------------------------------------------------------------------------------------
# DEPLOY DEDICATED NETWORKS ON DOCKER HOSTS
# A dedicated Docker network should be deployed on each host for the NS1 containers to join.
# -----------------------------------------------------------------------------------------------------------------------

provider "docker" {
alias = "data01"
host = "${var.docker_protocol}${var.data01_host}"
}

provider "docker" {
alias = "data02"
host = "${var.docker_protocol}${var.data02_host}"
}

provider "docker" {
alias = "data03"
host = "${var.docker_protocol}${var.data03_host}"
}

provider "docker" {
alias = "control01"
host = "${var.docker_protocol}${var.control01_host}"
}

provider "docker" {
alias = "edge01"
host = "${var.docker_protocol}${var.edge01_host}"
}

resource "docker_network" "data01" {
provider = docker.data01
name = "ns1"
driver = "bridge"
ipam_driver = "default"
attachable = true

ipam_config {
subnet = "172.18.12.0/24"
}
}

resource "docker_network" "data02" {
provider = docker.data02
name = "ns1"
driver = "bridge"
ipam_driver = "default"
attachable = true

ipam_config {
subnet = "172.18.12.0/24"
}
}

resource "docker_network" "data03" {
provider = docker.data03
name = "ns1"
driver = "bridge"
ipam_driver = "default"
attachable = true

ipam_config {
subnet = "172.18.12.0/24"
}
}

resource "docker_network" "control01" {
provider = docker.control01
name = "ns1"
driver = "bridge"
ipam_driver = "default"
attachable = true

ipam_config {
subnet = "172.18.12.0/24"
}
}

resource "docker_network" "edge01" {
provider = docker.edge01
name = "ns1"
driver = "bridge"
ipam_driver = "default"
attachable = true

ipam_config {
subnet = "172.18.12.0/24"
}
}

# ---------------------------------------------------------------------------------------------------------------------
# DEPLOY THE CONTAINERS ON THE DATA HOSTS
# This configuration assumes the containers have already been loaded on the Docker host.
# If they have not been loaded, it will attempt to download them from Docker Hub and fail.
# ---------------------------------------------------------------------------------------------------------------------

module "data01" {
source = "../../modules/data"
docker_host = "${var.docker_protocol}${var.data01_host}"
docker_network = docker_network.data01.name
docker_image_username = var.docker_image_username
docker_image_repository = "${var.docker_image_repository}_data"
docker_image_tag = var.docker_image_tag
docker_registry_address = var.docker_registry_address
docker_registry_username = var.docker_registry_username
docker_registry_password = var.docker_registry_password
pop_id = var.data01_pop_id
server_id = var.data01_host
primary = false
cluster_id = 1
cluster_size = 3
data_peers = [var.data02_host, var.data03_host]
telegraf_output_elasticsearch_data_host = var.elasticsearch_data_host
telegraf_output_elasticsearch_index = var.elasticsearch_index
}

module "data02" {
source = "../../modules/data"
docker_host = "${var.docker_protocol}${var.data02_host}"
docker_network = docker_network.data02.name
docker_image_username = var.docker_image_username
docker_image_repository = "${var.docker_image_repository}_data"
docker_image_tag = var.docker_image_tag
docker_registry_address = var.docker_registry_address
docker_registry_username = var.docker_registry_username
docker_registry_password = var.docker_registry_password
pop_id = var.data02_pop_id
server_id = var.data02_host
primary = false
cluster_id = 2
cluster_size = 3
data_peers = [var.data01_host, var.data03_host]
telegraf_output_elasticsearch_data_host = var.elasticsearch_data_host
telegraf_output_elasticsearch_index = var.elasticsearch_index
}

module "data03" {
source = "../../modules/data"
docker_host = "${var.docker_protocol}${var.data03_host}"
docker_network = docker_network.data03.name
docker_image_username = var.docker_image_username
docker_image_repository = "${var.docker_image_repository}_data"
docker_image_tag = var.docker_image_tag
docker_registry_address = var.docker_registry_address
docker_registry_username = var.docker_registry_username
docker_registry_password = var.docker_registry_password
pop_id = var.data03_pop_id
server_id = var.data03_host
primary = false
cluster_id = 3
cluster_size = 3
data_peers = [var.data01_host, var.data02_host]
telegraf_output_elasticsearch_data_host = var.elasticsearch_data_host
telegraf_output_elasticsearch_index = var.elasticsearch_index
}

# ---------------------------------------------------------------------------------------------------------------------
# DEPLOY THE CONTAINERS ON THE CONTROL HOST
# This configuration assumes the containers have already been loaded on the Docker host.
# If they have not been loaded, it will attempt to download them from Docker Hub and fail.
# ---------------------------------------------------------------------------------------------------------------------

module "core" {
source = "../../modules/core"
docker_host = "${var.docker_protocol}${var.control01_host}"
docker_network = docker_network.control01.name
docker_image_username = var.docker_image_username
docker_image_repository = "${var.docker_image_repository}_core"
docker_image_tag = var.docker_image_tag
docker_registry_address = var.docker_registry_address
docker_registry_username = var.docker_registry_username
docker_registry_password = var.docker_registry_password
bootstrappable = false
pop_id = var.control01_pop_id
server_id = var.control01_host
data_hosts = [var.data01_host, var.data02_host, var.data03_host]
api_fqdn = var.api_fqdn
portal_fqdn = var.portal_fqdn
nameservers = var.nameservers
hostmaster_email = var.hostmaster_email
}

module "xfr" {
source = "../../modules/xfr"
docker_host = "${var.docker_protocol}${var.control01_host}"
docker_network = docker_network.control01.name
docker_image_username = var.docker_image_username
docker_image_repository = "${var.docker_image_repository}_xfr"
docker_image_tag = var.docker_image_tag
docker_registry_address = var.docker_registry_address
docker_registry_username = var.docker_registry_username
docker_registry_password = var.docker_registry_password
pop_id = var.control01_pop_id
server_id = var.control01_host
}

# ---------------------------------------------------------------------------------------------------------------------
# DEPLOY THE CONTAINERS ON THE EDGE HOST
# The configuration of these modules provides an example of using the modules with a Docker registry.
# The NS1 Docker containers should already be loaded into the registry at run time and the registry must be reachable
# from the Docker host.
# ---------------------------------------------------------------------------------------------------------------------
module "dns" {
source = "../../modules/dns"
docker_host = "${var.docker_protocol}${var.edge01_host}"
docker_network = docker_network.edge01.name
docker_image_username = var.docker_image_username
docker_image_repository = "${var.docker_image_repository}_dns"
docker_image_tag = var.docker_image_tag
docker_registry_address = var.docker_registry_address
docker_registry_username = var.docker_registry_username
docker_registry_password = var.docker_registry_password
pop_id = var.edge01_pop_id
server_id = var.edge01_host
}

module "dhcp" {
source = "../../modules/dhcp"
docker_host = "${var.docker_protocol}${var.edge01_host}"
docker_network = docker_network.edge01.name
docker_image_username = var.docker_image_username
docker_image_repository = "${var.docker_image_repository}_dhcp"
docker_image_tag = var.docker_image_tag
docker_registry_address = var.docker_registry_address
docker_registry_username = var.docker_registry_username
docker_registry_password = var.docker_registry_password
pop_id = var.edge01_pop_id
server_id = var.edge01_host
}

module "dist" {
source = "../../modules/dist"
docker_host = "${var.docker_protocol}${var.edge01_host}"
docker_network = docker_network.edge01.name
docker_image_username = var.docker_image_username
docker_image_repository = "${var.docker_image_repository}_dist"
docker_image_tag = var.docker_image_tag
docker_registry_address = var.docker_registry_address
docker_registry_username = var.docker_registry_username
docker_registry_password = var.docker_registry_password
# This transforms the user-defined address of the control host into
# an FQDN or IP, which is what the `core_hosts` argument expects.
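# For example, with a placeholder value of "deploy@control01.mycompany.net" for
# var.control01_host, split("@", ...) yields ["deploy", "control01.mycompany.net"]
# and element(..., 1) selects the FQDN "control01.mycompany.net".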
core_hosts = [element(split("@", var.control01_host), 1)]
pop_id = var.edge01_pop_id
server_id = var.edge01_host
}
106 changes: 106 additions & 0 deletions terraform/examples/multi-node-cluster/variables.tf
@@ -0,0 +1,106 @@
variable "docker_registry_username" {
description = "Username for authentication to Docker registry."
}

variable "docker_registry_password" {
description = "Password for authentication to Docker registry."
}

variable "docker_registry_address" {
description = "The absolute URL of the Docker registry (e.g. 'https://registry.hub.docker.com') to pull the container images from."
}

variable "docker_image_tag" {
default = "2.1.1"
description = "The image tag of the Docker image. Defaults to the latest GA version number."
}

variable "docker_image_username" {
default = "ns1inc"
description = "The username used in the Docker image name. This should not need to be changed."
}

variable "docker_image_repository" {
default = "privatedns"
description = "The base repository name used in the Docker image name; main.tf appends the service suffix (e.g. '_data', '_core'). This should not need to be changed."
}

variable "docker_protocol" {
default = "ssh://"
description = "The protocol to use when connecting to the Docker host."
}

variable "data01_host" {
description = "The address of the Docker host to deploy the first Data container on (e.g. 'user@remote-host'; 'var.docker_protocol' is prepended to form the connection address). Both ssh:// and tcp:// are supported. See https://www.terraform.io/docs/providers/docker/index.html for more details."
}

variable "data02_host" {
description = "The address of the Docker host to deploy the second Data container on (e.g. 'user@remote-host'; 'var.docker_protocol' is prepended to form the connection address). Both ssh:// and tcp:// are supported. See https://www.terraform.io/docs/providers/docker/index.html for more details."
}

variable "data03_host" {
description = "The address of the Docker host to deploy the third Data container on (e.g. 'user@remote-host'; 'var.docker_protocol' is prepended to form the connection address). Both ssh:// and tcp:// are supported. See https://www.terraform.io/docs/providers/docker/index.html for more details."
}

variable "control01_host" {
description = "The address of the Docker host to deploy the Core and XFR containers on (e.g. 'user@remote-host'; 'var.docker_protocol' is prepended to form the connection address). Both ssh:// and tcp:// are supported. See https://www.terraform.io/docs/providers/docker/index.html for more details."
}

variable "edge01_host" {
description = "The address of the Docker host to deploy the DHCP, DNS and Dist containers on (e.g. 'user@remote-host'; 'var.docker_protocol' is prepended to form the connection address). Both ssh:// and tcp:// are supported. See https://www.terraform.io/docs/providers/docker/index.html for more details."
}

variable "data01_pop_id" {
description = "The POP ID for data01."
default = "dc1"
}

variable "data02_pop_id" {
description = "The POP ID for data02."
default = "dc1"
}

variable "data03_pop_id" {
description = "The POP ID for data03."
default = "dc1"
}

variable "control01_pop_id" {
description = "The POP ID for control01."
default = "dc1"
}

variable "edge01_pop_id" {
description = "The POP ID for edge01."
default = "dc2"
}

variable "elasticsearch_data_host" {
default = null
description = "The Elasticsearch host to export metrics to."
}

variable "elasticsearch_index" {
default = null
description = "The Elasticsearch index to use when exporting metrics."
}

variable "api_fqdn" {
default = "api.mycompany.net"
description = "FQDN to use for the API and feed URLs."
}

variable "portal_fqdn" {
default = "portal.mycompany.net"
description = "FQDN to use for the portal URL."
}

variable "nameservers" {
default = "ns1.mycompany.net"
description = "Nameservers used in SOA records."
}

variable "hostmaster_email" {
default = "hostmaster@mycompany.net"
description = "Hostmaster email address used in SOA records."
}