Terraform module aligned with HashiCorp Validated Designs (HVD) to deploy Boundary Enterprise worker(s) on Microsoft Azure using Azure Virtual Machines. This module is designed to work with the complementary Boundary Enterprise Controller HVD on Azure VM module.
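A minimal root configuration calling this module might look like the following sketch. The module `source` path and all input values are illustrative assumptions, not values taken from this repository:

```hcl
# Hypothetical example of calling this module; the source path and
# every input value below are placeholders, not values from this repo.
module "boundary_worker" {
  source = "<path-or-registry-address-of-this-module>"

  friendly_name_prefix = "bnd1" # must be unique per deployment
  location             = "eastus"
  worker_subnet_id     = azurerm_subnet.worker.id

  # Upstream controller(s); omit when connecting to HCP Boundary
  boundary_upstream       = ["10.0.1.10"]
  worker_keyvault_name    = "boundary-worker-auth-kv"
  worker_keyvault_rg_name = "boundary-controller-rg"
}
```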
## Prerequisites

- Terraform CLI `>= 1.9` installed on your workstation
- Azure subscription that the Boundary worker(s) will be hosted in, with admin-like permissions to provision resources in it via the Terraform CLI
- Azure Blob Storage Account for the AzureRM remote state backend (recommended but not required)
- Git CLI and Visual Studio Code editor installed on your workstation (recommended but not required)
- Azure VNet ID
- Worker subnet ID with service endpoints enabled for `Microsoft.KeyVault`
- Worker subnet access to the subnet(s) that contain either the controller(s) or upstream worker(s)
- Load balancer subnet ID for the proxy load balancer, if it will be deployed
- Load balancer static IP address for the proxy load balancer, if it will be deployed
- Network Security Group (NSG)/firewall rules:
  - Allow `TCP/9202` ingress from subnets that will contain Boundary worker(s) that will connect to the workers deployed by this module
  - Allow `TCP/9202` ingress from subnets that will contain Boundary clients that will use the workers deployed by this module
- Azure Key Vault containing the worker-auth key deployed by the Boundary controller module, unless connecting to HCP Boundary

📝 Note: This module will create an MSI and a Key Vault access policy on the Key Vault specified.
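As a sketch, an NSG rule permitting the worker proxy port could look like the following. The rule name, priority, CIDRs, resource group, and NSG name are all assumptions for illustration:

```hcl
# Illustrative NSG rule allowing Boundary proxy traffic (TCP/9202)
# into the worker subnet; all names and CIDRs are placeholders.
resource "azurerm_network_security_rule" "boundary_worker_ingress" {
  name                        = "allow-boundary-proxy-9202"
  priority                    = 200
  direction                   = "Inbound"
  access                      = "Allow"
  protocol                    = "Tcp"
  source_port_range           = "*"
  destination_port_range      = "9202"
  source_address_prefixes     = ["10.0.2.0/24"] # client/downstream worker subnets
  destination_address_prefix  = "10.0.1.0/24"   # worker subnet
  resource_group_name         = "boundary-worker-rg"
  network_security_group_name = "boundary-worker-nsg"
}
```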
- A mechanism for shell access to the Azure Linux VMs within the VMSS (SSH key pair, bastion host, username/password, etc.)
- Unless deploying an HCP Boundary worker, a Boundary Enterprise cluster deployed using the Boundary Enterprise Controller HVD on Azure VM module
## Deployment

1. Create/configure/validate the applicable prerequisites.
2. Referencing the examples directory, copy the Terraform files from your scenario of choice into an appropriate destination to create your own root Terraform configuration. Populate your own custom values in the example `terraform.tfvars` file provided within the subdirectory of your scenario of choice, and remove the `.example` file extension. 📝 Note: The `friendly_name_prefix` variable should be unique for every deployment.
3. Update the `backend.tf` file within your newly created Terraform root configuration with your AzureRM remote state backend configuration values.
4. Run `terraform init` and `terraform apply` against your newly created Terraform root configuration.
5. After the `terraform apply` finishes successfully, you can monitor the install progress by connecting to a VM in your Boundary worker Virtual Machine Scale Set (VMSS) via SSH and observing the cloud-init logs:

   ```sh
   tail -f /var/log/boundary-cloud-init.log
   journalctl -xu cloud-final -f
   ```

6. Once the cloud-init script finishes successfully, while still connected to the VM via SSH, check the status of the `boundary` service:

   ```sh
   sudo systemctl status boundary
   ```

7. The worker should show up in the Boundary console.
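For reference, a populated `terraform.tfvars` for the self-managed (non-HCP) scenario could look roughly like the following. Every value shown is a placeholder assumption to be replaced with your own:

```hcl
# Hypothetical terraform.tfvars values; replace with your own.
friendly_name_prefix = "bnd1" # unique per deployment
location             = "eastus"

worker_subnet_id = "/subscriptions/<subscription-id>/resourceGroups/<rg>/providers/Microsoft.Network/virtualNetworks/<vnet>/subnets/<worker-subnet>"

# Controller(s) or upstream worker(s) for the worker to connect to
boundary_upstream      = ["10.0.1.10", "10.0.1.11"]
boundary_upstream_port = 9201

# Key Vault created by the Boundary controller module
worker_keyvault_name    = "boundary-worker-auth-kv"
worker_keyvault_rg_name = "boundary-controller-rg"
```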
## Deploying as an HCP Boundary worker

1. In HCP Boundary, go to `Workers` and start creating a new worker. Copy the `Boundary Cluster ID`.
2. Create/configure/validate the applicable prerequisites.
3. Referencing the examples directory, copy the Terraform files from your scenario of choice into an appropriate destination to create your own root Terraform configuration. Populate your own custom values in the example `terraform.tfvars` file provided within the subdirectory of your scenario of choice, and remove the `.example` file extension. Set the `hcp_boundary_cluster_id` variable to the Boundary Cluster ID from step 1. 📝 Note: The `friendly_name_prefix` variable should be unique for every deployment.
4. Update the `backend.tf` file within your newly created Terraform root configuration with your AzureRM remote state backend configuration values.
5. Run `terraform init` and `terraform apply` against your newly created Terraform root configuration.
6. After the `terraform apply` finishes successfully, you can monitor the install progress by connecting to a VM in your Boundary worker Virtual Machine Scale Set (VMSS) via SSH and observing the cloud-init logs:

   ```sh
   tail -f /var/log/boundary-cloud-init.log
   journalctl -xu cloud-final -f
   ```

7. Once the cloud-init script finishes successfully, while still connected to the VM via SSH, check the status of the `boundary` service:

   ```sh
   sudo systemctl status boundary
   ```

8. While still connected via SSH to the Boundary worker, run `sudo journalctl -xu boundary` to review the Boundary logs.
9. Copy the `Worker Auth Registration Request` string and paste it into the `Worker Auth Registration Request` field of the new Boundary worker in the HCP console, then click `Register Worker`.
10. The worker should show up in the HCP Boundary console.
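For reference, a populated `terraform.tfvars` for the HCP Boundary scenario could look roughly like the following. Every value shown is a placeholder assumption:

```hcl
# Hypothetical terraform.tfvars values for the HCP Boundary scenario;
# replace with your own.
friendly_name_prefix = "bnd1" # unique per deployment
location             = "eastus"

worker_subnet_id = "/subscriptions/<subscription-id>/resourceGroups/<rg>/providers/Microsoft.Network/virtualNetworks/<vnet>/subnets/<worker-subnet>"

# Boundary Cluster ID copied from the HCP console (step 1)
hcp_boundary_cluster_id = "00000000-0000-0000-0000-000000000000"
```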
## Docs

Below are links to docs pages related to deployment customizations and day 2 operations of your Boundary worker deployment.
- Deployment Customizations
- Upgrading Boundary version
- Updating/modifying Boundary configuration settings
- Deploying in Azure GovCloud
## Support

This open source software is maintained by the HashiCorp Technical Field Organization, independently of our enterprise products. While our Support Engineering team provides dedicated support for our enterprise offerings, this open source software is not included.
- For help using this open source software, please engage your account team.
- To report bugs/issues with this open source software, please open them directly against this code repository using the GitHub issues feature.
Please note that there is no official Service Level Agreement (SLA) for support of this software as a HashiCorp customer. This software falls under the definition of Community Software/Versions in your Agreement. We appreciate your understanding and collaboration in improving our open source projects.
## Requirements

| Name | Version |
|---|---|
| terraform | >= 1.9 |
| azurerm | ~> 3.101 |
## Providers

| Name | Version |
|---|---|
| azurerm | ~> 3.101 |
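The version constraints above correspond to a `terraform` settings block along these lines in the module:

```hcl
terraform {
  required_version = ">= 1.9"

  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~> 3.101"
    }
  }
}
```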
## Resources

| Name | Type |
|---|---|
| azurerm_key_vault_access_policy.worker_key_vault_worker | resource |
| azurerm_lb.boundary_proxy | resource |
| azurerm_lb_backend_address_pool.boundary_proxy | resource |
| azurerm_lb_probe.boundary_proxy | resource |
| azurerm_lb_rule.boundary_proxy | resource |
| azurerm_linux_virtual_machine_scale_set.boundary | resource |
| azurerm_resource_group.boundary | resource |
| azurerm_role_assignment.boundary_kv_reader | resource |
| azurerm_role_assignment.boundary_vmss_disk_encryption_set_reader | resource |
| azurerm_user_assigned_identity.boundary | resource |
| azurerm_client_config.current | data source |
| azurerm_disk_encryption_set.vmss | data source |
| azurerm_image.custom | data source |
| azurerm_key_vault.worker | data source |
## Inputs

| Name | Description | Type | Default | Required |
|---|---|---|---|---|
| additional_package_names | List of additional repository package names to install. | `set(string)` | `[]` | no |
| availability_zones | List of Azure Availability Zones to spread Boundary resources across. | `set(string)` | `[` | no |
| boundary_upstream | List of IP addresses or FQDNs for the worker to initially connect to; this could be a controller or a worker. Not used when connecting to HCP Boundary. | `list(string)` | `null` | no |
| boundary_upstream_port | Port for the worker to connect to upstream: typically 9201 to connect to a controller, 9202 to a worker. | `number` | `9202` | no |
| boundary_version | Version of Boundary to install. | `string` | `"0.17.1+ent"` | no |
| common_tags | Map of common tags for taggable Azure resources. | `map(string)` | `{}` | no |
| create_lb | Boolean to create a Network Load Balancer for Boundary. Should be `true` if downstream workers will connect to these workers. | `bool` | `false` | no |
| create_resource_group | Boolean to create a new Resource Group for this Boundary deployment. | `bool` | `true` | no |
| friendly_name_prefix | Friendly name prefix for uniquely naming Azure resources. This should be unique across all deployments. | `string` | n/a | yes |
| hcp_boundary_cluster_id | ID of the Boundary cluster in HCP. Only used when connecting to HCP Boundary. | `string` | `""` | no |
| is_govcloud_region | Boolean indicating whether this Boundary deployment is in an Azure Government Cloud region. | `bool` | `false` | no |
| lb_private_ip | Private IP address for the internal Azure Load Balancer. | `string` | `null` | no |
| lb_subnet_id | Subnet ID for the worker proxy load balancer. | `string` | `null` | no |
| location | Azure region for this Boundary deployment. | `string` | n/a | yes |
| resource_group_name | Name of the Resource Group to create. | `string` | `"boundary-worker-rg"` | no |
| vm_admin_username | Admin username for VMs in the VMSS. | `string` | `"boundaryadmin"` | no |
| vm_custom_image_name | Name of the custom VM image to use for the VMSS. If not using a custom image, leave this blank. | `string` | `null` | no |
| vm_custom_image_rg_name | Resource Group name where the custom VM image resides. Only valid if `vm_custom_image_name` is not `null`. | `string` | `null` | no |
| vm_disk_encryption_set_name | Name of the Disk Encryption Set to use for the VMSS. | `string` | `null` | no |
| vm_disk_encryption_set_rg | Name of the Resource Group where the Disk Encryption Set to use for the VMSS exists. | `string` | `null` | no |
| vm_enable_boot_diagnostics | Boolean to enable boot diagnostics for the VMSS. | `bool` | `false` | no |
| vm_image_offer | Offer of the VM image. | `string` | `"0001-com-ubuntu-server-jammy"` | no |
| vm_image_publisher | Publisher of the VM image. | `string` | `"Canonical"` | no |
| vm_image_sku | SKU of the VM image. | `string` | `"22_04-lts-gen2"` | no |
| vm_image_version | Version of the VM image. | `string` | `"latest"` | no |
| vm_sku | SKU for the VM size for the VMSS. Regions may have different SKUs available. | `string` | `"Standard_D2s_v5"` | no |
| vm_ssh_public_key | SSH public key for VMs in the VMSS. | `string` | `null` | no |
| vmss_availability_zones | List of Azure Availability Zones to spread the VMSS VM resources across. | `set(string)` | `[` | no |
| vmss_vm_count | Number of VM instances in the VMSS. | `number` | `1` | no |
| worker_is_internal | Boolean to give the worker an internal IP address only, rather than an external IP address. | `bool` | `true` | no |
| worker_keyvault_name | Name of the Key Vault that contains the worker key to use. | `string` | `""` | no |
| worker_keyvault_rg_name | Name of the Resource Group where the 'worker' Key Vault resides. | `string` | `""` | no |
| worker_subnet_id | Subnet ID for worker VMs. | `string` | n/a | yes |
| worker_tags | Map of extra tags to apply to the Boundary worker configuration. `var.common_tags` will be merged with this map. | `map(string)` | `{}` | no |
## Outputs

| Name | Description |
|---|---|
| proxy_lb_ip_address | Private IP address of the Boundary proxy Load Balancer. |
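If the proxy load balancer is created, this output can be consumed from the calling root configuration, for example as follows. The module label `boundary_worker` is an assumption for illustration:

```hcl
# Surface the proxy LB private IP from a root module that calls this
# module under the (hypothetical) label "boundary_worker".
output "worker_proxy_lb_ip" {
  value = module.boundary_worker.proxy_lb_ip_address
}
```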