hashicorp/terraform-azurerm-boundary-enterprise-worker-hvd

A Terraform module for provisioning and installing Boundary Enterprise Worker on Azure virtual machines as described in HashiCorp Validated Designs
Boundary Enterprise Worker HVD on Azure VM

Terraform module aligned with HashiCorp Validated Designs (HVD) to deploy Boundary Enterprise worker(s) on Microsoft Azure using Azure Virtual Machines. This module is designed to work with the complementary Boundary Enterprise Controller HVD on Azure VM module.

Prerequisites

General

  • Terraform CLI >= 1.9 installed on workstation
  • Azure subscription that Boundary will be hosted in, with admin-like permissions to provision resources via the Terraform CLI
  • Azure Blob Storage Account for AzureRM Remote State backend is recommended but not required
  • Git CLI and Visual Studio Code editor installed on workstation are recommended but not required

Networking

  • Azure VNet ID
  • Worker subnet ID with service endpoints enabled for Microsoft.KeyVault
  • Worker subnet requires access to the subnet(s) that contain either the controller(s) or upstream worker(s)
  • Load balancer subnet ID for the proxy load balancer, if it will be deployed
  • Load balancer static IP address for the proxy load balancer, if it will be deployed
  • Network Security Group (NSG)/firewall rules:
    • Allow TCP/9202 ingress from subnets containing downstream Boundary worker(s) that will connect to the workers deployed by this module
    • Allow TCP/9202 ingress from subnets containing Boundary clients that will use the workers deployed by this module
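As an illustration only, an NSG rule permitting this TCP/9202 ingress might look like the following sketch; the rule name, priority, resource group, NSG name, and address prefixes are all placeholder assumptions, not values from this module:

```hcl
# Illustrative NSG rule; all names and CIDRs below are placeholders.
resource "azurerm_network_security_rule" "boundary_worker_ingress" {
  name                        = "allow-boundary-worker-9202"
  priority                    = 200
  direction                   = "Inbound"
  access                      = "Allow"
  protocol                    = "Tcp"
  source_port_range           = "*"
  destination_port_range      = "9202"
  source_address_prefixes     = ["10.0.1.0/24", "10.0.2.0/24"] # downstream worker/client subnets
  destination_address_prefix  = "*"
  resource_group_name         = "my-network-rg"
  network_security_group_name = "boundary-worker-nsg"
}
```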

Key Vault

  • Azure Key Vault containing the worker-auth key deployed by the Boundary controller module, unless connecting to HCP Boundary
    • 📝 Note: This module will create an MSI (managed service identity) and a Key Vault access policy on the specified Key Vault.

Compute

A mechanism for shell access to the Azure Linux VMs within the VMSS, such as an SSH key pair, a bastion host, or a username/password.

Boundary

Unless you are deploying an HCP Boundary worker, you will need a Boundary Enterprise cluster deployed using the Boundary Enterprise Controller HVD on Azure VM module.
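For orientation, a root module invocation for the self-managed scenario might look like the following sketch; the source reference and every value are illustrative placeholders, and the full set of inputs is documented in the Inputs section below:

```hcl
# Illustrative root configuration; source and values are placeholders.
module "boundary_worker" {
  source = "../terraform-azurerm-boundary-enterprise-worker-hvd"

  friendly_name_prefix = "bwrk1"
  location             = "eastus"
  worker_subnet_id     = "<worker-subnet-id>"

  # Key Vault containing the worker-auth key created by the controller module
  worker_keyvault_name    = "boundary-worker-kv"
  worker_keyvault_rg_name = "boundary-kv-rg"

  # Upstream controller(s) for the worker to initially connect to
  boundary_upstream      = ["boundary-controller.example.com"]
  boundary_upstream_port = 9201

  vm_ssh_public_key = file("~/.ssh/boundary.pub")
}
```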

Usage - Boundary Enterprise

  1. Create/configure/validate the applicable prerequisites.

  2. Referencing the examples directory, copy the Terraform files from your scenario of choice into an appropriate destination to create your own root Terraform configuration. Populate your own custom values in the example terraform.tfvars file provided within the subdirectory of your chosen scenario, and remove the .example file extension.

    πŸ“ Note: The friendly_name_prefix variable should be unique for every agent deployment.

  3. Update the backend.tf file within your newly created Terraform root configuration with your AzureRM remote state backend configuration values.

  4. Run terraform init and terraform apply against your newly created Terraform root configuration.

  5. After the terraform apply finishes successfully, you can monitor the install progress by connecting to a VM in your Boundary worker Virtual Machine Scale Set (VMSS) via SSH and observing the cloud-init logs:

    tail -f /var/log/boundary-cloud-init.log
    
    journalctl -xu cloud-final -f

  6. Once the cloud-init script finishes successfully, while still connected to the VM via SSH, you can check the status of the boundary service:

    sudo systemctl status boundary

  7. The worker should appear in the Boundary console.
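As a point of reference, a populated terraform.tfvars for the self-managed scenario might resemble the sketch below; every value is a placeholder, and the exact variables to set depend on your chosen example scenario:

```hcl
# Placeholder values only; adjust to your environment.
friendly_name_prefix = "bwrk1"
location             = "eastus"
worker_subnet_id     = "<worker-subnet-id>"

worker_keyvault_name    = "boundary-worker-kv"
worker_keyvault_rg_name = "boundary-kv-rg"

boundary_upstream      = ["boundary-controller.example.com"]
boundary_upstream_port = 9201

vmss_vm_count = 2

common_tags = {
  App         = "boundary-worker"
  Environment = "production"
}
```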

Usage - HCP Boundary

  1. In HCP Boundary, go to Workers and begin creating a new worker. Copy the Boundary Cluster ID.

  2. Create/configure/validate the applicable prerequisites.

  3. Referencing the examples directory, copy the Terraform files from your scenario of choice into an appropriate destination to create your own root Terraform configuration. Populate your own custom values in the example terraform.tfvars file provided within the subdirectory of your chosen scenario, and remove the .example file extension. Set the hcp_boundary_cluster_id variable to the Boundary Cluster ID from step 1.

    πŸ“ Note: The friendly_name_prefix variable should be unique for every agent deployment.

  4. Update the backend.tf file within your newly created Terraform root configuration with your AzureRM remote state backend configuration values.

  5. Run terraform init and terraform apply against your newly created Terraform root configuration.

  6. After the terraform apply finishes successfully, you can monitor the install progress by connecting to a VM in your Boundary worker Virtual Machine Scale Set (VMSS) via SSH and observing the cloud-init logs:

    tail -f /var/log/boundary-cloud-init.log
    
    journalctl -xu cloud-final -f

  7. Once the cloud-init script finishes successfully, while still connected to the VM via SSH, you can check the status of the boundary service:

    sudo systemctl status boundary

  8. While still connected to the worker via SSH, run sudo journalctl -xu boundary to review the Boundary logs.

  9. Copy the Worker Auth Registration Request string and paste this into the Worker Auth Registration Request field of the new Boundary Worker in the HCP console and click Register Worker.

  10. The worker should appear in the HCP Boundary console.
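For the HCP scenario, a populated terraform.tfvars might resemble this sketch; all values, including the cluster ID, are placeholders, and boundary_upstream is not used when connecting to HCP Boundary:

```hcl
# Placeholder values only; adjust to your environment.
friendly_name_prefix = "bwrk-hcp"
location             = "eastus"
worker_subnet_id     = "<worker-subnet-id>"

# Boundary Cluster ID copied from the HCP console in step 1
hcp_boundary_cluster_id = "<hcp-boundary-cluster-id>"

vm_ssh_public_key = "ssh-ed25519 AAAA... placeholder"
```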

Docs

Below are links to docs pages related to deployment customizations and day 2 operations of your Boundary worker instances.

Module support

This open source software is maintained by the HashiCorp Technical Field Organization, independently of our enterprise products. While our Support Engineering team provides dedicated support for our enterprise offerings, this open source software is not included.

  • For help using this open source software, please engage your account team.
  • To report bugs/issues with this open source software, please open them directly against this code repository using the GitHub issues feature.

Please note that there is no official Service Level Agreement (SLA) for support of this software as a HashiCorp customer. This software falls under the definition of Community Software/Versions in your Agreement. We appreciate your understanding and collaboration in improving our open source projects.

Requirements

| Name | Version |
|------|---------|
| terraform | >= 1.9 |
| azurerm | ~> 3.101 |

Providers

| Name | Version |
|------|---------|
| azurerm | ~> 3.101 |

Resources

| Name | Type |
|------|------|
| azurerm_key_vault_access_policy.worker_key_vault_worker | resource |
| azurerm_lb.boundary_proxy | resource |
| azurerm_lb_backend_address_pool.boundary_proxy | resource |
| azurerm_lb_probe.boundary_proxy | resource |
| azurerm_lb_rule.boundary_proxy | resource |
| azurerm_linux_virtual_machine_scale_set.boundary | resource |
| azurerm_resource_group.boundary | resource |
| azurerm_role_assignment.boundary_kv_reader | resource |
| azurerm_role_assignment.boundary_vmss_disk_encryption_set_reader | resource |
| azurerm_user_assigned_identity.boundary | resource |
| azurerm_client_config.current | data source |
| azurerm_disk_encryption_set.vmss | data source |
| azurerm_image.custom | data source |
| azurerm_key_vault.worker | data source |

Inputs

| Name | Description | Type | Default | Required |
|------|-------------|------|---------|----------|
| additional_package_names | List of additional repository package names to install. | set(string) | [] | no |
| availability_zones | List of Azure Availability Zones to spread Boundary resources across. | set(string) | ["1", "2", "3"] | no |
| boundary_upstream | List of IP addresses or FQDNs for the worker to initially connect to. This could be a controller or a worker. Not used when connecting to HCP Boundary. | list(string) | null | no |
| boundary_upstream_port | Port for the worker to connect to; typically 9201 to connect to a controller, or 9202 to a worker. | number | 9202 | no |
| boundary_version | Version of Boundary to install. | string | "0.17.1+ent" | no |
| common_tags | Map of common tags for taggable Azure resources. | map(string) | {} | no |
| create_lb | Boolean to create a Network Load Balancer for Boundary. Should be true if downstream workers will connect to these workers. | bool | false | no |
| create_resource_group | Boolean to create a new Resource Group for this Boundary deployment. | bool | true | no |
| friendly_name_prefix | Friendly name prefix for uniquely naming Azure resources. This should be unique across all deployments. | string | n/a | yes |
| hcp_boundary_cluster_id | ID of the Boundary cluster in HCP. Only used when connecting to HCP Boundary. | string | "" | no |
| is_govcloud_region | Boolean indicating whether this Boundary deployment is in an Azure Government Cloud region. | bool | false | no |
| lb_private_ip | Private IP address for the internal Azure Load Balancer. | string | null | no |
| lb_subnet_id | Subnet ID for the worker proxy load balancer. | string | null | no |
| location | Azure region for this Boundary deployment. | string | n/a | yes |
| resource_group_name | Name of the Resource Group to create. | string | "boundary-worker-rg" | no |
| vm_admin_username | Admin username for VMs in the VMSS. | string | "boundaryadmin" | no |
| vm_custom_image_name | Name of a custom VM image to use for the VMSS. If not using a custom image, leave this blank. | string | null | no |
| vm_custom_image_rg_name | Resource Group name where the custom VM image resides. Only valid if vm_custom_image_name is not null. | string | null | no |
| vm_disk_encryption_set_name | Name of the Disk Encryption Set to use for the VMSS. | string | null | no |
| vm_disk_encryption_set_rg | Name of the Resource Group where the Disk Encryption Set to use for the VMSS exists. | string | null | no |
| vm_enable_boot_diagnostics | Boolean to enable boot diagnostics for the VMSS. | bool | false | no |
| vm_image_offer | Offer of the VM image. | string | "0001-com-ubuntu-server-jammy" | no |
| vm_image_publisher | Publisher of the VM image. | string | "Canonical" | no |
| vm_image_sku | SKU of the VM image. | string | "22_04-lts-gen2" | no |
| vm_image_version | Version of the VM image. | string | "latest" | no |
| vm_sku | SKU for the VM size of the VMSS. Regions may have different SKUs available. | string | "Standard_D2s_v5" | no |
| vm_ssh_public_key | SSH public key for VMs in the VMSS. | string | null | no |
| vmss_availability_zones | List of Azure Availability Zones to spread the VMSS VM resources across. | set(string) | ["1", "2", "3"] | no |
| vmss_vm_count | Number of VM instances in the VMSS. | number | 1 | no |
| worker_is_internal | Boolean to give the worker an internal IP address only, rather than an external IP address. | bool | true | no |
| worker_keyvault_name | Name of the Key Vault that contains the worker key to use. | string | "" | no |
| worker_keyvault_rg_name | Name of the Resource Group where the 'worker' Key Vault resides. | string | "" | no |
| worker_subnet_id | Subnet ID for worker VMs. | string | n/a | yes |
| worker_tags | Map of extra tags to apply to the Boundary worker configuration. var.common_tags will be merged with this map. | map(string) | {} | no |

Outputs

| Name | Description |
|------|-------------|
| proxy_lb_ip_address | Private IP address of the Boundary proxy Load Balancer. |
