Terraform module aligned with HashiCorp Validated Designs (HVD) to deploy Boundary Enterprise Worker(s) on Google Cloud Platform (GCP) using Compute Engine instances. This module is designed to work with the complementary Boundary Enterprise Controller HVD on GCP GCE module.
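A root module call might look like the following sketch. The `source` path is a placeholder and all values are illustrative; the input variable names come from the inputs table below:

```hcl
module "boundary_worker" {
  # Placeholder source; replace with the actual path or registry address of this module.
  source = "../.."

  # Required inputs (values are illustrative).
  friendly_name_prefix = "bdry1"
  project_id           = "my-gcp-project-id"
  region               = "us-central1"
  vpc                  = "my-vpc"
  subnet_name          = "my-subnet"

  # Optional: pin the Boundary version and point the worker at upstream controllers.
  boundary_version  = "0.17.1+ent"
  boundary_upstream = ["boundary.example.internal"]
}
```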
- Terraform CLI `>= 1.9` installed on workstations. Git CLI and Visual Studio Code editor installed on workstations are strongly recommended.
- Google account that Boundary will be hosted in with permissions to provision these resources via Terraform CLI.
- (Optional) Google Cloud Storage (GCS) bucket for a GCS remote state backend that will solely be used to stand up the Boundary infrastructure via Terraform CLI (Community Edition).
- GCP Project Created
- The following APIs enabled:
  - `secretmanager.googleapis.com`
  - `compute.googleapis.com`
  - `cloudkms.googleapis.com`
- Google VPC
- Subnet
- Private Service Access Configured
- Firewall rules will be created by this module. If that is not possible (e.g., in a Shared VPC), then the firewall rules defined in this module will need to be created in the Shared VPC separately.
- Boundary Network connectivity
- One of the following mechanisms for shell access to the Boundary VM instances:
- Ability to enable Google IAP (this module supports this via a boolean input variable).
- SSH key and user
Unless you are deploying a Boundary HCP Worker, you will need a Boundary Enterprise cluster deployed using the Boundary Enterprise Controller HVD on GCP GCE module.
1. Create/configure/validate the applicable prerequisites.

2. Nested within the `examples` directory are subdirectories that contain ready-made Terraform configurations of example scenarios for how to call and deploy this module. To get started, choose an example scenario. If you are not sure which example scenario to start with, we recommend starting with the `ingress` example.

   📝 Note: The `friendly_name_prefix` variable should be unique for every agent deployment.
3. Copy all of the Terraform files from your example scenario of choice into a new destination directory to create your root Terraform configuration that will manage your Boundary deployment. If you are not sure where to create this new directory, it is common for us to see users create an `environments/` directory at the root of this repo, and then a subdirectory for each Boundary instance deployment, like so:

   ```
   .
   └── environments
       ├── production
       │   ├── backend.tf
       │   ├── main.tf
       │   ├── outputs.tf
       │   ├── terraform.tfvars
       │   └── variables.tf
       └── sandbox
           ├── backend.tf
           ├── main.tf
           ├── outputs.tf
           ├── terraform.tfvars
           └── variables.tf
   ```

   📝 Note: In this example, the user will have two separate Boundary deployments: one for their `sandbox` environment, and one for their `production` environment. This is recommended, but not required.
4. (Optional) Uncomment and update the GCS remote state backend configuration provided in the `backend.tf` file with your own custom values. While this step is highly recommended, it is technically not required to use a remote backend config for your Boundary deployment.
5. Populate your own custom values into the `terraform.tfvars.example` file that was provided, and remove the `.example` file extension such that the file is now named `terraform.tfvars`.
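As a sketch, a `terraform.tfvars` for an Enterprise worker might look like the following. All values are illustrative; the variable names come from the inputs table below:

```hcl
friendly_name_prefix = "bdry1"
project_id           = "my-gcp-project-id"
region               = "us-central1"
vpc                  = "my-vpc"
subnet_name          = "my-subnet"

# Upstream Boundary controllers for this worker to connect to (illustrative addresses).
boundary_upstream      = ["10.0.1.10", "10.0.1.11"]
boundary_upstream_port = 9201
```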
6. Navigate to the directory of your newly created Terraform configuration for your Boundary Worker deployment, and run `terraform init`, `terraform plan`, and `terraform apply`.
7. After your `terraform apply` finishes successfully, you can monitor the installation progress by connecting to your Boundary VM instance shell via SSH or Google IAP and observing the cloud-init (user_data) logs:

   Higher-level logs:

   ```shell
   tail -f /var/log/boundary-cloud-init.log
   ```

   Lower-level logs:

   ```shell
   journalctl -xu cloud-final -f
   ```

   📝 Note: The `-f` argument follows the logs as they are appended in real time, and is optional. You may remove the `-f` for a static view.

   The log files should display the following message after the cloud-init (user_data) script finishes successfully:

   ```
   [INFO] boundary_custom_data script finished successfully!
   ```
8. Once the cloud-init script finishes successfully, while still connected to the VM via SSH you can check the status of the `boundary` service:

   ```shell
   sudo systemctl status boundary
   ```
9. After the Boundary Worker is deployed, it should appear in the workers list of your Boundary cluster.
1. In HCP Boundary, go to `Workers` and start creating a new worker. Copy the `Boundary Cluster ID`.

2. Create/configure/validate the applicable prerequisites.
3. Nested within the `examples` directory are subdirectories that contain ready-made Terraform configurations of example scenarios for how to call and deploy this module. To get started, choose an example scenario. If you are not sure which example scenario to start with, we recommend starting with the `default` example.
4. Copy all of the Terraform files from your example scenario of choice into a new destination directory to create your root Terraform configuration that will manage your Boundary deployment. If you are not sure where to create this new directory, it is common for us to see users create an `environments/` directory at the root of this repo, and then a subdirectory for each Boundary instance deployment, like so:

   ```
   .
   └── environments
       ├── production
       │   ├── backend.tf
       │   ├── main.tf
       │   ├── outputs.tf
       │   ├── terraform.tfvars
       │   └── variables.tf
       └── sandbox
           ├── backend.tf
           ├── main.tf
           ├── outputs.tf
           ├── terraform.tfvars
           └── variables.tf
   ```

   📝 Note: In this example, the user will have two separate Boundary deployments: one for their `sandbox` environment, and one for their `production` environment. This is recommended, but not required.
5. (Optional) Uncomment and update the GCS remote state backend configuration provided in the `backend.tf` file with your own custom values. While this step is highly recommended, it is technically not required to use a remote backend config for your Boundary deployment.
6. Populate your own custom values into the `terraform.tfvars.example` file that was provided, and remove the `.example` file extension such that the file is now named `terraform.tfvars`. Ensure you set the `hcp_boundary_cluster_id` variable with the Boundary Cluster ID from step 1.
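As a sketch, a `terraform.tfvars` for an HCP worker might look like the following. All values are illustrative; the variable names come from the inputs table below:

```hcl
friendly_name_prefix = "bdry1"
project_id           = "my-gcp-project-id"
region               = "us-central1"
vpc                  = "my-vpc"
subnet_name          = "my-subnet"

# Boundary Cluster ID copied from the HCP console in step 1 (illustrative value).
hcp_boundary_cluster_id = "00000000-0000-0000-0000-000000000000"
```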
7. Navigate to the directory of your newly created Terraform configuration for your Boundary Worker deployment, and run `terraform init`, `terraform plan`, and `terraform apply`.
8. After your `terraform apply` finishes successfully, you can monitor the installation progress by connecting to your Boundary VM instance shell via SSH or Google IAP and observing the cloud-init (user_data) logs:

   Higher-level logs:

   ```shell
   tail -f /var/log/boundary-cloud-init.log
   ```

   Lower-level logs:

   ```shell
   journalctl -xu cloud-final -f
   ```

   📝 Note: The `-f` argument follows the logs as they are appended in real time, and is optional. You may remove the `-f` for a static view.

   The log files should display the following message after the cloud-init (user_data) script finishes successfully:

   ```
   [INFO] boundary_custom_data script finished successfully!
   ```
9. Once the cloud-init script finishes successfully, while still connected to the VM via SSH you can check the status of the `boundary` service:

   ```shell
   sudo systemctl status boundary
   ```
10. While still connected to the Boundary Worker, run `sudo journalctl -xu boundary` to review the Boundary logs.
11. Copy the `Worker Auth Registration Request` string and paste it into the `Worker Auth Registration Request` field of the new Boundary Worker in the HCP console, then click `Register Worker`.
12. The worker should show up in the HCP Boundary console.
Below are links to docs pages related to deployment customizations and day 2 operations of your Boundary Worker instance.
- Deployment Customizations
- Upgrading Boundary version
- Updating/modifying Boundary configuration settings
This open source software is maintained by the HashiCorp Technical Field Organization, independently of our enterprise products. While our Support Engineering team provides dedicated support for our enterprise offerings, this open source software is not included.
- For help using this open source software, please engage your account team.
- To report bugs/issues with this open source software, please open them directly against this code repository using the GitHub issues feature.
Please note that there is no official Service Level Agreement (SLA) for support of this software as a HashiCorp customer. This software falls under the definition of Community Software/Versions in your Agreement. We appreciate your understanding and collaboration in improving our open source projects.
| Name | Version |
|---|---|
| terraform | ~> 1.9 |
| google | ~> 5.39 |
| google-beta | ~> 5.39 |
| random | ~> 3.6 |
| Name | Version |
|---|---|
| cloudinit | n/a |
| google | ~> 5.39 |
| Name | Description | Type | Default | Required |
|---|---|---|---|---|
| additional_package_names | List of additional repository package names to install. | set(string) | [] | no |
| boundary_upstream | List of FQDNs or IP addresses for the worker to connect to. | list(string) | null | no |
| boundary_upstream_port | Port for the worker to connect to. | number | 9201 | no |
| boundary_version | Version of Boundary to install. | string | "0.17.1+ent" | no |
| cidr_ingress_9202_allow | CIDR ranges to allow 9202 traffic inbound to Boundary instance(s). | list(string) | null | no |
| cidr_ingress_ssh_allow | CIDR ranges to allow SSH traffic inbound to Boundary instance(s) via IAP tunnel. | list(string) | null | no |
| common_labels | Common labels to apply to GCP resources. | map(string) | {} | no |
| create_lb | Boolean to create a Network Load Balancer for Boundary. Should be true if downstream workers will connect to these workers. | bool | false | no |
| disk_size_gb | Size in gigabytes of the root disk of Boundary instance(s). | number | 50 | no |
| enable_iap | (Optional bool) Enable https://cloud.google.com/iap/docs/using-tcp-forwarding#console, defaults to true. | bool | true | no |
| enable_session_recording | Boolean to enable session recording in Boundary. | bool | false | no |
| friendly_name_prefix | Friendly name prefix used for uniquely naming resources. This should be unique across all deployments. | string | n/a | yes |
| hcp_boundary_cluster_id | ID of the Boundary cluster in HCP. Only used when using HCP Boundary. | string | null | no |
| image_name | VM image for Boundary instance(s). | string | "ubuntu-2404-noble-amd64-v20240607" | no |
| image_project | ID of the project in which the image belongs. | string | "ubuntu-os-cloud" | no |
| initial_delay_sec | The number of seconds that the managed instance group waits before it applies autohealing policies to new or recently recreated instances. | number | 1200 | no |
| instance_count | Target size of the Managed Instance Group for the number of Boundary instances to run. Only specify a value greater than 1 if enable_active_active is set to true. | number | 1 | no |
| key_name | Name of the Worker KMS key. | string | null | no |
| key_ring_location | Location of the KMS key ring. If not set, the region of the Boundary deployment will be used. | string | null | no |
| key_ring_name | Name of the KMS key ring. | string | null | no |
| machine_type | (Optional string) Size of machine to create. Default n2-standard-4 from https://cloud.google.com/compute/docs/machine-resource. | string | "n2-standard-4" | no |
| project_id | ID of the GCP project to create resources in. | string | n/a | yes |
| region | Region of the GCP project to create resources in. | string | n/a | yes |
| subnet_name | Existing VPC subnetwork for Boundary instance(s) and optionally the Boundary frontend load balancer. | string | n/a | yes |
| vpc | Existing VPC network to deploy Boundary resources into. | string | n/a | yes |
| vpc_project_id | ID of the GCP project where the existing VPC resides, if different from the default project. | string | null | no |
| worker_is_internal | Boolean to give the worker an internal IP address only rather than an external IP address. | bool | true | no |
| worker_tags | Map of extra tags to apply to the Boundary Worker configuration. var.common_labels will be merged with this map. | map(string) | {} | no |
| Name | Description |
|---|---|
| proxy_lb_ip_address | IP Address of the Proxy Load Balancer. |