hashicorp/terraform-google-boundary-enterprise-worker-hvd

Boundary Enterprise Worker HVD on GCP GCE

Terraform module aligned with HashiCorp Validated Designs (HVD) to deploy Boundary Enterprise Worker(s) on Google Cloud Platform (GCP) using Compute Engine instances. This module is designed to work with the complementary Boundary Enterprise Controller HVD on GCP GCE module.

Prerequisites

General

  • Terraform CLI >= 1.9 installed on workstations.
  • Git CLI and Visual Studio Code editor installed on workstations are strongly recommended.
  • Google Cloud account in which Boundary will be hosted, with permissions to provision these resources via the Terraform CLI.
  • (Optional) Google Cloud Storage (GCS) bucket for a GCS remote state backend that will solely be used to stand up the Boundary infrastructure via the Terraform CLI (Community Edition).

Google

  • GCP project created.
  • The following APIs enabled:
    • secretmanager.googleapis.com
    • compute.googleapis.com
    • cloudkms.googleapis.com

Networking

  • Google VPC
    • Subnet
    • Private Service Access configured
    • Firewall rules will be created by this module. If that is not possible (e.g. a Shared VPC), then the firewall rules in this module will need to be created in the Shared VPC separately.
    • Boundary network connectivity

Compute

One of the following mechanisms for shell access to Boundary VM instances:

  • Ability to enable Google IAP (this module supports this via a boolean input variable).
  • SSH key and user

Boundary

Unless deploying a Boundary HCP Worker, you will require a Boundary Enterprise Cluster deployed using the Boundary Enterprise Controller HVD on GCP GCE module.

Usage - Boundary Enterprise

  1. Create/configure/validate the applicable prerequisites.

  2. Nested within the examples directory are subdirectories that contain ready-made Terraform configurations of example scenarios for how to call and deploy this module. To get started, choose an example scenario. If you are not sure which example scenario to start with, then we recommend starting with the ingress example.

    πŸ“ Note: The friendly_name_prefix variable should be unique for every agent deployment.

  3. Copy all of the Terraform files from your example scenario of choice into a new destination directory to create your root Terraform configuration that will manage your Boundary deployment. If you are not sure where to create this new directory, it is common for us to see users create an environments/ directory at the root of this repo, and then a subdirectory for each Boundary instance deployment, like so:

    .
    └── environments
        ├── production
        │   ├── backend.tf
        │   ├── main.tf
        │   ├── outputs.tf
        │   ├── terraform.tfvars
        │   └── variables.tf
        └── sandbox
            ├── backend.tf
            ├── main.tf
            ├── outputs.tf
            ├── terraform.tfvars
            └── variables.tf

    πŸ“ Note: in this example, the user will have two separate Boundary deployments; one for their sandbox environment, and one for their production environment. This is recommended, but not required.

  4. (Optional) Uncomment and update the GCS remote state backend configuration provided in the backend.tf file with your own custom values. While this step is highly recommended, a remote backend configuration is technically not required for your Boundary deployment.

  5. Populate your own custom values into the terraform.tfvars.example file that was provided, and remove the .example file extension such that the file is now named terraform.tfvars.

  6. Navigate to the directory of your newly created Terraform configuration for your Boundary Worker deployment, and run terraform init, terraform plan, and terraform apply.

  7. After your terraform apply finishes successfully, you can monitor the installation progress by connecting to your Boundary VM instance shell via SSH or Google IAP and observing the cloud-init (user_data) logs:

    Higher-level logs:

    tail -f /var/log/boundary-cloud-init.log

    Lower-level logs:

    journalctl -xu cloud-final -f

    πŸ“ Note: the -f argument is to follow the logs as they append in real-time, and is optional. You may remove the -f for a static view.

    The log files should display the following message after the cloud-init (user_data) script finishes successfully:

    [INFO] boundary_custom_data script finished successfully!
  8. Once the cloud-init script finishes successfully, you can check the status of the boundary service while still connected to the VM via SSH:

    sudo systemctl status boundary
  9. After the Boundary Worker is deployed, it should appear in the list of workers for your Boundary cluster.
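
As a sketch of step 4 above, a GCS remote state backend configuration in backend.tf might look like the following. The bucket name and prefix are placeholders, not values from this module; substitute your own.

```hcl
# backend.tf -- hypothetical bucket name and state prefix; replace with your own.
terraform {
  backend "gcs" {
    bucket = "my-boundary-terraform-state" # pre-existing GCS bucket for state
    prefix = "boundary-worker/production"  # state object prefix per deployment
  }
}
```

Using a distinct prefix per deployment keeps the sandbox and production state objects isolated within the same bucket.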

Usage - HCP Boundary

  1. In HCP Boundary, go to Workers and begin creating a new worker. Copy the Boundary Cluster ID.

  2. Create/configure/validate the applicable prerequisites.

  3. Nested within the examples directory are subdirectories that contain ready-made Terraform configurations of example scenarios for how to call and deploy this module. To get started, choose an example scenario. If you are not sure which example scenario to start with, then we recommend starting with the default example.

  4. Copy all of the Terraform files from your example scenario of choice into a new destination directory to create your root Terraform configuration that will manage your Boundary deployment. If you are not sure where to create this new directory, it is common for us to see users create an environments/ directory at the root of this repo, and then a subdirectory for each Boundary instance deployment, like so:

    .
    └── environments
        ├── production
        │   ├── backend.tf
        │   ├── main.tf
        │   ├── outputs.tf
        │   ├── terraform.tfvars
        │   └── variables.tf
        └── sandbox
            ├── backend.tf
            ├── main.tf
            ├── outputs.tf
            ├── terraform.tfvars
            └── variables.tf

    πŸ“ Note: in this example, the user will have two separate Boundary deployments; one for their sandbox environment, and one for their production environment. This is recommended, but not required.

  5. (Optional) Uncomment and update the GCS remote state backend configuration provided in the backend.tf file with your own custom values. While this step is highly recommended, a remote backend configuration is technically not required for your Boundary deployment.

  6. Populate your own custom values into the terraform.tfvars.example file that was provided, and remove the .example file extension such that the file is now named terraform.tfvars. Ensure to set the hcp_boundary_cluster_id variable with the Boundary Cluster ID from step 1.

  7. Navigate to the directory of your newly created Terraform configuration for your Boundary Worker deployment, and run terraform init, terraform plan, and terraform apply.

  8. After your terraform apply finishes successfully, you can monitor the installation progress by connecting to your Boundary VM instance shell via SSH or Google IAP and observing the cloud-init (user_data) logs:

    Higher-level logs:

    tail -f /var/log/boundary-cloud-init.log

    Lower-level logs:

    journalctl -xu cloud-final -f

    πŸ“ Note: the -f argument is to follow the logs as they append in real-time, and is optional. You may remove the -f for a static view.

    The log files should display the following message after the cloud-init (user_data) script finishes successfully:

    [INFO] boundary_custom_data script finished successfully!
  9. Once the cloud-init script finishes successfully, you can check the status of the boundary service while still connected to the VM via SSH:

    sudo systemctl status boundary
  10. While still connected to the Boundary Worker, run sudo journalctl -xu boundary to review the Boundary logs.

  11. Copy the Worker Auth Registration Request string and paste this into the Worker Auth Registration Request field of the new Boundary Worker in the HCP console and click Register Worker.

  12. The worker should now appear in the HCP Boundary console.
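
To illustrate step 6 above, a minimal terraform.tfvars for an HCP-connected worker might look like the following. Every value shown is a placeholder; hcp_boundary_cluster_id comes from step 1.

```hcl
# terraform.tfvars -- placeholder values for illustration only.
friendly_name_prefix    = "hcpw1"
project_id              = "my-gcp-project"
region                  = "us-central1"
vpc                     = "my-vpc"
subnet_name             = "my-subnet"
hcp_boundary_cluster_id = "<your-hcp-boundary-cluster-id>"
```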

Docs

Below are links to docs pages related to deployment customizations and day 2 operations of your Boundary Worker instance.

Module support

This open source software is maintained by the HashiCorp Technical Field Organization, independently of our enterprise products. While our Support Engineering team provides dedicated support for our enterprise offerings, this open source software is not included.

  • For help using this open source software, please engage your account team.
  • To report bugs/issues with this open source software, please open them directly against this code repository using the GitHub issues feature.

Please note that there is no official Service Level Agreement (SLA) for support of this software as a HashiCorp customer. This software falls under the definition of Community Software/Versions in your Agreement. We appreciate your understanding and collaboration in improving our open source projects.

Requirements

| Name | Version |
|------|---------|
| terraform | ~> 1.9 |
| google | ~> 5.39 |
| google-beta | ~> 5.39 |
| random | ~> 3.6 |

Providers

| Name | Version |
|------|---------|
| cloudinit | n/a |
| google | ~> 5.39 |

Resources

| Name | Type |
|------|------|
| google_compute_address.boundary_worker_proxy_frontend_lb | resource |
| google_compute_firewall.allow_9202 | resource |
| google_compute_firewall.allow_iap | resource |
| google_compute_firewall.allow_ssh | resource |
| google_compute_firewall.health_checks | resource |
| google_compute_forwarding_rule.boundary_worker_proxy_frontend_lb | resource |
| google_compute_health_check.boundary_auto_healing | resource |
| google_compute_instance_template.boundary | resource |
| google_compute_region_backend_service.boundary_worker_proxy_backend_lb | resource |
| google_compute_region_health_check.boundary_worker_proxy_backend_lb | resource |
| google_compute_region_instance_group_manager.boundary | resource |
| google_kms_crypto_key_iam_member.worker_operator | resource |
| google_kms_crypto_key_iam_member.worker_viewer | resource |
| google_service_account.boundary | resource |
| google_service_account_key.boundary | resource |
| cloudinit_config.boundary_cloudinit | data source |
| google_client_config.default | data source |
| google_compute_image.boundary | data source |
| google_compute_network.vpc | data source |
| google_compute_subnetwork.subnet | data source |
| google_compute_zones.up | data source |
| google_kms_crypto_key.key | data source |
| google_kms_key_ring.key_ring | data source |

Inputs

| Name | Description | Type | Default | Required |
|------|-------------|------|---------|----------|
| additional_package_names | List of additional repository package names to install. | `set(string)` | `[]` | no |
| boundary_upstream | List of FQDNs or IP addresses for the worker to connect to. | `list(string)` | `null` | no |
| boundary_upstream_port | Port for the worker to connect to. | `number` | `9201` | no |
| boundary_version | Version of Boundary to install. | `string` | `"0.17.1+ent"` | no |
| cidr_ingress_9202_allow | CIDR ranges to allow port 9202 traffic inbound to Boundary instance(s). | `list(string)` | `null` | no |
| cidr_ingress_ssh_allow | CIDR ranges to allow SSH traffic inbound to Boundary instance(s) via IAP tunnel. | `list(string)` | `null` | no |
| common_labels | Common labels to apply to GCP resources. | `map(string)` | `{}` | no |
| create_lb | Boolean to create a Network Load Balancer for Boundary. Should be `true` if downstream workers will connect to these workers. | `bool` | `false` | no |
| disk_size_gb | Size in gigabytes of the root disk of Boundary instance(s). | `number` | `50` | no |
| enable_iap | Boolean to enable Google IAP TCP forwarding (https://cloud.google.com/iap/docs/using-tcp-forwarding#console). | `bool` | `true` | no |
| enable_session_recording | Boolean to enable session recording in Boundary. | `bool` | `false` | no |
| friendly_name_prefix | Friendly name prefix used for uniquely naming resources. This should be unique across all deployments. | `string` | n/a | yes |
| hcp_boundary_cluster_id | ID of the Boundary cluster in HCP. Only used when using HCP Boundary. | `string` | `null` | no |
| image_name | VM image for Boundary instance(s). | `string` | `"ubuntu-2404-noble-amd64-v20240607"` | no |
| image_project | ID of the project in which the image belongs. | `string` | `"ubuntu-os-cloud"` | no |
| initial_delay_sec | Number of seconds that the managed instance group waits before it applies autohealing policies to new or recently recreated instances. | `number` | `1200` | no |
| instance_count | Target size of the Managed Instance Group for the number of Boundary instances to run. Only specify a value greater than 1 if `enable_active_active` is set to `true`. | `number` | `1` | no |
| key_name | Name of the Worker KMS key. | `string` | `null` | no |
| key_ring_location | Location of the KMS key ring. If not set, the region of the Boundary deployment will be used. | `string` | `null` | no |
| key_ring_name | Name of the KMS key ring. | `string` | `null` | no |
| machine_type | Machine type of Boundary instance(s) (https://cloud.google.com/compute/docs/machine-resource). | `string` | `"n2-standard-4"` | no |
| project_id | ID of the GCP project to create resources in. | `string` | n/a | yes |
| region | Region of the GCP project to create resources in. | `string` | n/a | yes |
| subnet_name | Existing VPC subnetwork for Boundary instance(s) and optionally the Boundary frontend load balancer. | `string` | n/a | yes |
| vpc | Existing VPC network to deploy Boundary resources into. | `string` | n/a | yes |
| vpc_project_id | ID of the GCP project where the existing VPC resides, if different from the default project. | `string` | `null` | no |
| worker_is_internal | Boolean to give the worker an internal IP address only; if `false`, the worker is given an external IP address. | `bool` | `true` | no |
| worker_tags | Map of extra tags to apply to the Boundary Worker configuration. `var.common_labels` will be merged with this map. | `map(string)` | `{}` | no |
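
As a minimal sketch of how these inputs fit together in a module call; the source path shown is an assumption for a local checkout, and all values are placeholders. In practice, start from the ready-made configurations in the examples directory.

```hcl
module "boundary_worker" {
  # Assumption: this module is vendored locally; adjust the source as needed.
  source = "../../"

  # Required inputs (all values are placeholders)
  friendly_name_prefix = "bw1" # must be unique per deployment
  project_id           = "my-gcp-project"
  region               = "us-central1"
  vpc                  = "my-vpc"
  subnet_name          = "my-subnet"

  # Optional: point the worker at existing Boundary controllers (placeholder FQDN).
  boundary_upstream      = ["boundary.example.internal"]
  boundary_upstream_port = 9201
}
```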

Outputs

| Name | Description |
|------|-------------|
| proxy_lb_ip_address | IP address of the Proxy Load Balancer. |
