Terraform module aligned with HashiCorp Validated Designs (HVD) to deploy Boundary Enterprise Worker(s) on Amazon Web Services (AWS) using EC2 instances. This module is designed to work with the complementary Boundary Enterprise Controller HVD on AWS EC2 module.
- Terraform CLI `>= 1.9` installed on workstations.
- Git CLI and Visual Studio Code editor installed on workstations are strongly recommended.
- AWS account that Boundary will be hosted in, with permissions to provision these resources via the Terraform CLI.
- (Optional) AWS S3 bucket for S3 Remote State backend that will solely be used to stand up the Boundary infrastructure via Terraform CLI (Community Edition).
- AWS VPC ID and the following subnets:
- EC2 (worker) subnet IDs.
- (Optional) NLB Subnet IDs if a load balancer will be deployed.
- (Optional) KMS VPC Endpoint configured within VPC.
- Security Groups:
- This module will create the necessary Security Groups and attach them to the applicable resources.
- Ensure the Boundary network connectivity requirements are met.
- One of the following mechanisms for shell access to the Boundary EC2 instances:
- EC2 SSH Key Pair.
- Ability to enable AWS SSM (this module supports this via a boolean input variable).
- Unless deploying a Boundary HCP Worker, you will need a Boundary Enterprise cluster deployed using the Boundary Enterprise Controller HVD on AWS EC2 module.
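If you prefer to write the root configuration from scratch rather than copy one of the examples described below, a minimal call to this module might look like the following sketch; the `source` path and all input values are placeholders, not values from this repo:

```hcl
module "boundary_worker" {
  # Placeholder source; reference this module from your own path or registry.
  source = "../../modules/boundary-worker-hvd"

  friendly_name_prefix = "sandbox"
  vpc_id               = "vpc-0123456789abcdef0"
  worker_subnet_ids    = ["subnet-0123456789abcdef0"]

  # Upstream controller addresses (omit when connecting to HCP Boundary).
  boundary_upstream      = ["boundary-controller.internal.example.com"]
  boundary_upstream_port = 9201
}
```

The input names above come from the Inputs table in this README; only `friendly_name_prefix`, `vpc_id`, and `worker_subnet_ids` are required.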
- Create/configure/validate the applicable prerequisites.
- Nested within the `examples` directory are subdirectories that contain ready-made Terraform configurations of example scenarios for how to call and deploy this module. To get started, choose an example scenario. If you are not sure which example scenario to start with, we recommend starting with the `ingress` example.
- Copy all of the Terraform files from your example scenario of choice into a new destination directory to create your root Terraform configuration that will manage your Boundary deployment. If you are not sure where to create this new directory, it is common to create an `environments/` directory at the root of this repo, and then a subdirectory for each Boundary instance deployment, like so:

  ```
  .
  └── environments
      ├── production
      │   ├── backend.tf
      │   ├── main.tf
      │   ├── outputs.tf
      │   ├── terraform.tfvars
      │   └── variables.tf
      └── sandbox
          ├── backend.tf
          ├── main.tf
          ├── outputs.tf
          ├── terraform.tfvars
          └── variables.tf
  ```

  📝 Note: in this example, the user has two separate Boundary deployments: one for their `sandbox` environment and one for their `production` environment. This is recommended, but not required.

- (Optional) Uncomment and update the S3 remote state backend configuration provided in the `backend.tf` file with your own custom values. While this step is highly recommended, it is technically not required to use a remote backend config for your Boundary deployment.
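Once uncommented and populated, the S3 backend block in `backend.tf` might look like the following sketch; the bucket name, key, and region shown here are placeholders, not values from this module:

```hcl
terraform {
  backend "s3" {
    # Placeholder values; substitute your own bucket, key, and region.
    bucket = "my-boundary-tfstate-bucket"
    key    = "boundary-worker/terraform.tfstate"
    region = "us-east-1"
  }
}
```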
- Populate your own custom values into the `terraform.tfvars.example` file that was provided, and remove the `.example` file extension such that the file is now named `terraform.tfvars`.

  📝 Note: The `friendly_name_prefix` variable should be unique for every worker deployment.
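A populated `terraform.tfvars` for this scenario might look like the following sketch; every value below is a placeholder, and the variable names come from the Inputs table in this README:

```hcl
# Placeholder values; replace with IDs from your own AWS account.
friendly_name_prefix = "sandbox"
vpc_id               = "vpc-0123456789abcdef0"
worker_subnet_ids    = ["subnet-0123456789abcdef0", "subnet-0fedcba9876543210"]

# Upstream controller addresses (not used when connecting to HCP Boundary).
boundary_upstream      = ["boundary-controller.internal.example.com"]
boundary_upstream_port = 9201

common_tags = {
  App = "boundary-worker"
  Env = "sandbox"
}
```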
- Navigate to the directory of your newly created Terraform configuration for your Boundary Worker deployment, and run `terraform init`, `terraform plan`, and `terraform apply`.
- After the `terraform apply` finishes successfully, you can monitor the install progress by connecting to the VM in your Boundary Worker ASG using SSH or AWS SSM and observing the cloud-init logs:

  Higher-level logs: `tail -f /var/log/boundary-cloud-init.log`

  Lower-level logs: `journalctl -xu cloud-final -f`

  📝 Note: the `-f` argument follows the logs as they are appended in real time, and is optional. You may remove the `-f` for a static view.

  The log files should display the following message after the cloud-init (user_data) script finishes successfully: `[INFO] boundary_custom_data script finished successfully!`
- Once the cloud-init script finishes successfully, while still connected to the VM via SSH, you can check the status of the `boundary` service: `sudo systemctl status boundary`
- After the Boundary Worker is deployed, it should show up on the Workers page of your Boundary cluster.
- In HCP Boundary, go to `Workers` and start creating a new worker. Copy the `Boundary Cluster ID`.
- Create/configure/validate the applicable prerequisites.
- Nested within the `examples` directory are subdirectories that contain ready-made Terraform configurations of example scenarios for how to call and deploy this module. To get started, choose an example scenario. If you are not sure which example scenario to start with, we recommend starting with the `default` example.
- Copy all of the Terraform files from your example scenario of choice into a new destination directory to create your root Terraform configuration that will manage your Boundary deployment. If you are not sure where to create this new directory, it is common to create an `environments/` directory at the root of this repo, and then a subdirectory for each Boundary instance deployment, like so:

  ```
  .
  └── environments
      ├── production
      │   ├── backend.tf
      │   ├── main.tf
      │   ├── outputs.tf
      │   ├── terraform.tfvars
      │   └── variables.tf
      └── sandbox
          ├── backend.tf
          ├── main.tf
          ├── outputs.tf
          ├── terraform.tfvars
          └── variables.tf
  ```

  📝 Note: in this example, the user has two separate Boundary deployments: one for their `sandbox` environment and one for their `production` environment. This is recommended, but not required.
- (Optional) Uncomment and update the S3 remote state backend configuration provided in the `backend.tf` file with your own custom values. While this step is highly recommended, it is technically not required to use a remote backend config for your Boundary deployment.
- Populate your own custom values into the `terraform.tfvars.example` file that was provided, and remove the `.example` file extension such that the file is now named `terraform.tfvars`. Ensure the `hcp_boundary_cluster_id` variable is set to the Boundary Cluster ID from step 1.

  📝 Note: The `friendly_name_prefix` variable should be unique for every worker deployment.
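A populated `terraform.tfvars` for the HCP scenario might look like the following sketch; every value below is a placeholder, and the variable names come from the Inputs table in this README:

```hcl
# Placeholder values; replace with IDs from your own AWS account and HCP org.
friendly_name_prefix    = "sandbox"
vpc_id                  = "vpc-0123456789abcdef0"
worker_subnet_ids       = ["subnet-0123456789abcdef0"]
hcp_boundary_cluster_id = "01234567-89ab-cdef-0123-456789abcdef" # from step 1
```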
- Navigate to the directory of your newly created Terraform configuration for your Boundary Worker deployment, and run `terraform init`, `terraform plan`, and `terraform apply`.
- After the `terraform apply` finishes successfully, you can monitor the install progress by connecting to the VM in your Boundary Worker ASG using SSH or AWS SSM and observing the cloud-init logs:

  Higher-level logs: `tail -f /var/log/boundary-cloud-init.log`

  Lower-level logs: `journalctl -xu cloud-final -f`

  📝 Note: the `-f` argument follows the logs as they are appended in real time, and is optional. You may remove the `-f` for a static view.

  The log files should display the following message after the cloud-init (user_data) script finishes successfully: `[INFO] boundary_custom_data script finished successfully!`
- Once the cloud-init script finishes successfully, while still connected to the VM via SSH, you can check the status of the `boundary` service: `sudo systemctl status boundary`
- While still connected to the Boundary Worker, run `sudo journalctl -xu boundary` to review the Boundary logs.
- Copy the `Worker Auth Registration Request` string and paste it into the `Worker Auth Registration Request` field of the new Boundary Worker in the HCP console, then click `Register Worker`.
- The worker should now show up in the HCP Boundary console.
Below are links to docs pages related to deployment customizations and day 2 operations of your Boundary Worker instance.
- Deployment Customizations
- Upgrading Boundary version
- Updating/modifying Boundary configuration settings
This open source software is maintained by the HashiCorp Technical Field Organization, independently of our enterprise products. While our Support Engineering team provides dedicated support for our enterprise offerings, this open source software is not included.
- For help using this open source software, please engage your account team.
- To report bugs/issues with this open source software, please open them directly against this code repository using the GitHub issues feature.
Please note that there is no official Service Level Agreement (SLA) for support of this software as a HashiCorp customer. This software falls under the definition of Community Software/Versions in your Agreement. We appreciate your understanding and collaboration in improving our open source projects.
| Name | Version |
|---|---|
| terraform | >= 1.9 |
| aws | >= 5.51.0 |
| Name | Version |
|---|---|
| aws | >= 5.51.0 |
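The version constraints in the tables above can be pinned in your root module with a `terraform` block; a minimal sketch:

```hcl
terraform {
  required_version = ">= 1.9"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 5.51.0"
    }
  }
}
```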
| Name | Description | Type | Default | Required |
|---|---|---|---|---|
| additional_package_names | List of additional repository package names to install. | `set(string)` | `[]` | no |
| asg_health_check_grace_period | The amount of time to wait for a new Boundary EC2 instance to become healthy. If this threshold is breached, the ASG will terminate the instance and launch a new one. | `number` | `300` | no |
| asg_instance_count | Desired number of Boundary EC2 instances to run in the Autoscaling Group. Leave at 1 unless Active/Active is enabled. | `number` | `1` | no |
| asg_max_size | Max number of Boundary EC2 instances to run in the Autoscaling Group. | `number` | `3` | no |
| boundary_upstream | List of IP addresses or FQDNs for the worker to initially connect to. This could be a controller or a worker. Not used when connecting to HCP Boundary. | `list(string)` | `null` | no |
| boundary_upstream_port | Port for the worker to connect to. Typically 9201 to connect to a controller, 9202 to a worker. | `number` | `9202` | no |
| boundary_version | Version of Boundary to install. | `string` | `"0.17.1+ent"` | no |
| boundary_worker_iam_role_name | Existing IAM role to use for the Boundary Worker EC2 instances. Must be provided if `create_boundary_worker_role` is set to `false`. | `string` | `null` | no |
| bsr_s3_bucket_arn | ARN of the S3 bucket used to store Boundary session recordings. | `string` | `null` | no |
| cidr_allow_ingress_boundary_9202 | List of CIDR ranges to allow ingress traffic on port 9202 to workers. | `list(string)` | `null` | no |
| cidr_allow_ingress_ec2_ssh | List of CIDR ranges to allow SSH ingress to the Boundary EC2 instance(s) (i.e. bastion IP, client/workstation IP, etc.). | `list(string)` | `[]` | no |
| common_tags | Map of common tags for taggable AWS resources. | `map(string)` | `{}` | no |
| create_boundary_worker_role | Boolean to create an IAM role for the Boundary Worker EC2 instances. | `bool` | `true` | no |
| create_lb | Boolean to create a Network Load Balancer for Boundary. Should be `true` if downstream workers will connect to these workers. | `bool` | `false` | no |
| ebs_iops | The amount of IOPS to provision for a `gp3` volume. Must be at least 3000. | `number` | `3000` | no |
| ebs_is_encrypted | Boolean for encrypting the root block device of the Boundary EC2 instance(s). | `bool` | `false` | no |
| ebs_kms_key_arn | ARN of the KMS key used to encrypt EC2 EBS volumes. | `string` | `null` | no |
| ebs_throughput | The throughput to provision for a `gp3` volume in MB/s. Must be at least 125 MB/s. | `number` | `125` | no |
| ebs_volume_size | The size (GB) of the root EBS volume for Boundary EC2 instances. Must be at least 50 GB. | `number` | `50` | no |
| ebs_volume_type | EBS volume type for Boundary EC2 instances. | `string` | `"gp3"` | no |
| ec2_allow_ssm | Boolean to attach the `AmazonSSMManagedInstanceCore` policy to the Boundary instance role, allowing the SSM agent (if present) to function. | `bool` | `false` | no |
| ec2_ami_id | Custom AMI ID for the Boundary EC2 Launch Template. If specified, the value of `ec2_os_distro` must coincide with this custom AMI OS distro. | `string` | `null` | no |
| ec2_instance_size | EC2 instance type for the Boundary EC2 Launch Template. Regions may have different instance types available. | `string` | `"m5.2xlarge"` | no |
| ec2_os_distro | Linux OS distribution for the Boundary EC2 instance. Choose from `amzn2`, `ubuntu`, `rhel`, `centos`. | `string` | `"ubuntu"` | no |
| ec2_ssh_key_pair | Name of existing SSH key pair to attach to the Boundary EC2 instance. | `string` | `""` | no |
| enable_session_recording | Boolean to enable session recording. | `bool` | `false` | no |
| friendly_name_prefix | Friendly name prefix used for uniquely naming AWS resources. This should be unique across all deployments. | `string` | n/a | yes |
| hcp_boundary_cluster_id | ID of the Boundary cluster in HCP. Only used when connecting to HCP Boundary. | `string` | `""` | no |
| kms_endpoint | AWS VPC endpoint for the KMS service. | `string` | `""` | no |
| kms_worker_arn | ARN of the worker-auth KMS key. | `string` | `""` | no |
| lb_is_internal | Boolean to create an internal (private) proxy load balancer. The `lb_subnet_ids` must be private subnets if this is set to `true`. | `bool` | `true` | no |
| lb_subnet_ids | List of subnet IDs to use for the proxy Network Load Balancer. Unless the LB needs to be publicly exposed (example: downstream Boundary Workers connecting to the ingress workers over the Internet), use private subnets. | `list(string)` | `null` | no |
| sg_allow_ingress_boundary_9202 | List of security groups to allow ingress traffic on port 9202 to workers. | `list(string)` | `[]` | no |
| vpc_id | ID of the VPC where Boundary will be deployed. | `string` | n/a | yes |
| worker_is_internal | Boolean to give the worker an internal IP address only; if `false`, it is given an external IP address. | `bool` | `true` | no |
| worker_subnet_ids | List of subnet IDs to use for the EC2 instance(s). Unless the workers need to be publicly exposed (example: ingress workers), use private subnets. | `list(string)` | n/a | yes |
| worker_tags | Map of extra tags to apply to the Boundary Worker configuration. `var.common_tags` will be merged with this map. | `map(string)` | `{}` | no |
| Name | Description |
|---|---|
| boundary_worker_iam_role_name | Name of the IAM role for Boundary Worker instances. |
| proxy_lb_dns_name | DNS name of the Load Balancer. |