A Terraform module for provisioning and installing Boundary Enterprise Worker on AWS EC2 as described in HashiCorp Validated Designs

hashicorp/terraform-aws-boundary-enterprise-worker-hvd


Boundary Enterprise Worker HVD on AWS EC2

Terraform module aligned with HashiCorp Validated Designs (HVD) to deploy Boundary Enterprise Worker(s) on Amazon Web Services (AWS) using EC2 instances. This module is designed to work with the complementary Boundary Enterprise Controller HVD on AWS EC2 module.

Prerequisites

General

  • Terraform CLI >= 1.9 installed on workstations.
  • Git CLI and Visual Studio Code editor installed on workstations are strongly recommended.
  • AWS account that Boundary will be hosted in with permissions to provision these resources via Terraform CLI.
  • (Optional) AWS S3 bucket for the S3 remote state backend that will be used solely to stand up the Boundary infrastructure via the Terraform CLI (Community Edition).

Networking

  • AWS VPC ID and the following subnets:
    • EC2 (worker) subnet IDs.
    • (Optional) NLB Subnet IDs if a load balancer will be deployed.
  • (Optional) KMS VPC Endpoint configured within VPC.
  • Security Groups:
    • This module will create the necessary Security Groups and attach them to the applicable resources.
    • Ensure the Boundary network connectivity requirements are met.

Compute

One of the following mechanisms for shell access to Boundary EC2 instances:

  • EC2 SSH Key Pair.
  • Ability to enable AWS SSM (this module supports this via a boolean input variable).

Boundary

Unless you are deploying an HCP Boundary Worker, you will need a Boundary Enterprise cluster deployed using the Boundary Enterprise Controller HVD on AWS EC2 module.

Usage - Boundary Enterprise

  1. Create/configure/validate the applicable prerequisites.

  2. Nested within the examples directory are subdirectories that contain ready-made Terraform configurations of example scenarios for how to call and deploy this module. To get started, choose an example scenario. If you are not sure which example scenario to start with, then we recommend starting with the ingress example.

  3. Copy all of the Terraform files from your example scenario of choice into a new destination directory to create your root Terraform configuration that will manage your Boundary deployment. If you are not sure where to create this new directory, it is common for us to see users create an environments/ directory at the root of this repo, and then a subdirectory for each Boundary instance deployment, like so:

    .
    └── environments
        ├── production
        │   ├── backend.tf
        │   ├── main.tf
        │   ├── outputs.tf
        │   ├── terraform.tfvars
        │   └── variables.tf
        └── sandbox
            ├── backend.tf
            ├── main.tf
            ├── outputs.tf
            ├── terraform.tfvars
            └── variables.tf

    πŸ“ Note: in this example, the user will have two separate Boundary deployments; one for their sandbox environment, and one for their production environment. This is recommended, but not required.

  4. (Optional) Uncomment and update the S3 remote state backend configuration provided in the backend.tf file with your own custom values. While this step is highly recommended, a remote backend configuration is technically not required for your Boundary deployment.
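    As a sketch, an uncommented S3 backend block looks like the following; the bucket name, key path, and region are placeholder assumptions:

    ```hcl
    # backend.tf - hedged example; bucket/key/region values are assumptions.
    terraform {
      backend "s3" {
        bucket = "my-boundary-tfstate-bucket"                 # pre-existing S3 bucket (prerequisite)
        key    = "boundary-worker/sandbox/terraform.tfstate"  # one key per deployment
        region = "us-east-1"
      }
    }
    ```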

  5. Populate your own custom values into the terraform.tfvars.example file that was provided, and remove the .example file extension such that the file is now named terraform.tfvars.

    πŸ“ Note: The friendly_name_prefix variable should be unique for every agent deployment.

  6. Navigate to the directory of your newly created Terraform configuration for your Boundary Worker deployment, and run terraform init, terraform plan, and terraform apply.

  7. After the terraform apply finishes successfully, you can monitor the install progress by connecting to the VM in your Boundary worker ASG using SSH or AWS SSM and observing the cloud-init logs:

    Higher-level logs:

    tail -f /var/log/boundary-cloud-init.log

    Lower-level logs:

    journalctl -xu cloud-final -f

    πŸ“ Note: the -f argument is to follow the logs as they append in real-time, and is optional. You may remove the -f for a static view.

    The log files should display the following message after the cloud-init (user_data) script finishes successfully:

    [INFO] boundary_custom_data script finished successfully!

  8. Once the cloud-init script finishes successfully, while still connected to the VM via SSH you can check the status of the boundary service:

    sudo systemctl status boundary

  9. After the deployment completes, the Boundary Worker should appear in the Workers list of your Boundary cluster.

Usage - HCP Boundary

  1. In HCP Boundary, go to Workers and begin creating a new worker. Copy the Boundary Cluster ID.

  2. Create/configure/validate the applicable prerequisites.

  3. Nested within the examples directory are subdirectories that contain ready-made Terraform configurations of example scenarios for how to call and deploy this module. To get started, choose an example scenario. If you are not sure which example scenario to start with, then we recommend starting with the default example.

  4. Copy all of the Terraform files from your example scenario of choice into a new destination directory to create your root Terraform configuration that will manage your Boundary deployment. If you are not sure where to create this new directory, it is common for us to see users create an environments/ directory at the root of this repo, and then a subdirectory for each Boundary instance deployment, like so:

      .
      └── environments
          ├── production
          │   ├── backend.tf
          │   ├── main.tf
          │   ├── outputs.tf
          │   ├── terraform.tfvars
          │   └── variables.tf
          └── sandbox
              ├── backend.tf
              ├── main.tf
              ├── outputs.tf
              ├── terraform.tfvars
              └── variables.tf

    πŸ“ Note: in this example, the user will have two separate Boundary deployments; one for their sandbox environment, and one for their production environment. This is recommended, but not required.

  5. (Optional) Uncomment and update the S3 remote state backend configuration provided in the backend.tf file with your own custom values. While this step is highly recommended, it is technically not required to use a remote backend config for your Boundary deployment.

  6. Populate your own custom values into the terraform.tfvars.example file that was provided, and remove the .example file extension such that the file is now named terraform.tfvars. Be sure to set the hcp_boundary_cluster_id variable to the Boundary Cluster ID from step 1.

    πŸ“ Note: The friendly_name_prefix variable should be unique for every agent deployment.

  7. Navigate to the directory of your newly created Terraform configuration for your Boundary Worker deployment, and run terraform init, terraform plan, and terraform apply.

  8. After the terraform apply finishes successfully, you can monitor the install progress by connecting to the VM in your Boundary worker ASG using SSH or AWS SSM and observing the cloud-init logs:

    Higher-level logs:

    tail -f /var/log/boundary-cloud-init.log

    Lower-level logs:

    journalctl -xu cloud-final -f

    πŸ“ Note: the -f argument is to follow the logs as they append in real-time, and is optional. You may remove the -f for a static view.

    The log files should display the following message after the cloud-init (user_data) script finishes successfully:

    [INFO] boundary_custom_data script finished successfully!

  9. Once the cloud-init script finishes successfully, while still connected to the VM via SSH you can check the status of the boundary service:

    sudo systemctl status boundary

  10. While still connected to the Boundary Worker, run sudo journalctl -xu boundary to review the Boundary logs.

  11. Copy the Worker Auth Registration Request string and paste it into the Worker Auth Registration Request field of the new Boundary Worker in the HCP console, then click Register Worker.

  12. The worker should now show up in the HCP Boundary console.

Docs

Below are links to docs pages related to deployment customizations and day 2 operations of your Boundary Worker instance.

Module support

This open source software is maintained by the HashiCorp Technical Field Organization, independently of our enterprise products. While our Support Engineering team provides dedicated support for our enterprise offerings, this open source software is not included.

  • For help using this open source software, please engage your account team.
  • To report bugs/issues with this open source software, please open them directly against this code repository using the GitHub issues feature.

Please note that there is no official Service Level Agreement (SLA) for support of this software as a HashiCorp customer. This software falls under the definition of Community Software/Versions in your Agreement. We appreciate your understanding and collaboration in improving our open source projects.

Requirements

Name Version
terraform >= 1.9
aws >= 5.51.0

Providers

Name Version
aws >= 5.51.0

Resources

Name Type
aws_autoscaling_group.boundary resource
aws_iam_instance_profile.boundary_ec2 resource
aws_iam_role.boundary_ec2 resource
aws_iam_role_policy.boundary_ec2 resource
aws_iam_role_policy_attachment.aws_ssm resource
aws_launch_template.boundary resource
aws_lb.proxy resource
aws_lb_listener.proxy_lb_9202 resource
aws_lb_target_group.proxy_lb_9202 resource
aws_security_group.ec2_allow_egress resource
aws_security_group.ec2_allow_ingress resource
aws_security_group.proxy_lb_allow_egress resource
aws_security_group.proxy_lb_allow_ingress resource
aws_security_group_rule.ec2_allow_egress_all resource
aws_security_group_rule.ec2_allow_ingress_9202_cidr resource
aws_security_group_rule.ec2_allow_ingress_9202_from_lb resource
aws_security_group_rule.ec2_allow_ingress_9202_sg resource
aws_security_group_rule.ec2_allow_ingress_9203_from_lb resource
aws_security_group_rule.ec2_allow_ingress_ssh resource
aws_security_group_rule.proxy_lb_allow_egress_all resource
aws_security_group_rule.proxy_lb_allow_ingress_9202_cidr resource
aws_security_group_rule.proxy_lb_allow_ingress_9202_sg resource
aws_ami.amzn2 data source
aws_ami.centos data source
aws_ami.rhel data source
aws_ami.ubuntu data source
aws_availability_zones.available data source
aws_caller_identity.current data source
aws_iam_policy_document.assume_role_policy data source
aws_iam_policy_document.boundary_kms data source
aws_iam_policy_document.boundary_session_recording_kms data source
aws_iam_policy_document.combined data source
aws_iam_policy_document.ec2_allow_ebs_kms_cmk data source
aws_iam_role.boundary_ec2 data source
aws_kms_key.worker data source
aws_region.current data source

Inputs

Name Description Type Default Required
additional_package_names List of additional repository package names to install set(string) [] no
asg_health_check_grace_period The amount of time to wait for a new Boundary EC2 instance to become healthy. If this threshold is breached, the ASG will terminate the instance and launch a new one. number 300 no
asg_instance_count Desired number of Boundary EC2 instances to run in Autoscaling Group. Leave at 1 unless Active/Active is enabled. number 1 no
asg_max_size Max number of Boundary EC2 instances to run in Autoscaling Group. number 3 no
boundary_upstream List of IP addresses or FQDNs for the worker to initially connect to. This could be a controller or worker. This is not used when connecting to HCP Boundary. list(string) null no
boundary_upstream_port Port for the worker to connect to. Typically 9201 to connect to a controller, 9202 to a worker. number 9202 no
boundary_version Version of Boundary to install. string "0.17.1+ent" no
boundary_worker_iam_role_name Existing IAM Role to use for the Boundary Worker EC2 instances. This must be provided if create_boundary_worker_role is set to false. string null no
bsr_s3_bucket_arn ARN of the S3 bucket used to store Boundary session recordings. string null no
cidr_allow_ingress_boundary_9202 List of CIDR ranges to allow ingress traffic on port 9202 to workers. list(string) null no
cidr_allow_ingress_ec2_ssh List of CIDR ranges to allow SSH ingress to Boundary EC2 instance (i.e. bastion IP, client/workstation IP, etc.). list(string) [] no
common_tags Map of common tags for taggable AWS resources. map(string) {} no
create_boundary_worker_role Boolean to create an IAM role for Boundary Worker EC2 instances. bool true no
create_lb Boolean to create a Network Load Balancer for Boundary. Should be true if downstream workers will connect to these workers. bool false no
ebs_iops The amount of IOPS to provision for a gp3 volume. Must be at least 3000. number 3000 no
ebs_is_encrypted Boolean for encrypting the root block device of the Boundary EC2 instance(s). bool false no
ebs_kms_key_arn ARN of KMS key to encrypt EC2 EBS volumes. string null no
ebs_throughput The throughput to provision for a gp3 volume in MB/s. Must be at least 125 MB/s. number 125 no
ebs_volume_size The size (GB) of the root EBS volume for Boundary EC2 instances. Must be at least 50 GB. number 50 no
ebs_volume_type EBS volume type for Boundary EC2 instances. string "gp3" no
ec2_allow_ssm Boolean to attach the AmazonSSMManagedInstanceCore policy to the Boundary instance role, allowing the SSM agent (if present) to function. bool false no
ec2_ami_id Custom AMI ID for Boundary EC2 Launch Template. If specified, value of os_distro must coincide with this custom AMI OS distro. string null no
ec2_instance_size EC2 instance type for Boundary EC2 Launch Template. Regions may have different instance types available. string "m5.2xlarge" no
ec2_os_distro Linux OS distribution for Boundary EC2 instance. Choose from amzn2, ubuntu, rhel, centos. string "ubuntu" no
ec2_ssh_key_pair Name of existing SSH key pair to attach to Boundary EC2 instance. string "" no
enable_session_recording Boolean to enable session recording. bool false no
friendly_name_prefix Friendly name prefix used for uniquely naming AWS resources. This should be unique across all deployments. string n/a yes
hcp_boundary_cluster_id ID of the Boundary cluster in HCP. Only used when using HCP Boundary. string "" no
kms_endpoint AWS VPC endpoint for KMS service. string "" no
kms_worker_arn KMS ID of the worker-auth kms key. string "" no
lb_is_internal Boolean to create an internal (private) Proxy load balancer. The lb_subnet_ids must be private subnets if this is set to true. bool true no
lb_subnet_ids List of subnet IDs to use for the proxy Network Load Balancer. Unless the lb needs to be publicly exposed (example: downstream Boundary Workers connecting to the ingress workers over the Internet), use private subnets. list(string) null no
sg_allow_ingress_boundary_9202 List of Security Groups to allow ingress traffic on port 9202 to workers. list(string) [] no
vpc_id ID of VPC where Boundary will be deployed. string n/a yes
worker_is_internal Boolean to give the worker an internal IP address only, rather than an external IP address. bool true no
worker_subnet_ids List of subnet IDs to use for the EC2 instance. Unless the workers need to be publicly exposed (example: ingress workers), use private subnets. list(string) n/a yes
worker_tags Map of extra tags to apply to Boundary Worker Configuration. var.common_tags will be merged with this map. map(string) {} no

Outputs

Name Description
boundary_worker_iam_role_name Name of the IAM role for Boundary Worker instances.
proxy_lb_dns_name DNS name of the Load Balancer.
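
When create_lb is true, downstream workers can be pointed at the proxy load balancer via the proxy_lb_dns_name output. A minimal sketch of surfacing it from the root configuration; the module label boundary_worker is an assumption:

```hcl
# outputs.tf - re-export the worker NLB DNS name from the root configuration.
# "module.boundary_worker" assumes the module block was labeled boundary_worker.
output "proxy_lb_dns_name" {
  description = "DNS name of the worker proxy load balancer."
  value       = module.boundary_worker.proxy_lb_dns_name
}
```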
