Terraform module aligned with HashiCorp Validated Designs (HVD) to deploy Terraform Enterprise on Google Kubernetes Engine (GKE). This module supports bringing your own GKE cluster, or optionally creating a new GKE cluster dedicated to running TFE. This module does not use the Kubernetes or Helm Terraform providers, but rather includes Post Steps for the application layer portion of the deployment leveraging the kubectl and helm CLIs.
- TFE license file (e.g. `terraform.hclic`)
- Terraform CLI (version >= 1.9) installed on workstation
- General understanding of how to use Terraform (Community Edition)
- General understanding of how to use Google Cloud Platform (GCP)
- General understanding of how to use Kubernetes and Helm
- `gcloud` CLI installed on workstation
- `kubectl` CLI and `helm` CLI installed on workstation
- `git` CLI and Visual Studio Code editor installed on workstation are strongly recommended
- GCP project that TFE will be deployed in, with permissions to provision these resources via Terraform CLI
- (Optional) GCS bucket for GCS remote state backend that will be used to manage the Terraform state of this TFE deployment (out-of-band from the TFE application) via Terraform CLI (Community Edition)
- VPC network that TFE will be deployed in
- Private Service Access (PSA) configured in VPC to enable private connectivity from GKE cluster/TFE pods to Cloud SQL for PostgreSQL and Memorystore for Redis
- Subnet for GKE cluster (if `create_gke_cluster` is `true`). It is highly recommended that the subnet has Private Google Access enabled for private connectivity from the GKE cluster to Google Cloud Storage.
- Static IP address for TFE load balancer (whether to be associated with a Kubernetes service or ingress controller load balancer)
- Chosen fully qualified domain name (FQDN) for TFE (e.g. `tfe.gcp.example.com`)
- Allow `TCP:443` ingress to TFE load balancer subnet from CIDR ranges of TFE users/clients, VCS, and any other systems that need to access TFE
- Allow `TCP:443` ingress to GKE/TFE pods subnet from the source IP ranges listed here for GCP load balancer health check probes
- (Optional) Allow `TCP:9091` (HTTPS) and `TCP:9090` (HTTP) ingress to GKE/TFE pods subnet from CIDR ranges of your monitoring/observability tool (for scraping TFE metrics endpoints)
- Allow `TCP:8443` (HTTPS) and `TCP:8080` (HTTP) ingress to GKE/TFE pods subnet from TFE load balancer subnet (for TFE application traffic)
- Allow `TCP:5432` ingress to database subnet from GKE/TFE pods subnet (for PostgreSQL traffic)
- Allow `TCP:6379` ingress to Redis subnet from GKE/TFE pods subnet (for Redis TLS traffic)
- Allow `TCP:8201` between nodes on GKE/TFE pods subnet (for TFE embedded Vault internal cluster traffic)
- Allow `TCP:443` egress to the Terraform endpoints listed here from GKE/TFE pods subnet
- If your GKE cluster is private, your client/workstation must be able to access the control plane via `kubectl` and `helm`
- Be familiar with the TFE ingress requirements
- Be familiar with the TFE egress requirements
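As a sketch of what one of these firewall rules can look like with the `gcloud` CLI (the rule name, network, source ranges, and target tags below are placeholders, not values this module expects), the load balancer ingress rule might be created like so:

```shell
# Hypothetical example: allow TCP:443 ingress to the TFE load balancer subnet.
# Substitute your own rule name, network, source ranges, and target tags.
gcloud compute firewall-rules create tfe-lb-allow-https \
  --network=<VPC_NETWORK_NAME> \
  --direction=INGRESS \
  --action=ALLOW \
  --rules=tcp:443 \
  --source-ranges=<CLIENT_CIDR_RANGES> \
  --target-tags=<TFE_LB_NETWORK_TAG>
```

Repeat the same pattern for the other required ports (`8443`/`8080`, `5432`, `6379`, `8201`), adjusting the source and target accordingly.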
- TLS certificate (e.g. `cert.pem`) and private key (e.g. `privkey.pem`) that match your chosen fully qualified domain name (FQDN) for TFE
  - TLS certificate and private key must be in PEM format
  - Private key must not be password protected
- TLS certificate authority (CA) bundle (e.g. `ca_bundle.pem`) corresponding with the CA that issues your TFE TLS certificates
  - CA bundle must be in PEM format
  - You may include additional certificate chains corresponding to external systems that TFE will make outbound connections to (e.g. your self-hosted VCS, if its certificate was issued by a different CA than your TFE certificate)
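The PEM and passphrase requirements above can be checked locally with `openssl` before any Kubernetes secrets are created. The sketch below generates a throwaway self-signed pair purely so the commands are self-contained; in practice, run the same checks against your real `cert.pem`/`privkey.pem`:

```shell
# Generate a throwaway key/cert pair so the checks below are self-contained.
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout privkey.pem -out cert.pem -days 1 \
  -subj "/CN=tfe.gcp.example.com" 2>/dev/null

# 1) Both files must be valid PEM (these commands fail loudly if not).
openssl x509 -in cert.pem -noout
openssl rsa -in privkey.pem -check -noout >/dev/null

# 2) The private key must not be password protected; an encrypted key
#    contains an "ENCRYPTED" header.
grep -q ENCRYPTED privkey.pem && echo "key is encrypted" || echo "key is not encrypted"

# 3) The certificate and key must match (compare public key digests).
cert_hash=$(openssl x509 -in cert.pem -pubkey -noout | openssl sha256)
key_hash=$(openssl pkey -in privkey.pem -pubout | openssl sha256)
[ "$cert_hash" = "$key_hash" ] && echo "cert and key match" || echo "MISMATCH"
```
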
GCP Secret Manager secrets:
- PostgreSQL database password secret
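If you still need to create this secret, one way to do it from the CLI is sketched below. The secret name `tfe-database-password` is a placeholder; use whatever name you will reference via the `tfe_database_password_secret_version` input variable:

```shell
# Hypothetical example: store the PostgreSQL password in GCP Secret Manager.
gcloud secrets create tfe-database-password --replication-policy="automatic"
echo -n "<YOUR_DB_PASSWORD>" | \
  gcloud secrets versions add tfe-database-password --data-file=-
```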
If you plan to create a new GKE cluster using this module, then you may skip this section. Otherwise:
- GKE cluster
1. Create/configure/validate the applicable prerequisites.

2. Nested within the examples directory are subdirectories that contain ready-made Terraform configurations of example scenarios for how to call and deploy this module. To get started, choose an example scenario. If you are starting without an existing GKE cluster, then you should select the new-gke example scenario.

3. Copy all of the Terraform files from your example scenario of choice into a new destination directory to create your Terraform configuration that will manage your TFE deployment. If you are not sure where to create this new directory, it is common for users to create an `environments/` directory at the root of this repo (once you have cloned it down locally), and then a subdirectory for each TFE instance deployment, like so:

    ```
    .
    └── environments
        ├── production
        │   ├── backend.tf
        │   ├── main.tf
        │   ├── outputs.tf
        │   ├── terraform.tfvars
        │   └── variables.tf
        └── sandbox
            ├── backend.tf
            ├── main.tf
            ├── outputs.tf
            ├── terraform.tfvars
            └── variables.tf
    ```

    📝 Note: In this example, the user will have two separate TFE deployments; one for their `sandbox` environment, and one for their `production` environment. This is recommended, but not required.

4. (Optional) Uncomment and update the GCS remote state backend configuration provided in the `backend.tf` file with your own custom values. While this step is highly recommended, it is technically not required to use a remote backend config for your TFE deployment (if you are in a sandbox environment, for example).

5. Populate your own custom values into the `terraform.tfvars.example` file that was provided (in particular, values enclosed in the `<>` characters). Then, remove the `.example` file extension such that the file is now named `terraform.tfvars`.

6. Navigate to the directory of your newly created Terraform configuration for your TFE deployment, and run `terraform init`, `terraform plan`, and `terraform apply`.
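For reference, an uncommented `backend.tf` typically ends up looking something like the following (the bucket name and prefix are hypothetical placeholders, not values from this module):

```hcl
terraform {
  backend "gcs" {
    bucket = "my-tfe-terraform-state" # hypothetical bucket name
    prefix = "tfe/sandbox"            # hypothetical state prefix
  }
}
```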
The TFE infrastructure resources have now been created. Next comes the application layer portion of the deployment (which we refer to as the Post Steps), which will involve interacting with your GKE cluster via `kubectl` and installing the TFE application via `helm`.
7. Authenticate to your GKE cluster:

    ```shell
    gcloud auth login
    gcloud config set project <PROJECT_ID>
    gcloud container clusters get-credentials <GKE_CLUSTER_NAME> --region <REGION>
    ```
8. Create the Kubernetes namespace for TFE:

    ```shell
    kubectl create namespace tfe
    ```

    📝 Note: You can name it something different than `tfe` if you prefer. If you do name it differently, be sure to update your value of the `tfe_kube_namespace` and `tfe_kube_svc_account` input variables accordingly (the Helm chart will automatically create a Kubernetes service account for TFE based on the name of the namespace).
Create the required secrets for your TFE deployment within your new Kubernetes namespace for TFE. There are several ways to do this, whether it be from the CLI via
kubectl, or another method involving a third-party secrets helper/tool. See the Kubernetes-Secrets doc for details on the required secrets and how to create them. -
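As one illustration of the `kubectl` route, a generic secret can be created from files and literals as sketched below. The secret and key names here are hypothetical; the exact names your deployment requires are defined in the Kubernetes-Secrets doc and referenced from your Helm overrides:

```shell
# Hypothetical example only -- consult the Kubernetes-Secrets doc for the
# exact secret/key names your Helm overrides expect.
kubectl create secret generic tfe-secrets \
  --namespace tfe \
  --from-file=TFE_LICENSE=./terraform.hclic \
  --from-literal=TFE_ENCRYPTION_PASSWORD=<ENCRYPTION_PASSWORD>
```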
This Terraform module will automatically generate a Helm overrides file within your Terraform working directory named
./helm/module_generated_helm_overrides.yaml. This Helm overrides file contains values interpolated from some of the infrastructure resources that were created by Terraform in step 6. Within the Helm overrides file, update or validate the values for the remaining settings that are enclosed in the<>characters. You may also add any additional configuration settings into your Helm overrides file at this time (see the Helm-Overrides doc for more details). -
Now that you have customized your
module_generated_helm_overrides.yamlfile, rename it to something more applicable to your deployment, such asprod_tfe_overrides.yaml(or whatever you prefer). Then, within yourterraform.tfvarsfile, set the value ofcreate_helm_overrides_filetofalse, as we no longer want the Terraform module to manage this file or generate a new one on a subsequent Terraform run. -
Add the HashiCorp Helm registry:
helm repo add hashicorp https://helm.releases.hashicorp.com
📝 Note: If you have already added the
hashicorpHelm repository, you should runhelm repo update hashicorpto ensure that you have the latest version. -
Install the TFE application via
helm:helm install terraform-enterprise hashicorp/terraform-enterprise --namespace <TFE_NAMESPACE> --values <TFE_OVERRIDES_FILE>
14. Verify the TFE pod(s) are starting successfully:

    View the events within the namespace:

    ```shell
    kubectl get events --namespace <TFE_NAMESPACE>
    ```

    View the pod(s) within the namespace:

    ```shell
    kubectl get pods --namespace <TFE_NAMESPACE>
    ```

    View the logs from the pod:

    ```shell
    kubectl logs <TFE_POD_NAME> --namespace <TFE_NAMESPACE> -f
    ```
15. If you did not create a DNS record during your Terraform deployment in the previous section (via the boolean input `create_tfe_cloud_dns_record`), then create a DNS record for your TFE FQDN that resolves to your TFE load balancer, depending on how the load balancer was configured during your TFE deployment:

    - If you are using a Kubernetes service of type `LoadBalancer` (what the module-generated Helm overrides defaults to), the DNS record should resolve to the static IP address of your TFE load balancer:

      ```shell
      kubectl get services --namespace <TFE_NAMESPACE>
      ```

    - If you are using a custom Kubernetes ingress (meaning you customized your Helm overrides in step 10), the DNS record should resolve to the IP address of your ingress controller load balancer:

      ```shell
      kubectl get ingress <INGRESS_NAME> --namespace <INGRESS_NAMESPACE>
      ```
Verify the TFE application is ready:
curl https://<TFE_FQDN>/_health_check
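TFE can take a few minutes to become healthy after the pods start, so a small polling loop (a convenience sketch, with `<TFE_FQDN>` as a placeholder) saves re-running the command by hand:

```shell
# Poll the TFE health check endpoint until it responds successfully.
# -k is only needed if your CA is not trusted by your workstation.
until curl -ksfS "https://<TFE_FQDN>/_health_check"; do
  echo "TFE not ready yet; retrying in 10s..."
  sleep 10
done
```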
17. Follow the remaining steps here to finish the installation setup, which involves creating the initial admin user.
Below are links to various docs related to the customization and management of your TFE deployment:
- Deployment Customizations
- Helm Overrides
- TFE Version Upgrades
- TFE TLS Certificate Rotation
- TFE Configuration Settings
- TFE Kubernetes Secrets
| Name | Version |
|---|---|
| terraform | >= 1.9 |
| google | ~> 5.42 |
| google-beta | ~> 5.42 |
| local | >= 2.5.1 |
| random | >= 3.6.2 |
| Name | Version |
|---|---|
| google | ~> 5.42 |
| google-beta | ~> 5.42 |
| local | >= 2.5.1 |
| random | >= 3.6.2 |
| Name | Description | Type | Default | Required |
|---|---|---|---|---|
| friendly_name_prefix | Prefix used to name all GCP resources uniquely. It is most common to use either an environment (e.g. 'sandbox', 'prod'), a team name, or a project name here. | string | n/a | yes |
| project_id | ID of GCP project to deploy TFE in. | string | n/a | yes |
| tfe_database_password_secret_version | Name of PostgreSQL database password secret to retrieve from GCP Secret Manager. | string | n/a | yes |
| tfe_fqdn | Fully qualified domain name of TFE instance. This name should eventually resolve to the TFE load balancer DNS name or IP address and will be what clients use to access TFE. | string | n/a | yes |
| vpc_name | Name of existing VPC network to create resources in. | string | n/a | yes |
| cloud_dns_zone_name | Name of Google Cloud DNS managed zone to create TFE DNS record in. Only valid when create_tfe_cloud_dns_record is true. | string | null | no |
| common_labels | Common labels to apply to all GCP resources. | map(string) | {} | no |
| create_gke_cluster | Boolean to create a GKE cluster. | bool | false | no |
| create_helm_overrides_file | Boolean to generate a YAML file from template with Helm overrides values for your TFE deployment. Set this to false after your initial TFE deployment is complete, as we no longer want the Terraform module to manage it (since you will be customizing it further). | bool | true | no |
| create_tfe_cloud_dns_record | Boolean to create Google Cloud DNS record for TFE using the value of tfe_fqdn for the record name. | bool | false | no |
| create_tfe_lb_ip | Boolean to create a static IP address for TFE load balancer (load balancer is created/managed by Helm/Kubernetes). | bool | true | no |
| enable_gke_workload_identity | Boolean to enable GCP workload identity with GKE cluster. | bool | true | no |
| gcs_force_destroy | Boolean indicating whether to allow force destroying the TFE GCS bucket. GCS bucket can be destroyed if it is not empty when true. | bool | false | no |
| gcs_kms_cmek_name | Name of Cloud KMS customer managed encryption key (CMEK) to use for TFE GCS bucket encryption. | string | null | no |
| gcs_kms_keyring_name | Name of Cloud KMS key ring that contains KMS customer managed encryption key (CMEK) to use for TFE GCS bucket encryption. Geographic location (region) of the key ring must match the location of the TFE GCS bucket. | string | null | no |
| gcs_location | Location of TFE GCS bucket to create. | string | "US" | no |
| gcs_storage_class | Storage class of TFE GCS bucket. | string | "MULTI_REGIONAL" | no |
| gcs_uniform_bucket_level_access | Boolean to enable uniform bucket level access on TFE GCS bucket. | bool | true | no |
| gcs_versioning_enabled | Boolean to enable versioning on TFE GCS bucket. | bool | true | no |
| gke_cluster_is_private | Boolean indicating whether GKE cluster network access is private. | bool | true | no |
| gke_cluster_name | Name of GKE cluster to create. | string | "tfe-gke-cluster" | no |
| gke_control_plane_authorized_cidr | CIDR block allowed to access GKE control plane. | string | null | no |
| gke_control_plane_cidr | Control plane IP range of private GKE cluster. Must not overlap with any subnet in GKE cluster's VPC. | string | "10.0.10.0/28" | no |
| gke_deletion_protection | Boolean to enable deletion protection on GKE cluster. | bool | false | no |
| gke_enable_private_endpoint | Boolean to enable private endpoint on GKE cluster. | bool | true | no |
| gke_http_load_balancing_disabled | Boolean to disable HTTP load balancing on GKE cluster. | bool | false | no |
| gke_l4_ilb_subsetting_enabled | Boolean to enable layer 4 ILB subsetting on GKE cluster. | bool | true | no |
| gke_node_count | Number of GKE nodes per zone. | number | 1 | no |
| gke_node_pool_name | Name of node pool to create in GKE cluster. | string | "tfe-gke-node-pool" | no |
| gke_node_type | Size/machine type of GKE nodes. | string | "e2-standard-4" | no |
| gke_release_channel | Release channel that determines how frequently Kubernetes updates and features are received. | string | "REGULAR" | no |
| gke_remove_default_node_pool | Boolean to remove the default node pool in GKE cluster. | bool | true | no |
| gke_subnet_name | Name or self_link of existing VPC subnetwork to create GKE cluster in. | string | null | no |
| postgres_availability_type | Availability type of Cloud SQL for PostgreSQL instance. | string | "REGIONAL" | no |
| postgres_backup_start_time | HH:MM time format indicating when daily automatic backups of Cloud SQL for PostgreSQL should run. Defaults to 12 AM (midnight) UTC. | string | "00:00" | no |
| postgres_disk_size | Size in GB of PostgreSQL disk. | number | 50 | no |
| postgres_insights_config | Configuration settings for Cloud SQL for PostgreSQL insights. | object({…}) | {…} | no |
| postgres_kms_cmek_name | Name of Cloud KMS customer managed encryption key (CMEK) to use for Cloud SQL for PostgreSQL database instance. | string | null | no |
| postgres_kms_keyring_name | Name of Cloud KMS Key Ring that contains KMS key to use for Cloud SQL for PostgreSQL. Geographic location (region) of key ring must match the location of the TFE Cloud SQL for PostgreSQL database instance. | string | null | no |
| postgres_machine_type | Machine size of Cloud SQL for PostgreSQL instance. | string | "db-custom-4-16384" | no |
| postgres_maintenance_window | Optional maintenance window settings for the Cloud SQL for PostgreSQL instance. | object({…}) | {…} | no |
| postgres_ssl_mode | Indicates whether to enforce TLS/SSL connections to the Cloud SQL for PostgreSQL instance. | string | "ENCRYPTED_ONLY" | no |
| postgres_version | PostgreSQL version to use. | string | "POSTGRES_16" | no |
| redis_auth_enabled | Boolean to enable authentication on Redis instance. | bool | true | no |
| redis_connect_mode | Network connection mode for Redis instance. | string | "PRIVATE_SERVICE_ACCESS" | no |
| redis_kms_cmek_name | Name of Cloud KMS customer managed encryption key (CMEK) to use for TFE Redis instance. | string | null | no |
| redis_kms_keyring_name | Name of Cloud KMS key ring that contains KMS customer managed encryption key (CMEK) to use for TFE Redis instance. Geographic location (region) of key ring must match the location of the TFE Redis instance. | string | null | no |
| redis_memory_size_gb | The size of the Redis instance in GiB. | number | 6 | no |
| redis_tier | The service tier of the Redis instance. Defaults to STANDARD_HA for high availability. | string | "STANDARD_HA" | no |
| redis_transit_encryption_mode | Determines transit encryption (TLS) mode for Redis instance. | string | "DISABLED" | no |
| redis_version | The version of Redis software. | string | "REDIS_7_2" | no |
| tfe_cloud_dns_record_ip_address | IP address of DNS record for TFE. Only valid when create_tfe_cloud_dns_record is true and create_tfe_lb_ip is false. | string | null | no |
| tfe_database_name | Name of TFE PostgreSQL database to create. | string | "tfe" | no |
| tfe_database_parameters | Additional parameters to pass into the TFE database settings for the PostgreSQL connection URI. | string | "sslmode=require" | no |
| tfe_database_user | Name of TFE PostgreSQL database user to create. | string | "tfe" | no |
| tfe_http_port | HTTP port number that the TFE application will listen on within the TFE pods. It is recommended to leave this as the default value. | number | 8080 | no |
| tfe_https_port | HTTPS port number that the TFE application will listen on within the TFE pods. It is recommended to leave this as the default value. | number | 8443 | no |
| tfe_kube_namespace | Name of Kubernetes namespace for TFE (created by Helm chart). Used to configure GCP workload identity with GKE. | string | "tfe" | no |
| tfe_kube_svc_account | Name of Kubernetes Service Account for TFE (created by Helm chart). Used to configure GCP workload identity with GKE. | string | "tfe" | no |
| tfe_lb_ip_address | IP address to assign to TFE load balancer. Must be a valid IP address from tfe_lb_subnet_name when tfe_lb_ip_address_type is INTERNAL. | string | null | no |
| tfe_lb_ip_address_type | Type of IP address to assign to TFE load balancer. Valid values are 'INTERNAL' or 'EXTERNAL'. | string | "INTERNAL" | no |
| tfe_lb_subnet_name | Name or self_link of existing VPC subnetwork to create TFE internal load balancer IP address in. | string | null | no |
| tfe_metrics_http_port | HTTP port number that the TFE metrics endpoint will listen on within the TFE pods. It is recommended to leave this as the default value. | number | 9090 | no |
| tfe_metrics_https_port | HTTPS port number that the TFE metrics endpoint will listen on within the TFE pods. It is recommended to leave this as the default value. | number | 9091 | no |
| vpc_project_id | ID of GCP Project where the existing VPC resides if it is different than the default project. | string | null | no |
| Name | Description |
|---|---|
| gke_cluster_name | Name of TFE GKE cluster. |
| redis_server_ca_certs | CA certificate of TFE Redis instance. Add this to your TFE CA bundle. |
| tfe_database_host | IP address and port of TFE Cloud SQL for PostgreSQL database instance. |
| tfe_database_instance_id | ID of TFE Cloud SQL for PostgreSQL database instance. |
| tfe_database_password | TFE PostgreSQL database password. |
| tfe_database_password_base64 | Base64-encoded TFE PostgreSQL database password. |
| tfe_lb_ip_address | IP address of TFE load balancer. |
| tfe_lb_ip_address_name | Name of IP address resource of TFE load balancer. |
| tfe_object_storage_google_bucket | Name of TFE GCS bucket. |
| tfe_redis_host | Hostname/IP address (and port if non-default) of TFE Redis instance. |
| tfe_redis_password | Auth string of TFE Redis instance. |
| tfe_redis_password_base64 | Base64-encoded auth string of TFE Redis instance. |
| tfe_service_account_email | TFE GCP service account email address. Only produced when enable_gke_workload_identity is true. |
| tfe_service_account_key | TFE GCP service account key in JSON format, base64-encoded. Only produced when enable_gke_workload_identity is false. |