
A Terraform module for provisioning and installing Terraform Enterprise on Google GKE as described in HashiCorp Validated Designs


hashicorp/terraform-google-terraform-enterprise-gke-hvd

Terraform Enterprise HVD on GCP GKE

Terraform module aligned with HashiCorp Validated Designs (HVD) to deploy Terraform Enterprise on Google Kubernetes Engine (GKE). This module supports bringing your own GKE cluster, or optionally creating a new GKE cluster dedicated to running TFE. This module does not use the Kubernetes or Helm Terraform providers; instead, it includes Post Steps for the application-layer portion of the deployment, leveraging the kubectl and helm CLIs.

Prerequisites

General

  • TFE license file (e.g. terraform.hclic)
  • Terraform CLI (version >= 1.9) installed on workstation
  • General understanding of how to use Terraform (Community Edition)
  • General understanding of how to use Google Cloud Platform (GCP)
  • General understanding of how to use Kubernetes and Helm
  • gcloud CLI installed on workstation
  • kubectl CLI and helm CLI installed on workstation
  • git CLI and the Visual Studio Code editor installed on workstation are strongly recommended
  • GCP project that TFE will be deployed in with permissions to provision these resources via Terraform CLI
  • (Optional) GCS bucket for GCS remote state backend that will be used to manage the Terraform state of this TFE deployment (out-of-band from the TFE application) via Terraform CLI (Community Edition)
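If you opt into the GCS remote state backend, the bucket can be created up front with the gcloud CLI. A minimal sketch, wrapped in a function so you can review it before running; the project ID and bucket name arguments are placeholders, not values this module reads:

```shell
# Sketch only: arguments are illustrative placeholders.
create_state_bucket() {
  local project_id="$1"   # e.g. my-gcp-project
  local bucket="$2"       # e.g. gs://my-tfe-terraform-state

  # Uniform bucket-level access and object versioning are both sensible
  # defaults for a bucket holding Terraform state.
  gcloud storage buckets create "$bucket" \
    --project="$project_id" \
    --uniform-bucket-level-access
  gcloud storage buckets update "$bucket" --versioning
}
```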

Networking

  • VPC network that TFE will be deployed in
  • Private Service Access (PSA) configured in VPC to enable private connectivity from GKE cluster/TFE pods to Cloud SQL for PostgreSQL and Memorystore for Redis
  • Subnet for GKE cluster (if create_gke_cluster is true). It is highly recommended that the subnet has Private Google Access enabled for private connectivity from GKE cluster to Google Cloud Storage.
  • Static IP address for TFE load balancer (to be associated with either a Kubernetes service of type LoadBalancer or an ingress controller load balancer)
  • Chosen fully qualified domain name (FQDN) for TFE (e.g. tfe.gcp.example.com)
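This module can reserve the load balancer address for you (see the create_tfe_lb_ip input). If you bring your own instead, a regional internal address can be reserved with gcloud; the name, project, region, and subnet below are all illustrative:

```shell
# Sketch only: names, project, region, and subnet are placeholders.
reserve_tfe_lb_ip() {
  # Reserves a regional internal IP address in the load balancer subnet.
  gcloud compute addresses create tfe-lb-ip \
    --project="my-gcp-project" \
    --region="us-central1" \
    --subnet="my-tfe-lb-subnet"
}
```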

Firewall rules

  • Allow TCP:443 ingress to TFE load balancer subnet from CIDR ranges of TFE users/clients, VCS, and any other systems that need to access TFE
  • Allow TCP:443 ingress to GKE/TFE pods subnet from the GCP load balancer health check probe source IP ranges (listed in the Google Cloud documentation)
  • (Optional) Allow TCP:9091 (HTTPS) and TCP:9090 (HTTP) ingress to GKE/TFE pods subnet from CIDR ranges of your monitoring/observability tool (for scraping TFE metrics endpoints)
  • Allow TCP:8443 (HTTPS) and TCP:8080 (HTTP) ingress to GKE/TFE pods subnet from TFE load balancer subnet (for TFE application traffic)
  • Allow TCP:5432 ingress to database subnet from GKE/TFE pods subnet (for PostgreSQL traffic)
  • Allow TCP:6379 ingress to Redis subnet from GKE/TFE pods subnet (for Redis TLS traffic)
  • Allow TCP:8201 between nodes on GKE/TFE pods subnet (for TFE embedded Vault internal cluster traffic)
  • Allow TCP:443 egress from GKE/TFE pods subnet to the Terraform endpoints listed in the HashiCorp documentation
  • If your GKE cluster is private, your client/workstation must be able to access the control plane via kubectl and helm
  • Be familiar with the TFE ingress requirements
  • Be familiar with the TFE egress requirements
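As a sketch of the first rule above (assuming placeholder project, network, target tag, and source CIDR values), a firewall rule can be created with gcloud like so:

```shell
# Sketch only: names, tag, and CIDR are illustrative placeholders.
allow_tfe_https_ingress() {
  # TCP:443 from TFE users/clients to the load balancer subnet
  # (approximated here with a target tag on the LB-facing resources).
  gcloud compute firewall-rules create allow-tfe-https-ingress \
    --project="my-gcp-project" \
    --network="my-vpc" \
    --direction=INGRESS \
    --allow=tcp:443 \
    --source-ranges="10.0.0.0/8" \
    --target-tags="tfe-lb"
}
```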

TLS certificates

  • TLS certificate (e.g. cert.pem) and private key (e.g. privkey.pem) that matches your chosen fully qualified domain name (FQDN) for TFE
    • TLS certificate and private key must be in PEM format
    • Private key must not be password protected
  • TLS certificate authority (CA) bundle (e.g. ca_bundle.pem) corresponding with the CA that issues your TFE TLS certificates
    • CA bundle must be in PEM format
    • You may include additional certificate chains corresponding to external systems that TFE will make outbound connections to (e.g. your self-hosted VCS, if its certificate was issued by a different CA than your TFE certificate)
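The PEM requirements above can be sanity-checked locally before deployment. These helpers only inspect file contents; the filenames in the usage comment are the examples from this section:

```shell
# Returns 0 if the file contains at least one PEM block.
is_pem() {
  grep -q -e "-----BEGIN" "$1"
}

# Returns 0 if the private key is not password protected. Encrypted PKCS#1
# keys carry a "Proc-Type: 4,ENCRYPTED" header, and encrypted PKCS#8 keys
# use the "ENCRYPTED PRIVATE KEY" block label.
key_is_unencrypted() {
  ! grep -q "ENCRYPTED" "$1"
}

# Usage (filenames from this section):
#   is_pem cert.pem && is_pem ca_bundle.pem && key_is_unencrypted privkey.pem
```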

Secret management

GCP Secret Manager secrets:

  • PostgreSQL database password secret
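One way to create this secret is with the gcloud CLI; the secret name here is illustrative (pass whatever name you choose to the tfe_database_password_secret_version input variable):

```shell
# Sketch only: secret name and project ID are placeholders.
create_db_password_secret() {
  # Reading the password from stdin keeps it out of shell history.
  gcloud secrets create "tfe-database-password" \
    --project="my-gcp-project" \
    --replication-policy="automatic" \
    --data-file=-
}

# Usage: printf '%s' "$TFE_DB_PASSWORD" | create_db_password_secret
```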

Compute (optional)

If you plan to create a new GKE cluster using this module, then you may skip this section. Otherwise:

  • GKE cluster

Usage

  1. Create/configure/validate the applicable prerequisites.

  2. Nested within the examples directory are subdirectories that contain ready-made Terraform configurations of example scenarios for how to call and deploy this module. To get started, choose an example scenario. If you are starting without an existing GKE cluster, then you should select the new-gke example scenario.

  3. Copy all of the Terraform files from your example scenario of choice into a new destination directory to create your Terraform configuration that will manage your TFE deployment. If you are not sure where to create this new directory, it is common for users to create an environments/ directory at the root of this repo (once you have cloned it down locally), and then a subdirectory for each TFE instance deployment, like so:

    .
    └── environments
        ├── production
        │   ├── backend.tf
        │   ├── main.tf
        │   ├── outputs.tf
        │   ├── terraform.tfvars
        │   └── variables.tf
        └── sandbox
            ├── backend.tf
            ├── main.tf
            ├── outputs.tf
            ├── terraform.tfvars
            └── variables.tf
    

    📝 Note: In this example, the user will have two separate TFE deployments: one for their sandbox environment, and one for their production environment. This is recommended, but not required.
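The layout above can be scaffolded from the root of a local clone; the copy commands assume the new-gke example scenario and are skipped silently if that path differs in your checkout:

```shell
# Run from the root of your local clone of this repo.
mkdir -p environments/production environments/sandbox

# Copy the example configuration into each environment directory
# (assumption: you chose the new-gke example scenario).
cp examples/new-gke/* environments/production/ 2>/dev/null || true
cp examples/new-gke/* environments/sandbox/ 2>/dev/null || true
```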

  4. (Optional) Uncomment and update the GCS remote state backend configuration provided in the backend.tf file with your own custom values. While this step is highly recommended, it is technically not required to use a remote backend config for your TFE deployment (if you are in a sandbox environment, for example).

  5. Populate your own custom values into the terraform.tfvars.example file that was provided (in particular, values enclosed in the <> characters). Then, remove the .example file extension such that the file is now named terraform.tfvars.
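A quick grep can confirm that no <...> placeholders were missed before you apply; this helper is a convenience, not part of the module:

```shell
# Prints any remaining <PLACEHOLDER> tokens (with line numbers) and exits
# nonzero if some were found; exits zero when the file is fully populated.
no_placeholders_left() {
  ! grep -En '<[A-Za-z_]+>' "$1"
}

# Usage: no_placeholders_left terraform.tfvars
```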

  6. Navigate to the directory of your newly created Terraform configuration for your TFE deployment, and run terraform init, terraform plan, and terraform apply.

The TFE infrastructure resources have now been created. Next comes the application layer portion of the deployment (which we refer to as the Post Steps), which will involve interacting with your GKE cluster via kubectl and installing the TFE application via helm.

Post Steps

  1. Authenticate to your GKE cluster:

    gcloud auth login
    gcloud config set project <PROJECT_ID>
    gcloud container clusters get-credentials <GKE_CLUSTER_NAME> --region <REGION>
  2. Create the Kubernetes namespace for TFE:

    kubectl create namespace tfe

    📝 Note: You can name it something different than tfe if you prefer. If you do name it differently, be sure to update your value of the tfe_kube_namespace and tfe_kube_svc_account input variables accordingly (the Helm chart will automatically create a Kubernetes service account for TFE based on the name of the namespace).

  3. Create the required secrets for your TFE deployment within your new Kubernetes namespace for TFE. There are several ways to do this, whether it be from the CLI via kubectl, or another method involving a third-party secrets helper/tool. See the Kubernetes-Secrets doc for details on the required secrets and how to create them.
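As a sketch of the plain-kubectl approach (the secret names, keys, and file paths below are illustrative; the Kubernetes-Secrets doc is authoritative for what your deployment actually requires):

```shell
# Illustrative only: consult the Kubernetes-Secrets doc for the exact
# secret names and keys. Shows the general kubectl pattern.
create_tfe_secrets() {
  local ns="$1"   # your TFE namespace, e.g. tfe

  # A generic secret from a local file (e.g. the TFE license):
  kubectl create secret generic tfe-license \
    --namespace "$ns" \
    --from-file=license=./terraform.hclic

  # A generic secret from a literal value:
  kubectl create secret generic tfe-database-password \
    --namespace "$ns" \
    --from-literal=password="$TFE_DB_PASSWORD"
}
```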

  4. This Terraform module will automatically generate a Helm overrides file within your Terraform working directory named ./helm/module_generated_helm_overrides.yaml. This Helm overrides file contains values interpolated from some of the infrastructure resources that were created by Terraform in step 6. Within the Helm overrides file, update or validate the values for the remaining settings that are enclosed in the <> characters. You may also add any additional configuration settings into your Helm overrides file at this time (see the Helm-Overrides doc for more details).

  5. Now that you have customized your module_generated_helm_overrides.yaml file, rename it to something more applicable to your deployment, such as prod_tfe_overrides.yaml (or whatever you prefer). Then, within your terraform.tfvars file, set the value of create_helm_overrides_file to false, as we no longer want the Terraform module to manage this file or generate a new one on a subsequent Terraform run.
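Step 5 can be scripted; this sketch assumes the default file locations shown above, a GNU sed, and that create_helm_overrides_file appears on its own line in terraform.tfvars:

```shell
# Renames the generated overrides file and flips the tfvars boolean so the
# module stops managing it. Filenames match the defaults described above.
finalize_helm_overrides() {
  mv ./helm/module_generated_helm_overrides.yaml ./helm/prod_tfe_overrides.yaml
  # GNU sed in-place edit; on macOS/BSD sed use: sed -i '' ...
  sed -i 's/^\(create_helm_overrides_file[[:space:]]*=[[:space:]]*\)true/\1false/' terraform.tfvars
}
```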

  6. Add the HashiCorp Helm registry:

    helm repo add hashicorp https://helm.releases.hashicorp.com

    📝 Note: If you have already added the hashicorp Helm repository, you should run helm repo update hashicorp to ensure that you have the latest version.

  7. Install the TFE application via helm:

    helm install terraform-enterprise hashicorp/terraform-enterprise --namespace <TFE_NAMESPACE> --values <TFE_OVERRIDES_FILE>
  8. Verify the TFE pod(s) are starting successfully:

    View the events within the namespace:

    kubectl get events --namespace <TFE_NAMESPACE>

    View the pod(s) within the namespace:

    kubectl get pods --namespace <TFE_NAMESPACE>

    View the logs from the pod:

    kubectl logs <TFE_POD_NAME> --namespace <TFE_NAMESPACE> -f
  9. If you did not create a DNS record during your Terraform deployment in the previous section (via the boolean input create_tfe_cloud_dns_record), then create a DNS record for your TFE FQDN that resolves to your TFE load balancer, depending on how the load balancer was configured during your TFE deployment:

    • If you are using a Kubernetes service of type LoadBalancer (the default in the module-generated Helm overrides), the DNS record should resolve to the static IP address of your TFE load balancer:

      kubectl get services --namespace <TFE_NAMESPACE>
    • If you are using a custom Kubernetes ingress (meaning you added ingress configuration to your Helm overrides), the DNS record should resolve to the IP address of your ingress controller load balancer:

      kubectl get ingress <INGRESS_NAME> --namespace <INGRESS_NAMESPACE>
  10. Verify the TFE application is ready:

    curl https://<TFE_FQDN>/_health_check
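If the application is still starting, a small retry loop around the health check above can save manual polling (a convenience helper, not part of the module):

```shell
# Polls the TFE health check endpoint until it responds successfully or
# the attempt budget is exhausted.
wait_for_tfe() {
  local fqdn="$1" tries="${2:-30}"
  local i=0
  while [ "$i" -lt "$tries" ]; do
    if curl -fsS "https://${fqdn}/_health_check" >/dev/null 2>&1; then
      echo "TFE is ready"
      return 0
    fi
    i=$((i + 1))
    sleep 10
  done
  echo "TFE did not become ready in time" >&2
  return 1
}

# Usage: wait_for_tfe tfe.gcp.example.com
```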
  11. Follow the remaining steps in the TFE documentation to finish the installation setup, which involves creating the initial admin user.


Docs

Below are links to various docs related to the customization and management of your TFE deployment:


Requirements

Name Version
terraform >= 1.9
google ~> 5.42
google-beta ~> 5.42
local >= 2.5.1
random >= 3.6.2

Providers

Name Version
google ~> 5.42
google-beta ~> 5.42
local >= 2.5.1
random >= 3.6.2

Resources

Name Type
google-beta_google_project_service_identity.cloud_sql_sa resource
google_compute_address.tfe_lb resource
google_container_cluster.tfe resource
google_container_node_pool.tfe resource
google_dns_record_set.tfe resource
google_kms_crypto_key_iam_binding.cloud_sql_sa_postgres_cmek resource
google_kms_crypto_key_iam_binding.gcp_project_gcs_cmek resource
google_kms_crypto_key_iam_binding.redis_sa_cmek resource
google_project_iam_member.gke_artifact_reader resource
google_project_iam_member.gke_default_node_sa resource
google_project_iam_member.gke_log_writer resource
google_project_iam_member.gke_metric_writer resource
google_project_iam_member.gke_object_viewer resource
google_project_iam_member.gke_stackdriver_writer resource
google_redis_instance.tfe resource
google_service_account.gke resource
google_service_account.tfe resource
google_service_account_iam_binding.tfe_workload_identity resource
google_service_account_key.tfe resource
google_sql_database.tfe resource
google_sql_database_instance.tfe resource
google_sql_user.tfe resource
google_storage_bucket.tfe resource
google_storage_bucket_iam_member.tfe_gcs_object_admin resource
google_storage_bucket_iam_member.tfe_gcs_reader resource
local_file.helm_values_values resource
random_id.gcs_suffix resource
random_id.postgres_suffix resource
google_client_config.current data source
google_compute_network.vpc data source
google_compute_subnetwork.gke data source
google_compute_subnetwork.tfe_lb data source
google_compute_zones.up data source
google_dns_managed_zone.tfe data source
google_kms_crypto_key.postgres data source
google_kms_crypto_key.redis data source
google_kms_crypto_key.tfe_gcs_cmek data source
google_kms_key_ring.postgres data source
google_kms_key_ring.redis data source
google_kms_key_ring.tfe_gcs_cmek data source
google_project.current data source
google_secret_manager_secret_version.tfe_database_password data source

Inputs

Name Description Type Default Required
friendly_name_prefix Prefix used to name all GCP resources uniquely. It is most common to use either an environment (e.g. 'sandbox', 'prod'), a team name, or a project name here. string n/a yes
project_id ID of GCP project to deploy TFE in. string n/a yes
tfe_database_password_secret_version Name of PostgreSQL database password secret to retrieve from GCP Secret Manager. string n/a yes
tfe_fqdn Fully qualified domain name of TFE instance. This name should eventually resolve to the TFE load balancer DNS name or IP address and will be what clients use to access TFE. string n/a yes
vpc_name Name of existing VPC network to create resources in. string n/a yes
cloud_dns_zone_name Name of Google Cloud DNS managed zone to create TFE DNS record in. Only valid when create_tfe_cloud_dns_record is true. string null no
common_labels Common labels to apply to all GCP resources. map(string) {} no
create_gke_cluster Boolean to create a GKE cluster. bool false no
create_helm_overrides_file Boolean to generate a YAML file from template with Helm overrides values for your TFE deployment. Set this to false after your initial TFE deployment is complete, as we no longer want the Terraform module to manage it (since you will be customizing it further). bool true no
create_tfe_cloud_dns_record Boolean to create Google Cloud DNS record for TFE using the value of tfe_fqdn for the record name. bool false no
create_tfe_lb_ip Boolean to create a static IP address for TFE load balancer (load balancer is created/managed by Helm/Kubernetes). bool true no
enable_gke_workload_identity Boolean to enable GCP workload identity with GKE cluster. bool true no
gcs_force_destroy Boolean indicating whether to allow force destroying the TFE GCS bucket. GCS bucket can be destroyed if it is not empty when true. bool false no
gcs_kms_cmek_name Name of Cloud KMS customer managed encryption key (CMEK) to use for TFE GCS bucket encryption. string null no
gcs_kms_keyring_name Name of Cloud KMS key ring that contains KMS customer managed encryption key (CMEK) to use for TFE GCS bucket encryption. Geographic location (region) of the key ring must match the location of the TFE GCS bucket. string null no
gcs_location Location of TFE GCS bucket to create. string "US" no
gcs_storage_class Storage class of TFE GCS bucket. string "MULTI_REGIONAL" no
gcs_uniform_bucket_level_access Boolean to enable uniform bucket level access on TFE GCS bucket. bool true no
gcs_versioning_enabled Boolean to enable versioning on TFE GCS bucket. bool true no
gke_cluster_is_private Boolean indicating if GKE network access is private cluster. bool true no
gke_cluster_name Name of GKE cluster to create. string "tfe-gke-cluster" no
gke_control_plane_authorized_cidr CIDR block allowed to access GKE control plane. string null no
gke_control_plane_cidr Control plane IP range of private GKE cluster. Must not overlap with any subnet in GKE cluster's VPC. string "10.0.10.0/28" no
gke_deletion_protection Boolean to enable deletion protection on GKE cluster. bool false no
gke_enable_private_endpoint Boolean to enable private endpoint on GKE cluster. bool true no
gke_http_load_balancing_disabled Boolean to disable HTTP load balancing on GKE cluster. bool false no
gke_l4_ilb_subsetting_enabled Boolean to enable layer 4 ILB subsetting on GKE cluster. bool true no
gke_node_count Number of GKE nodes per zone. number 1 no
gke_node_pool_name Name of node pool to create in GKE cluster. string "tfe-gke-node-pool" no
gke_node_type Size/machine type of GKE nodes. string "e2-standard-4" no
gke_release_channel The channel to use for how frequent Kubernetes updates and features are received. string "REGULAR" no
gke_remove_default_node_pool Boolean to remove the default node pool in GKE cluster. bool true no
gke_subnet_name Name or self_link to existing VPC subnetwork to create GKE cluster in. string null no
postgres_availability_type Availability type of Cloud SQL for PostgreSQL instance. string "REGIONAL" no
postgres_backup_start_time HH:MM time format indicating when daily automatic backups of Cloud SQL for PostgreSQL should run. Defaults to 12 AM (midnight) UTC. string "00:00" no
postgres_disk_size Size in GB of PostgreSQL disk. number 50 no
postgres_insights_config Configuration settings for Cloud SQL for PostgreSQL insights.
    object({
      query_insights_enabled  = bool
      query_plans_per_minute  = number
      query_string_length     = number
      record_application_tags = bool
      record_client_address   = bool
    })
    {
      "query_insights_enabled": false,
      "query_plans_per_minute": 5,
      "query_string_length": 1024,
      "record_application_tags": false,
      "record_client_address": false
    }
    no
postgres_kms_cmek_name Name of Cloud KMS customer managed encryption key (CMEK) to use for Cloud SQL for PostgreSQL database instance. string null no
postgres_kms_keyring_name Name of Cloud KMS Key Ring that contains KMS key to use for Cloud SQL for PostgreSQL. Geographic location (region) of key ring must match the location of the TFE Cloud SQL for PostgreSQL database instance. string null no
postgres_machine_type Machine size of Cloud SQL for PostgreSQL instance. string "db-custom-4-16384" no
postgres_maintenance_window Optional maintenance window settings for the Cloud SQL for PostgreSQL instance.
    object({
      day          = number
      hour         = number
      update_track = string
    })
    {
      "day": 7,
      "hour": 0,
      "update_track": "stable"
    }
    no
postgres_ssl_mode Indicates whether to enforce TLS/SSL connections to the Cloud SQL for PostgreSQL instance. string "ENCRYPTED_ONLY" no
postgres_version PostgreSQL version to use. string "POSTGRES_16" no
redis_auth_enabled Boolean to enable authentication on Redis instance. bool true no
redis_connect_mode Network connection mode for Redis instance. string "PRIVATE_SERVICE_ACCESS" no
redis_kms_cmek_name Name of Cloud KMS customer managed encryption key (CMEK) to use for TFE Redis instance. string null no
redis_kms_keyring_name Name of Cloud KMS key ring that contains KMS customer managed encryption key (CMEK) to use for TFE Redis instance. Geographic location (region) of key ring must match the location of the TFE Redis instance. string null no
redis_memory_size_gb The size of the Redis instance in GiB. number 6 no
redis_tier The service tier of the Redis instance. Defaults to STANDARD_HA for high availability. string "STANDARD_HA" no
redis_transit_encryption_mode Determines transit encryption (TLS) mode for Redis instance. string "DISABLED" no
redis_version The version of Redis software. string "REDIS_7_2" no
tfe_cloud_dns_record_ip_address IP address of DNS record for TFE. Only valid when create_tfe_cloud_dns_record is true and create_tfe_lb_ip is false. string null no
tfe_database_name Name of TFE PostgreSQL database to create. string "tfe" no
tfe_database_parameters Additional parameters to pass into the TFE database settings for the PostgreSQL connection URI. string "sslmode=require" no
tfe_database_user Name of TFE PostgreSQL database user to create. string "tfe" no
tfe_http_port HTTP port number that the TFE application will listen on within the TFE pods. It is recommended to leave this as the default value. number 8080 no
tfe_https_port HTTPS port number that the TFE application will listen on within the TFE pods. It is recommended to leave this as the default value. number 8443 no
tfe_kube_namespace Name of Kubernetes namespace for TFE (created by Helm chart). Used to configure GCP workload identity with GKE. string "tfe" no
tfe_kube_svc_account Name of Kubernetes Service Account for TFE (created by Helm chart). Used to configure GCP workload identity with GKE. string "tfe" no
tfe_lb_ip_address IP address to assign to TFE load balancer. Must be a valid IP address from tfe_lb_subnet_name when tfe_lb_ip_address_type is INTERNAL. string null no
tfe_lb_ip_address_type Type of IP address to assign to TFE load balancer. Valid values are 'INTERNAL' or 'EXTERNAL'. string "INTERNAL" no
tfe_lb_subnet_name Name or self_link to existing VPC subnetwork to create TFE internal load balancer IP address in. string null no
tfe_metrics_http_port HTTP port number that the TFE metrics endpoint will listen on within the TFE pods. It is recommended to leave this as the default value. number 9090 no
tfe_metrics_https_port HTTPS port number that the TFE metrics endpoint will listen on within the TFE pods. It is recommended to leave this as the default value. number 9091 no
vpc_project_id ID of GCP Project where the existing VPC resides if it is different than the default project. string null no

Outputs

Name Description
gke_cluster_name Name of TFE GKE cluster.
redis_server_ca_certs CA certificate of TFE Redis instance. Add this to your TFE CA bundle.
tfe_database_host IP address and port of TFE Cloud SQL for PostgreSQL database instance.
tfe_database_instance_id ID of TFE Cloud SQL for PostgreSQL database instance.
tfe_database_password TFE PostgreSQL database password.
tfe_database_password_base64 Base64-encoded TFE PostgreSQL database password.
tfe_lb_ip_address IP address of TFE load balancer.
tfe_lb_ip_address_name Name of IP address resource of TFE load balancer.
tfe_object_storage_google_bucket Name of TFE GCS bucket.
tfe_redis_host Hostname/IP address (and port if non-default) of TFE Redis instance.
tfe_redis_password Auth string of TFE Redis instance.
tfe_redis_password_base64 Base64-encoded auth string of TFE Redis instance.
tfe_service_account_email TFE GCP service account email address. Only produced when enable_gke_workload_identity is true.
tfe_service_account_key TFE GCP service account key in JSON format, base64-encoded. Only produced when enable_gke_workload_identity is false.
