| title | description | author | tags | date_published |
|---|---|---|---|---|
| Install Anthos Service Mesh with an in-cluster control plane on GKE with Terraform | Use Terraform to deploy a Kubernetes Engine cluster and Anthos Service Mesh with an in-cluster control plane. | ameer00 | Kubernetes Engine, ASM | 2021-07-28 |
Ameer Abbas | Solutions Architect | Google
Contributed by Google employees.
This tutorial shows you how to install Anthos Service Mesh 1.9 with an in-cluster control plane on a Google Kubernetes Engine (GKE) cluster using the GKE Anthos Service Mesh Terraform submodule.
- Use the GKE Anthos Service Mesh Terraform submodule to do the following:
  - Create a Virtual Private Cloud (VPC) network.
  - Create a GKE cluster.
  - Install Anthos Service Mesh 1.9.
- Deploy the Online Boutique sample app on Anthos Service Mesh.
- Clean up or destroy all resources with Terraform.
This tutorial uses the following Google Cloud products:

- Google Kubernetes Engine (GKE)
- Anthos Service Mesh
- Cloud Storage

Use the pricing calculator to generate a cost estimate based on your projected usage.
- Verify that billing is enabled for your project.

- Enable the required APIs:

  ```shell
  gcloud services enable \
      cloudresourcemanager.googleapis.com \
      container.googleapis.com
  ```
- Install Krew, the package manager for kubectl plugins:

  ```shell
  (
    set -x; cd "$(mktemp -d)" &&
    curl -fsSLO "https://github.com/kubernetes-sigs/krew/releases/latest/download/krew.{tar.gz,yaml}" &&
    tar zxvf krew.tar.gz &&
    KREW=./krew-"$(uname | tr '[:upper:]' '[:lower:]')_amd64" &&
    $KREW install --manifest=krew.yaml --archive=krew.tar.gz &&
    $KREW update
  )
  echo 'export PATH="${PATH}:${HOME}/.krew/bin"' >> ~/.bashrc && source ~/.bashrc
  ```
- Install the ctx, ns, and neat plugins:

  ```shell
  kubectl krew install ctx ns neat
  ```
- Install kpt:

  ```shell
  sudo apt-get update && sudo apt-get install -y google-cloud-sdk-kpt netcat
  ```
- Set an environment variable for your project ID, replacing `[YOUR_PROJECT_ID]` with your project ID:

  ```shell
  export PROJECT_ID=[YOUR_PROJECT_ID]
  ```
- Set the working project to your project:

  ```shell
  gcloud config set project ${PROJECT_ID}
  ```
- Set other environment variables:

  ```shell
  export PROJECT_NUM=$(gcloud projects describe ${PROJECT_ID} --format='value(projectNumber)')
  export CLUSTER_1=gke-central
  export CLUSTER_1_ZONE=us-central1-a
  export WORKLOAD_POOL=${PROJECT_ID}.svc.id.goog
  export MESH_ID="proj-${PROJECT_NUM}"
  export TERRAFORM_SA="terraform-sa"
  export ASM_MAJOR_VERSION=1.9
  export ASM_VERSION=1.9.5-asm.2
  export ASM_REV=asm-195-2
  export ASM_MCP_REV=asm-managed
  ```
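Before continuing, you can optionally verify that the variables you rely on later are set. This sanity check is not part of the original tutorial; it is a small sketch, and the `check_vars` helper is a name introduced here for illustration:

```shell
# Hypothetical helper: echoes the names of any listed variables
# that are unset or empty in the current shell.
check_vars() {
  local missing=""
  for v in "$@"; do
    if [ -z "${!v}" ]; then
      missing="$missing $v"
    fi
  done
  echo "$missing"
}

# Check the variables exported in the previous step.
missing=$(check_vars PROJECT_ID PROJECT_NUM CLUSTER_1 CLUSTER_1_ZONE WORKLOAD_POOL MESH_ID)
if [ -n "$missing" ]; then
  echo "ERROR: unset variables:$missing" >&2
fi
```

Running this before the Terraform steps catches a missed export early instead of failing partway through a deployment.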
- Create a `WORKDIR` folder:

  ```shell
  mkdir -p asm-tf && cd asm-tf && export WORKDIR=`pwd`
  ```
- Create a `KUBECONFIG` file for this tutorial:

  ```shell
  touch asm-kubeconfig && export KUBECONFIG=`pwd`/asm-kubeconfig
  ```
- Verify that your Terraform version is 0.13. If you don't have Terraform version 0.13, then download and install it:

  ```shell
  wget https://releases.hashicorp.com/terraform/0.13.7/terraform_0.13.7_linux_amd64.zip
  unzip terraform_0.13.7_linux_amd64.zip
  export TERRAFORM_CMD=`pwd`/terraform  # Path of your Terraform binary
  ```
You can use Terraform to customize your in-cluster Anthos Service Mesh control plane using `IstioOperator` resource files. In this tutorial, you use a simple `IstioOperator` resource file to customize your `istio-ingressgateways`.
- Create a Google Cloud service account and give it the following roles:

  ```shell
  gcloud --project=${PROJECT_ID} iam service-accounts create ${TERRAFORM_SA} \
      --description="terraform-sa" \
      --display-name=${TERRAFORM_SA}

  ROLES=(
    'roles/servicemanagement.admin'
    'roles/storage.admin'
    'roles/serviceusage.serviceUsageAdmin'
    'roles/meshconfig.admin'
    'roles/compute.admin'
    'roles/container.admin'
    'roles/resourcemanager.projectIamAdmin'
    'roles/iam.serviceAccountAdmin'
    'roles/iam.serviceAccountUser'
    'roles/iam.serviceAccountKeyAdmin'
    'roles/gkehub.admin'
  )
  for role in "${ROLES[@]}"; do
    gcloud projects add-iam-policy-binding ${PROJECT_ID} \
        --member "serviceAccount:${TERRAFORM_SA}@${PROJECT_ID}.iam.gserviceaccount.com" \
        --role="$role"
  done
  ```
- Create the service account credential JSON key for Terraform:

  ```shell
  gcloud iam service-accounts keys create ${TERRAFORM_SA}.json \
      --iam-account=${TERRAFORM_SA}@${PROJECT_ID}.iam.gserviceaccount.com
  ```
- Set the Terraform credentials and project ID:

  ```shell
  export GOOGLE_APPLICATION_CREDENTIALS=`pwd`/${TERRAFORM_SA}.json
  export TF_VAR_project_id=${PROJECT_ID}
  ```
- Create a Cloud Storage bucket and the backend resource for the Terraform state file:

  ```shell
  gsutil mb -p ${PROJECT_ID} gs://${PROJECT_ID}
  gsutil versioning set on gs://${PROJECT_ID}

  cat <<'EOF' > backend.tf_tmpl
  terraform {
    backend "gcs" {
      bucket = "${PROJECT_ID}"
      prefix = "tfstate"
    }
  }
  EOF
  envsubst < backend.tf_tmpl > backend.tf
  ```
- Create a custom overlay file:

  ```shell
  cat <<EOF > ${WORKDIR}/custom_ingress_gateway.yaml
  apiVersion: install.istio.io/v1alpha1
  kind: IstioOperator
  spec:
    components:
      ingressGateways:
        - name: istio-ingressgateway
          enabled: true
          k8s:
            hpaSpec:
              maxReplicas: 10
              minReplicas: 2
  EOF
  ```
In this section, you create and apply Terraform files that define the deployment of a VPC network, GKE cluster, and Anthos Service Mesh.
- Create the `main.tf`, `variables.tf`, and `output.tf` files:

  ```shell
  cat <<'EOF' > main.tf_tmpl
  data "google_client_config" "default" {}

  provider "kubernetes" {
    host                   = "https://${module.gke.endpoint}"
    token                  = data.google_client_config.default.access_token
    cluster_ca_certificate = base64decode(module.gke.ca_certificate)
  }

  data "google_project" "project" {
    project_id = var.project_id
  }

  module "vpc" {
    source       = "terraform-google-modules/network/google"
    version      = "~> 3.0"
    project_id   = var.project_id
    network_name = var.network
    routing_mode = "GLOBAL"

    subnets = [
      {
        subnet_name   = var.subnetwork
        subnet_ip     = var.subnetwork_ip_range
        subnet_region = var.region
      }
    ]

    secondary_ranges = {
      (var.subnetwork) = [
        {
          range_name    = var.ip_range_pods
          ip_cidr_range = var.ip_range_pods_cidr
        },
        {
          range_name    = var.ip_range_services
          ip_cidr_range = var.ip_range_services_cidr
        }
      ]
    }
  }

  module "gke" {
    source             = "terraform-google-modules/kubernetes-engine/google"
    project_id         = var.project_id
    name               = var.cluster_name
    regional           = false
    region             = var.region
    zones              = var.zones
    release_channel    = "REGULAR"
    network            = module.vpc.network_name
    subnetwork         = module.vpc.subnets_names[0]
    ip_range_pods      = var.ip_range_pods
    ip_range_services  = var.ip_range_services
    network_policy     = false
    identity_namespace = "enabled"

    cluster_resource_labels = { "mesh_id" : "proj-${data.google_project.project.number}" }

    node_pools = [
      {
        name         = "asm-node-pool"
        autoscaling  = false
        auto_upgrade = true
        # ASM requires minimum 4 nodes and e2-standard-4
        node_count   = 4
        machine_type = "e2-standard-4"
      },
    ]
  }

  module "asm" {
    source                = "github.com/terraform-google-modules/terraform-google-kubernetes-engine//modules/asm"
    cluster_name          = module.gke.name
    cluster_endpoint      = module.gke.endpoint
    project_id            = var.project_id
    location              = module.gke.location
    enable_all            = false
    enable_cluster_roles  = true
    enable_cluster_labels = false
    enable_gcp_apis       = true
    enable_gcp_iam_roles  = true
    enable_gcp_components = true
    enable_registration   = false
    asm_version           = "1.9"
    managed_control_plane = false
    service_account       = "${TERRAFORM_SA}@${PROJECT_ID}.iam.gserviceaccount.com"
    key_file              = "./${TERRAFORM_SA}.json"
    options               = ["envoy-access-log,egressgateways"]
    custom_overlays       = ["./custom_ingress_gateway.yaml"]
    skip_validation       = false
    outdir                = "./${module.gke.name}-outdir-${var.asm_version}"
    # ca                  = "citadel"
    # ca_certs = {
    #   "ca_cert"    = "./ca-cert.pem"
    #   "ca_key"     = "./ca-key.pem"
    #   "root_cert"  = "./root-cert.pem"
    #   "cert_chain" = "./cert-chain.pem"
    # }
  }
  EOF

  cat <<'EOF' > variables.tf
  variable "project_id" {}
  variable "cluster_name" { default = "gke-central" }
  variable "region" { default = "us-central1" }
  variable "zones" { default = ["us-central1-a"] }
  variable "network" { default = "asm-vpc" }
  variable "subnetwork" { default = "subnet-01" }
  variable "subnetwork_ip_range" { default = "10.10.10.0/24" }
  variable "ip_range_pods" { default = "subnet-01-pods" }
  variable "ip_range_pods_cidr" { default = "10.100.0.0/16" }
  variable "ip_range_services" { default = "subnet-01-services" }
  variable "ip_range_services_cidr" { default = "10.101.0.0/16" }
  variable "asm_version" { default = "1.9" }
  EOF

  cat <<'EOF' > output.tf
  output "kubernetes_endpoint" {
    sensitive = true
    value     = module.gke.endpoint
  }

  output "client_token" {
    sensitive = true
    value     = base64encode(data.google_client_config.default.access_token)
  }

  output "ca_certificate" {
    value = module.gke.ca_certificate
  }

  output "service_account" {
    description = "The default service account used for running nodes."
    value       = module.gke.service_account
  }
  EOF

  envsubst < main.tf_tmpl > main.tf
  ```
- Initialize Terraform and apply the configurations:

  ```shell
  ${TERRAFORM_CMD} init
  ${TERRAFORM_CMD} plan
  ${TERRAFORM_CMD} apply -auto-approve
  ```
- Connect to the GKE cluster:

  ```shell
  gcloud container clusters get-credentials ${CLUSTER_1} --zone ${CLUSTER_1_ZONE}
  ```

  Remember to unset your `KUBECONFIG` variable when you're finished.

- Rename the cluster context for easy switching:

  ```shell
  kubectl ctx ${CLUSTER_1}=gke_${PROJECT_ID}_${CLUSTER_1_ZONE}_${CLUSTER_1}
  ```
- Confirm the cluster context:

  ```shell
  kubectl ctx
  ```

  The output is similar to the following:

  ```
  gke-central
  ```
- Set the Anthos Service Mesh revision variable:

  ```shell
  export ASM_REVISION=${ASM_REV}
  ```
- Deploy the Online Boutique app to the GKE cluster:

  ```shell
  kpt pkg get \
      https://github.com/GoogleCloudPlatform/microservices-demo.git/release \
      online-boutique

  kubectl --context=${CLUSTER_1} create namespace online-boutique
  kubectl --context=${CLUSTER_1} label namespace online-boutique istio.io/rev=${ASM_REVISION}
  kubectl --context=${CLUSTER_1} -n online-boutique apply -f online-boutique
  ```
- Wait until all Deployments are ready:

  ```shell
  kubectl --context=${CLUSTER_1} -n online-boutique wait --for=condition=available --timeout=5m deployment adservice
  kubectl --context=${CLUSTER_1} -n online-boutique wait --for=condition=available --timeout=5m deployment checkoutservice
  kubectl --context=${CLUSTER_1} -n online-boutique wait --for=condition=available --timeout=5m deployment currencyservice
  kubectl --context=${CLUSTER_1} -n online-boutique wait --for=condition=available --timeout=5m deployment emailservice
  kubectl --context=${CLUSTER_1} -n online-boutique wait --for=condition=available --timeout=5m deployment frontend
  kubectl --context=${CLUSTER_1} -n online-boutique wait --for=condition=available --timeout=5m deployment paymentservice
  kubectl --context=${CLUSTER_1} -n online-boutique wait --for=condition=available --timeout=5m deployment productcatalogservice
  kubectl --context=${CLUSTER_1} -n online-boutique wait --for=condition=available --timeout=5m deployment shippingservice
  kubectl --context=${CLUSTER_1} -n online-boutique wait --for=condition=available --timeout=5m deployment cartservice
  kubectl --context=${CLUSTER_1} -n online-boutique wait --for=condition=available --timeout=5m deployment loadgenerator
  kubectl --context=${CLUSTER_1} -n online-boutique wait --for=condition=available --timeout=5m deployment recommendationservice
  ```

  The output is similar to the following:

  ```
  deployment.apps/adservice condition met
  deployment.apps/checkoutservice condition met
  deployment.apps/currencyservice condition met
  deployment.apps/emailservice condition met
  deployment.apps/frontend condition met
  deployment.apps/paymentservice condition met
  deployment.apps/productcatalogservice condition met
  deployment.apps/shippingservice condition met
  deployment.apps/cartservice condition met
  deployment.apps/loadgenerator condition met
  deployment.apps/recommendationservice condition met
  ```
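The repeated wait commands above can also be written as a single loop over the Deployment names. This is an equivalent sketch, not the tutorial's original form:

```shell
# Wait for each Online Boutique Deployment in turn; the list matches
# the individual kubectl wait commands above.
for deploy in adservice checkoutservice currencyservice emailservice frontend \
    paymentservice productcatalogservice shippingservice cartservice \
    loadgenerator recommendationservice; do
  kubectl --context=${CLUSTER_1} -n online-boutique wait \
      --for=condition=available --timeout=5m deployment "$deploy"
done
```

Because `kubectl wait` returns a non-zero exit code on timeout, you can append `|| exit 1` inside the loop to stop at the first Deployment that fails to become available.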
Run the following command to get the IP address of the external load balancer:

```shell
kubectl --context=${CLUSTER_1} -n istio-system get service istio-ingressgateway \
    -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
```
Now that you've deployed Online Boutique, you can view the Anthos Service Mesh telemetry dashboards to view the metrics for the application.
For more information about metrics, logs, and tracing with Anthos Service Mesh, see Exploring Anthos Service Mesh in the Cloud Console.
Use the `terraform destroy` command to destroy all Terraform resources:

```shell
${TERRAFORM_CMD} destroy -auto-approve
```
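After destroying the cloud resources, you can also clean up the tutorial-specific kubeconfig created earlier. This is a suggested extra step (not in the original tutorial), assuming the `WORKDIR` layout from the setup section:

```shell
# Stop pointing kubectl at the tutorial-specific kubeconfig,
# then remove the file itself from the working directory.
unset KUBECONFIG
rm -f "${WORKDIR}/asm-kubeconfig"
```

This restores kubectl to your default `~/.kube/config` for other clusters you manage.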
Alternatively, you can delete the project.
Deleting a project has the following consequences:
- If you used an existing project, you'll also delete any other work that you've done in the project.
- You can't reuse the project ID of a deleted project. If you created a custom project ID that you plan to use in the future, delete the resources inside the project instead. This ensures that URLs that use the project ID, such as an `appspot.com` URL, remain available.
To delete a project, do the following:
- In the Cloud Console, go to the Projects page.
- In the project list, select the project you want to delete and click Delete.
- In the dialog, type the project ID, and then click Shut down to delete the project.
- Learn more about community support for Terraform.
- Learn more about Anthos Service Mesh.