When you're asked to Terraform Pluto
- GCP: Google Cloud Platform provides high-performance infrastructure for cloud computing, data analytics, and machine learning.
- Terraform: Terraform is an open-source infrastructure-as-code tool that provides a consistent CLI workflow to manage hundreds of cloud services.
- Kubernetes: Kubernetes, also known as K8s, is an open-source system for automating deployment, scaling, and management of containerized applications.
- Helm: Helm helps you manage Kubernetes applications; Helm Charts help you define, install, and upgrade even the most complex Kubernetes application.
- You have `kubectl` installed on your machine.
- You have `helm` installed on your machine.
- You have `terraform` installed on your machine.
- You have `gcloud` installed and configured.
- You have a GCP account with a project.
Before you start using the gcloud CLI and Terraform, you have to install the Google Cloud SDK bundle.
The bundle includes all the tools necessary to authenticate your requests to your account.
After you install the gcloud CLI, link your account to it as follows:
```shell
gcloud --version
gcloud init
```
This will open a login page where you can authenticate with your credentials.
One more authentication step is necessary to complete the setup:
```shell
gcloud auth application-default login
```
Next, you will be prompted to use the default project or create a new one (if you are unsure, create a new project).
The required APIs that need to be enabled are Compute Engine and Kubernetes Engine.
Update your SDK components and enable them with:

```shell
gcloud components update
gcloud services enable compute.googleapis.com
gcloud services enable container.googleapis.com
```
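Behind the scenes, the Terraform code in `gcp/gke` presumably configures the Google provider with this project and region. A minimal sketch, assuming variable names like `project_id` and `region` (illustrative, not necessarily the repo's actual code):

```hcl
# Hypothetical provider configuration for the gcp/gke code.
provider "google" {
  project = var.project_id
  region  = var.region
}
```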
Create dev GKE cluster
- Verify that the Terraform tool has been installed correctly with:

```shell
terraform version
```
- Set the working directory to `gcp/gke`:

```shell
cd gcp/gke
```

- Initialize the Terraform code:

```shell
terraform init
```

- Verify the formatting and validate the code:

```shell
terraform fmt
terraform validate
```

- Plan and apply the Terraform code:

```shell
terraform plan --var-file=dev.tfvars --out=dev_plan_outputs.json
terraform apply "dev_plan_outputs.json"
```
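For context, a GKE cluster with a separately managed node pool can be declared roughly as follows. This is a hedged sketch, not the repo's actual `main.tf`; all resource and variable names are illustrative:

```hcl
resource "google_container_cluster" "this" {
  name     = var.cluster_name
  location = var.region

  # Create the smallest possible default pool, then remove it in
  # favor of the separately managed node pool below.
  remove_default_node_pool = true
  initial_node_count       = 1
}

resource "google_container_node_pool" "this" {
  name       = "${var.cluster_name}-node-pool"
  location   = var.region
  cluster    = google_container_cluster.this.name
  node_count = var.node_count

  node_config {
    machine_type = var.machine_type
  }
}
```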
Repeat the same steps for the production cluster, replacing `dev.tfvars` with `prod.tfvars` and `dev_plan_outputs.json` with `prod_plan_outputs.json`.
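The `.tfvars` files then only need to carry the per-environment values. A hypothetical `dev.tfvars` — the project and region are taken from the gcloud command later in this guide, the remaining names and values are illustrative:

```hcl
project_id   = "bookish-meme"
region       = "europe-west1"
cluster_name = "bookish-meme-dev"
node_count   = 3
machine_type = "e2-medium"
```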
- To get the dev kubeconfig credentials, use the following gcloud command:

```shell
gcloud container clusters get-credentials bookish-meme-dev --region europe-west1 --project bookish-meme
```

- Export the kubeconfig file:

```shell
export KUBECONFIG=~/.kube/kubeconfig-dev       # path to your kubeconfig file, used by kubectl to connect to the cluster API
export KUBE_CONFIG_PATH=~/.kube/kubeconfig-dev # needed by the Terraform Kubernetes/Helm providers
```
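The `KUBE_CONFIG_PATH` variable is read by the Terraform Kubernetes and Helm providers; alternatively, the kubeconfig path can be set explicitly in the provider block. A sketch, assuming the file path used above:

```hcl
provider "helm" {
  kubernetes {
    config_path = "~/.kube/kubeconfig-dev"
  }
}
```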
- Check the cluster nodes:

```shell
➜ kubectl get nodes
NAME                                           STATUS   ROLES    AGE    VERSION
gke-bookish-meme-dev-node-pool-23b666f0-zlc6   Ready    <none>   3d4h   v1.21.4-gke.2300
gke-bookish-meme-dev-node-pool-2c2a0d63-s956   Ready    <none>   3d4h   v1.21.4-gke.2300
gke-bookish-meme-dev-node-pool-65d00558-6xf5   Ready    <none>   3d4h   v1.21.4-gke.2300
```
The same steps are needed for the production cluster!
In this setup we are going to use Terraform, Helm, and the Terraform Helm provider to deploy PostgreSQL on the DEV GKE cluster and a PostgreSQL HA cluster on the PROD GKE cluster.
The code is in the `postgresql/` folder, with the following structure:

```shell
➜ tree postgresql/
postgresql/
├── deployment
│   ├── dev.tfvars
│   ├── main.tf
│   ├── prod.tfvars
│   └── variables.tf
└── modules
    ├── main.tf
    └── variables.tf

2 directories, 6 files
```
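The module presumably wraps a `helm_release` resource pointing at the Bitnami chart. A hedged sketch — the release name `bitnami` is inferred from the resource names shown further down, the variable names are illustrative, and the chart value keys vary between chart versions:

```hcl
resource "helm_release" "postgresql" {
  name       = "bitnami"
  repository = "https://charts.bitnami.com/bitnami"
  chart      = "postgresql"
  namespace  = "default"

  # Value keys depend on the chart version (newer Bitnami charts
  # use auth.username / auth.password).
  set {
    name  = "auth.username"
    value = var.postgresql_username
  }

  set_sensitive {
    name  = "auth.password"
    value = var.postgresql_password
  }
}
```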
- Set the working directory to `postgresql/deployment`:

```shell
cd .. && cd postgresql/deployment
```

- Initialize the Terraform code:

```shell
terraform init
```

- Verify the formatting and validate the code:

```shell
terraform fmt
terraform validate
```

- Plan and apply the Terraform code:

```shell
terraform plan --var-file=dev.tfvars --out=postgresql_dev_plan_outputs.json
```
Terraform will ask you to add the secret variables, such as the PostgreSQL credentials. See the following screenshot and terminal log:

*(screenshot: PostgreSQL Terraform plan terminal output)*

```shell
terraform apply "postgresql_dev_plan_outputs.json"
```

*(screenshot: PostgreSQL Terraform apply terminal log)*
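Terraform prompts for these values because the corresponding variables have no default; marking them `sensitive` additionally keeps them out of the plan output (Terraform ≥ 0.14). A sketch of such a declaration, with an illustrative variable name:

```hcl
variable "postgresql_password" {
  description = "PostgreSQL password, prompted for at plan time"
  type        = string
  sensitive   = true
}
```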
Repeat the same steps for the production cluster, replacing `dev.tfvars` with `prod.tfvars` and `postgresql_dev_plan_outputs.json` with `postgresql_prod_plan_outputs.json`.
```shell
➜ kubectl get all | grep bitnami
pod/bitnami-postgresql-0                  1/1         Running         0        25h
service/bitnami-postgresql            ClusterIP   10.30.145.220   <none>   5432/TCP   25h
service/bitnami-postgresql-headless   ClusterIP   None            <none>   5432/TCP   25h
statefulset.apps/bitnami-postgresql   1/1     25h
```
Our awesome Bookish Meme application code (with the container image) is hosted in this GitHub repository.
In this setup we are going to use Terraform, Kubernetes YAML, Helm and the Terraform Helm Provider to deploy the application on the GKE cluster.
The code is in the `app/` folder, with the following structure:

```shell
➜ tree
.
├── deployment
│   ├── dev.tfvars
│   ├── main.tf
│   ├── prod.tfvars
│   └── variables.tf
└── modules
    ├── app
    │   ├── Chart.yaml
    │   ├── dev-values.yaml
    │   ├── prod-values.yaml
    │   └── templates
    │       ├── deployment.yaml
    │       ├── secret.yaml
    │       └── service.yaml
    ├── main.tf
    └── variables.tf

4 directories, 12 files
```
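Here the module presumably installs the local chart through the Helm provider, selecting the per-environment values file. A hedged sketch with illustrative names:

```hcl
resource "helm_release" "app" {
  # "server" matches the Deployment/Service names shown further down.
  name      = "server"
  chart     = "${path.module}/app"
  namespace = "default"

  # Pick dev-values.yaml or prod-values.yaml per environment.
  values = [
    file("${path.module}/app/${var.environment}-values.yaml")
  ]
}
```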
- Set the working directory to `app/deployment`:

```shell
cd ../.. && cd app/deployment
```
- Initialize the Terraform code:

```shell
terraform init
```

- Verify the formatting and validate the code:

```shell
terraform fmt
terraform validate
```

- Plan and apply the Terraform code:

```shell
terraform plan --var-file=dev.tfvars --out=app_dev_plan_outputs.json
```
Terraform will ask you to add the secret variables, such as the PostgreSQL credentials. See the following screenshot and terminal log:

*(screenshot: app deployment Terraform plan terminal output)*

```shell
terraform apply "app_dev_plan_outputs.json"
```

*(screenshot: app Terraform apply terminal log)*
Repeat the same steps for the production cluster, replacing `dev.tfvars` with `prod.tfvars` and `app_dev_plan_outputs.json` with `app_prod_plan_outputs.json`.
```shell
➜ kubectl get all | grep server
pod/server-f65698478-m4x6n   1/1            Running        0               15h
service/server               LoadBalancer   10.30.238.10   35.240.27.105   80:30000/TCP   15h
deployment.apps/server       1/1     1      1              15h
replicaset.apps/server-f65698478   1        1              1               15h
```
To test the application functionality, we first need to grab the cluster HTTP load balancer IP, as follows:

```shell
➜ kubectl get svc/server
NAME     TYPE           CLUSTER-IP     EXTERNAL-IP     PORT(S)        AGE
server   LoadBalancer   10.30.238.10   35.240.27.105   80:30000/TCP   15h
(gke_bookish-meme_europe-west1_bookish-meme-dev:default)
```

The service IP is: `35.240.27.105`
Now we can run the queries:

- Check the metrics:
- PUT: saves/updates the given user's name and date of birth in the database.

  Request: `PUT /hello/<username> { "dateOfBirth": "YYYY-MM-DD" }`

  Response: `204 No Content`.

  Response example:

- GET: returns a hello birthday message for the given user.

  Request: `GET /hello/<username>`

  Response: `200 OK`

  Response examples:
- CI/CD: well, CI/CD for everything; end-to-end CI/CD pipelines will be needed to deploy the infrastructure and the application
- Use Terraform workspaces (dev and prod)
- Use Terraform remote state to manage the state files
- Use a proper secret management system to manage and inject secrets
- Monitoring and logging
- Service mesh and API management if you add more services ;)
- Admission controller to enforce policies
- Network policies
- RBAC
- Testing and scanning
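For the remote-state point, a GCS backend is the natural fit on GCP. A minimal sketch, assuming a hypothetical state bucket:

```hcl
terraform {
  backend "gcs" {
    bucket = "bookish-meme-terraform-state" # hypothetical bucket name
    prefix = "gke"                          # state path per stack
  }
}
```

With a shared backend in place, `terraform workspace new dev` / `terraform workspace new prod` would cover the workspaces point as well.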