We'll be using the k14s tools `ytt` and `kapp` to do some light templating of the kube manifests.
- Install k14s via curl|bash:
curl -s -L https://k14s.io/install.sh | \
K14SIO_INSTALL_BIN_DIR=~/bin bash
- Install the gcloud SDK CLI by following Google's installation instructions.
- Clone this repository:
git clone https://github.com/paulczar/gcc-cloudsql
cd gcc-cloudsql
- Ensure that you have enabled the Cloud IAM Service Account Credentials API, the Service Networking API, and the Service Management API:
gcloud services enable \
servicenetworking.googleapis.com \
servicemanagement.googleapis.com \
iamcredentials.googleapis.com
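If you want to confirm the APIs actually enabled before moving on, a loop like the following (an optional check, not part of the original steps) will flag any that are missing:

```shell
# Optional check: confirm each required API shows up as enabled.
for api in servicenetworking servicemanagement iamcredentials; do
  if gcloud services list --enabled --format="value(config.name)" \
      | grep -q "^${api}.googleapis.com$"; then
    echo "enabled: ${api}.googleapis.com"
  else
    echo "MISSING: ${api}.googleapis.com"
  fi
done
```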
- Export your GCP Project ID to an environment variable:
PROJECT_ID=$(gcloud config get-value project)
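An empty `PROJECT_ID` makes every later command fail in confusing ways, so a quick guard is worth it (a small addition, not in the original walkthrough):

```shell
# Fail early if no default project is configured.
PROJECT_ID=$(gcloud config get-value project 2>/dev/null)
if [ -z "${PROJECT_ID}" ]; then
  echo "No default project set; run: gcloud config set project <your-project-id>" >&2
else
  echo "Using project: ${PROJECT_ID}"
fi
```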
- Create a small GKE cluster using `--workload-pool` to enable workload identity and `--enable-ip-alias` to create a VPC-native cluster:
gcloud container clusters create gcc-cloudsql \
--num-nodes=1 --zone us-central1-c \
--cluster-version 1.16 --machine-type n1-standard-2 \
--workload-pool=${PROJECT_ID}.svc.id.goog \
--enable-ip-alias
If you want to use a different network you can; just make sure you deploy the GKE cluster to the same network as the CloudSQL peering.
- Once the cluster is created check your access:
kubectl cluster-info
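You can also verify that workload identity took effect on the cluster; this optional sanity check should print `${PROJECT_ID}.svc.id.goog`:

```shell
# Optional: confirm the workload identity pool on the new cluster.
gcloud container clusters describe gcc-cloudsql \
  --zone us-central1-c \
  --format="value(workloadIdentityConfig.workloadPool)"
```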
This only needs to be done once per network per project. You could do this with GCC; however, GCC won't wait for the peering to be ready before trying to create the SQL instance, which results in errors. For an example of doing this via GCC, see `./ytt/infrastructure`.
- Create a VPC Peering range:
gcloud compute addresses create cloudsql-peer \
--global \
--purpose=VPC_PEERING \
--prefix-length=16 \
--description="peering range for CloudSQL" \
--network=default \
--project=$PROJECT_ID
- Peer that range with our default network:
gcloud services vpc-peerings connect \
--service=servicenetworking.googleapis.com \
--ranges=cloudsql-peer \
--network=default \
--project=$PROJECT_ID
If this command fails with `Cannot modify allocated ranges in CreateConnection`, rerun the command but replace `connect` with `update --force`.
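To confirm the peering exists (an optional check), you can list the peerings on the network:

```shell
# Optional: the servicenetworking peering should be listed once connected.
gcloud services vpc-peerings list \
  --network=default \
  --project=$PROJECT_ID
```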
I have included the manifests for installing GCC in workload-identity mode in this repository for ease of use.
- Create a service account for GCC:
gcloud iam service-accounts create cnrm-system
- Bind `roles/owner` to the service account:
gcloud projects add-iam-policy-binding ${PROJECT_ID} \
--member="serviceAccount:cnrm-system@${PROJECT_ID}.iam.gserviceaccount.com" \
--role="roles/owner"
- Bind `roles/iam.workloadIdentityUser` to the `cnrm-controller-manager` Kubernetes Service Account in the `cnrm-system` Namespace:
gcloud iam service-accounts add-iam-policy-binding \
--role="roles/iam.workloadIdentityUser" \
--member="serviceAccount:${PROJECT_ID}.svc.id.goog[cnrm-system/cnrm-controller-manager]" \
cnrm-system@${PROJECT_ID}.iam.gserviceaccount.com
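A quick optional check that the binding landed:

```shell
# Optional: show the IAM policy on the GCC service account; the
# workloadIdentityUser binding should appear in the output.
gcloud iam service-accounts get-iam-policy \
  cnrm-system@${PROJECT_ID}.iam.gserviceaccount.com
```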
- Create a Kubernetes manifest for GCC using `ytt`:
ytt -f ./ytt/gcc \
--data-value "gcp.projectID=$PROJECT_ID" \
> ./manifests/gcc.yaml
- Deploy GCC using `kapp`:
kapp deploy -a gcc -y \
-f ./manifests/gcc.yaml
Note: you could use `kubectl apply -f ./manifests/gcc.yaml` for the above, but `kapp` gives you a better view of what is going on.
- Wait until GCC is running:
$ kubectl wait -n cnrm-system --for=condition=Ready pod --all
pod/cnrm-controller-manager-0 condition met
pod/cnrm-deletiondefender-0 condition met
pod/cnrm-resource-stats-recorder-88b54bdd7-6hq9p condition met
pod/cnrm-webhook-manager-7b4db8b7d5-5llfs condition met
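You can also confirm that the GCC CRDs registered; the exact count varies by GCC version, so treat this as a rough optional check:

```shell
# Optional: count the Config Connector CRDs that were installed.
kubectl get crds -o name | grep -c 'cnrm.cloud.google.com'
```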
- Create a Kubernetes manifest for CloudSQL using `ytt`:
ytt -f ./ytt/cloudsql \
--data-value "gcp.projectID=$PROJECT_ID" \
--data-value "db.rootPassword=this-is-a-bad-password" \
--data-value "name=example" \
--data-value "namespace=cloudsql" \
> ./manifests/cloudsql.yaml
- Deploy CloudSQL using `kapp`:
kapp deploy -a cloudsql -y \
-f ./manifests/cloudsql.yaml
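`kapp` returns once the manifests are applied, but the Google-side provisioning continues in the background. Assuming the resource kinds rendered by this repo's ytt template (`SQLInstance`, `SQLDatabase`, and `SQLUser` are my assumption here), you can watch GCC reconcile them:

```shell
# Watch the CloudSQL resources that GCC is reconciling.
kubectl -n cloudsql get sqlinstance,sqldatabase,sqluser
```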
- Wait until the database is ready:
It shouldn't take more than five minutes, but if it does ... just wait longer.
kubectl wait -n cloudsql --for=condition=Ready \
sqlinstance/example-db --timeout=300s
- Get the IP address of the database:
IP=$(gcloud sql instances describe example-db --format json | jq -r '.ipAddresses[0].ipAddress')
echo $IP
- Remind yourself of the password:
kubectl -n cloudsql get secret example-db -o json | jq -r .data.password | base64 --decode
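Rather than copying the password by hand, you can capture it into `PGPASSWORD`, which `psql` reads automatically; in the next step you could then pass it into the pod with an extra `--env PGPASSWORD=$PGPASSWORD` flag (this helper is an addition, not part of the original steps):

```shell
# Capture the generated password; psql picks up PGPASSWORD automatically.
PGPASSWORD=$(kubectl -n cloudsql get secret example-db -o json \
  | jq -r .data.password | base64 --decode)
export PGPASSWORD
```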
- Get a prompt in a Pod with the `psql` client:
kubectl -n cloudsql run --env IP=$IP -ti --restart=Never --image postgres:13-alpine --rm psql -- sh
- Connect to your database from inside that pod:
$ psql -h $IP --username=example -d postgres
Password for user example: *******
psql (13beta1, server 9.6.16)
SSL connection (protocol: TLSv1.3, cipher: TLS_AES_256_GCM_SHA384, bits: 256, compression: off)
Type "help" for help.
postgres=>
Press Ctrl-D to exit psql
- Connect to your database via the sql proxy:
$ psql -h example-db-proxy --username=example -d postgres
Password for user example: *****
psql (13beta1, server 9.6.16)
Type "help" for help.
postgres=>
Once you're finished, you can clean up like so:
- Delete the cloudsql resources:
kapp delete -y -a cloudsql
- Delete the GKE Cluster:
gcloud container clusters delete gcc-cloudsql
- Delete the rest of the resources
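The remaining pieces are the reserved peering range and the GCC service account; something like the following (names match the ones created above) removes them:

```shell
# Delete the reserved peering range and the GCC service account.
gcloud compute addresses delete cloudsql-peer \
  --global --project=$PROJECT_ID --quiet
gcloud iam service-accounts delete \
  cnrm-system@${PROJECT_ID}.iam.gserviceaccount.com --quiet
```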