Prerequisites:

- kind - 0.11.0
- docker - 20.10.7
- psql

Init the zalando-operator submodule from the official repository:

```sh
git submodule update --init
```

During this tutorial we will use kind to provision a k8s cluster on Docker, with the following configuration:
- 1 master node
- 3 worker nodes
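The kind config referenced below (`manifests/kind/cluster.yml`) presumably looks roughly like this sketch — the node roles are the part that matters, everything else is assumed:

```yaml
# manifests/kind/cluster.yml (sketch): 1 control-plane node + 3 workers
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker
- role: worker
- role: worker
```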
Verify whether any cluster already exists:

```sh
kind get clusters
```

Create the k8s cluster:

```sh
kind create cluster --config manifests/kind/cluster.yml
```

Verify that the kubectl context is switched to the proper cluster:
```sh
$ kubectl get nodes
NAME                 STATUS   ROLES                  AGE   VERSION
kind-control-plane   Ready    control-plane,master   54s   v1.21.1
kind-worker          Ready    <none>                 25s   v1.21.1
kind-worker2         Ready    <none>                 25s   v1.21.1
kind-worker3         Ready    <none>                 25s   v1.21.1
```

Deploy the operator from its manifests:

```sh
cd postgres-operator
kubectl create -f manifests/configmap.yaml
kubectl create -f manifests/operator-service-account-rbac.yaml
kubectl create -f manifests/postgres-operator.yaml
kubectl create -f manifests/api-service.yaml
```

Check if the operator is running:
```sh
$ kubectl get pod -l name=postgres-operator
NAME                                 READY   STATUS    RESTARTS   AGE
postgres-operator-55b8549cff-84gp7   1/1     Running   0          118s
```

Deploy the operator UI:

```sh
kubectl apply -f postgres-operator/ui/manifests/
```

Verify that it is running:

```sh
kubectl get pod -l name=postgres-operator-ui
```

Port-forward the operator UI service to localhost:

```sh
kubectl port-forward svc/postgres-operator-ui 8081:80
```

Then go to http://localhost:8081 and explore.
Create a Postgres cluster:

```sh
kubectl apply -f manifests/postgres-12-cluster.yml
```
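For reference, `manifests/postgres-12-cluster.yml` presumably contains a Zalando `postgresql` custom resource along these lines — a minimal sketch, with field values inferred from the cluster name (`acid-first`) and the three pods seen later, not copied from the actual file:

```yaml
# sketch of a minimal Zalando postgresql resource (values are assumptions)
apiVersion: "acid.zalan.do/v1"
kind: postgresql
metadata:
  name: acid-first
spec:
  teamId: "acid"
  numberOfInstances: 3
  volume:
    size: 1Gi
  postgresql:
    version: "12"
```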
Port-forward the master node to localhost:

```sh
export PGPASSWORD=$(kubectl get secret postgres.acid-first.credentials -o 'jsonpath={.data.password}' | base64 -d)
export PGMASTER=$(kubectl get pods -o jsonpath={.items..metadata.name} -l application=spilo,cluster-name=acid-first,spilo-role=master -n default)
kubectl port-forward $PGMASTER 6432:5432 -n default
```

Connect to the master instance:

```sh
$ psql -U postgres -h localhost -p 6432
psql (12.7 (Ubuntu 12.7-0ubuntu0.20.04.1))
SSL connection (protocol: TLSv1.3, cipher: TLS_AES_256_GCM_SHA384, bits: 256, compression: off)
Type "help" for help.
```

Verify the instance version:
```sql
SELECT version();
```

Create a sample table within a new database:

```sql
CREATE DATABASE ssu_operator_labs;
-- connect to the created database
\c ssu_operator_labs
CREATE TABLE ssu_tab (random_str text);
-- list all tables matching the following pattern
\dt ssu*
```

Seed the sample database with data:
```sql
-- seed the values
INSERT INTO ssu_tab (random_str) VALUES ('Magiera'), ('Dygas'), ('Bodera');
SELECT * FROM ssu_tab;
 random_str
------------
 Magiera
 Dygas
 Bodera
(3 rows)
```

Verify that the data was replicated to a replica instance:

```sh
kubectl port-forward acid-first-1 6433:5432 -n default
```

Then connect to the replica (`psql -U postgres -h localhost -p 6433`) and inspect the values:
```sql
\c ssu_operator_labs
SELECT * FROM ssu_tab;
```

Open another terminal session and watch the pods' status:

```sh
kubectl get pods -w
```
Change `spec.postgresql.version` from 12 to 13 in `manifests/postgres-12-cluster.yml` (an example is provided in `manifests/postgres-13-cluster.yml`) and apply the edited config:

```sh
kubectl apply -f manifests/postgres-13-cluster.yml
```

Wait until all pods are in Running state again.
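The only difference between the two manifests should be the version field — a sketch of the relevant fragment, assuming the standard Zalando `postgresql` resource layout:

```yaml
# fragment of the edited cluster manifest (sketch)
spec:
  postgresql:
    version: "13"   # was "12"
```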
Then verify the version on the master node:

```sql
SELECT version();
```

In this example we will check whether our setup is capable of handling failover.
Get the current master name:

```sh
$ kubectl get pods -o jsonpath={.items..metadata.name} -l application=spilo,cluster-name=acid-first,spilo-role=master -n default
acid-first-2
```

Delete the master instance:

```sh
$ kubectl delete pod acid-first-2
pod "acid-first-2" deleted
```

Check whether a new master gets elected:

```sh
$ kubectl get pods -o jsonpath={.items..metadata.name} -l application=spilo,cluster-name=acid-first,spilo-role=master -n default
acid-first-0
```

Then check which instance was switched to read-only mode and whether the current master is writeable.
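One way to check this — a sketch, run via psql against each port-forwarded instance — is to ask Postgres whether it is in recovery (replicas return true, the writeable master returns false) and to attempt a write:

```sql
-- returns 't' on a read-only replica, 'f' on the master
SELECT pg_is_in_recovery();

-- succeeds on the master; on a replica it fails with
-- "cannot execute INSERT in a read-only transaction"
INSERT INTO ssu_tab (random_str) VALUES ('failover-check');
```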