Description
Environment
Operator Version: ghcr.io/zalando/postgres-operator:v1.14.0
Kubernetes Version: v1.31.6
Spilo Image: ghcr.io/zalando/spilo-17:4.0-p2 (default for v1.14.0)
Cluster Setup: 3 control-plane nodes (kbmaster, kbmaster2, kbmaster3), 2 worker nodes (worker1, worker2), Calico CNI (10.244.0.0/16), load balancer VIP at 192.168.1.68:6443.
Storage: Static PVs (pv-worker1, pv-worker2) with storageClass: standard, local type, path /mnt/data.
I’m encountering an issue with the Zalando Postgres Operator where roles defined in spec.users (e.g., myadmin: [login, createdb]) are not being created as expected, and the associated database (mydb) specified in spec.databases is also not created. The cluster pods deploy successfully, but the custom role myadmin doesn’t appear in \du, and mydb isn’t listed in \l. I’m running version v1.14.0 on a Kubernetes cluster (v1.31.6) and have had to resort to manually creating roles and databases as a workaround.
Is there a specific ConfigMap parameter (e.g., protected_role_names, infrastructure_roles_secret_name) or Patroni configuration step missing that’s required for custom roles to be applied?
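For reference, this is how I have been checking the cluster's sync status and the operator logs while debugging (assuming the default deployment name postgres-operator from the manifests below):
# STATUS should read Running; SyncFailed would point at a users/databases sync problem
kubectl get postgresql -n postgres-operator
# Look for errors while the operator syncs users and databases
kubectl logs deployment/postgres-operator -n postgres-operator | grep -iE 'user|database|error'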
Expected Behavior
According to the user documentation (Configure Users and Databases), specifying:
spec:
  users:
    myadmin: [login, createdb]
  databases:
    mydb: myadmin
should:
- Create a PostgreSQL role myadmin with LOGIN and CREATEDB privileges.
- Generate a secret (e.g., myadmin.my-postgres-cluster.credentials.postgresql.acid.zalan.do) with credentials.
- Create a database mydb owned by myadmin.
I expect to see myadmin in \du with the Login and Create DB attributes, and mydb in \l owned by myadmin.
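Once that secret exists, the generated password should be readable with something like:
kubectl get secret myadmin.my-postgres-cluster.credentials.postgresql.acid.zalan.do \
  -n postgres-operator -o 'jsonpath={.data.password}' | base64 -d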
Actual Behavior
- The cluster deploys successfully with two pods (my-postgres-cluster-0 and my-postgres-cluster-1) in the Running state.
- However, \du shows only the default Spilo/Patroni roles (e.g., admin, postgres, standby), with admin having Create DB, Cannot login; myadmin is not present.
- \l lists only the postgres database; mydb is not created.
- No secret for myadmin (e.g., myadmin.my-postgres-cluster.credentials.postgresql.acid.zalan.do) is generated.
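For completeness, this is how I checked that no myadmin credential secret exists:
kubectl get secrets -n postgres-operator | grep credentials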
Steps to Reproduce
- Deploy a clean Kubernetes cluster (v1.31.6 used in my case).
- Set up PersistentVolumes pv-worker1 and pv-worker2:
kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-worker1
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: standard
  local:
    path: /mnt/data
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - worker1
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-worker2
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: standard
  local:
    path: /mnt/data
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - worker2
EOF
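A quick check that both PVs registered (they should show as Available until the cluster's PVCs bind them):
kubectl get pv pv-worker1 pv-worker2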
- Deploy the operator in postgres-operator namespace:
kubectl create namespace postgres-operator
kubectl apply -f https://raw.githubusercontent.com/zalando/postgres-operator/master/manifests/configmap.yaml -n postgres-operator
curl -O https://raw.githubusercontent.com/zalando/postgres-operator/master/manifests/operator-service-account-rbac.yaml
sed -i 's/namespace: default/namespace: postgres-operator/' operator-service-account-rbac.yaml
kubectl apply -f operator-service-account-rbac.yaml -n postgres-operator
rm operator-service-account-rbac.yaml
kubectl apply -f https://raw.githubusercontent.com/zalando/postgres-operator/master/manifests/postgres-operator.yaml -n postgres-operator
kubectl patch deployment postgres-operator -n postgres-operator --type='json' -p='[{"op": "add", "path": "/spec/template/spec/nodeSelector", "value": {"kubernetes.io/hostname": "worker1"}}]'
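Before creating the cluster, it's worth confirming the operator rolled out cleanly:
kubectl rollout status deployment/postgres-operator -n postgres-operator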
- Deploy Cluster with custom user and database:
cat <<EOF | kubectl apply -f -
apiVersion: "acid.zalan.do/v1"
kind: postgresql
metadata:
  name: my-postgres-cluster
  namespace: postgres-operator
spec:
  teamId: "myteam"
  volume:
    size: 10Gi
    storageClass: standard
  numberOfInstances: 2
  users:
    myadmin: [login, createdb]
  databases:
    mydb: myadmin
  postgresql:
    version: "13"
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - worker1
                - worker2
EOF
- Wait for pods to start (e.g., my-postgres-cluster-0 and my-postgres-cluster-1).
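A simple way to watch them come up, assuming the default cluster-name label the operator puts on the pods:
kubectl get pods -n postgres-operator -l cluster-name=my-postgres-cluster -w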
- Check roles and databases:
kubectl exec -it my-postgres-cluster-0 -n postgres-operator -- psql -U postgres -c "\du"
kubectl exec -it my-postgres-cluster-0 -n postgres-operator -- psql -U postgres -c "\l"
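Equivalently, querying the catalogs directly (both return no rows in my case):
kubectl exec -it my-postgres-cluster-0 -n postgres-operator -- psql -U postgres -c "SELECT rolname FROM pg_roles WHERE rolname = 'myadmin';"
kubectl exec -it my-postgres-cluster-0 -n postgres-operator -- psql -U postgres -c "SELECT datname FROM pg_database WHERE datname = 'mydb';"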