Closed as not planned
Labels: bug (Something isn't working), chart(cluster) (Related to the cluster chart)
Description
Hi all. I have a problem with the backup TLS certificate. Below are the relevant values to reproduce the problem:
```yaml
type: postgresql

version:
  # -- PostgreSQL major version to use
  postgresql: "17"

###
# -- Cluster mode of operation. Available modes:
# * `standalone` - default mode. Creates new or updates an existing CNPG cluster.
# * `replica` - Creates a replica cluster from an existing CNPG cluster. # TODO
# * `recovery` - Same as standalone but creates a cluster from a backup, object store or via pg_basebackup.
mode: standalone

cluster:
  instances: 3
  imageName: "ghcr.io/cloudnative-pg/postgresql:17.4-13"
  storage:
    size: 8Gi
    storageClass: "local-storage"
  resources:
    limits:
      cpu: 2000m
      memory: 8Gi
    requests:
      cpu: 2000m
      memory: 8Gi
  logLevel: "info"
  certificates:
    serverCASecret: "postgres-ca-tls"
    serverTLSSecret: "postgres-server-tls"
    clientCASecret: "postgres-ca-tls"
    replicationTLSSecret: "postgres-replication-tls"
  enableSuperuserAccess: true
  superuserSecret: "postgres-admin"
  monitoring:
    enabled: true
    podMonitor:
      enabled: true
  postgresql:
    # -- PostgreSQL configuration options (postgresql.conf)
    parameters:
      max_connections: "250"
  # -- BootstrapInitDB is the configuration of the bootstrap process when initdb is used.
  # See: https://cloudnative-pg.io/documentation/current/bootstrap/
  # See: https://cloudnative-pg.io/documentation/current/cloudnative-pg.v1/#postgresql-cnpg-io-v1-bootstrapinitdb
  initdb:
    database: postgres
    secret:
      name: "postgres-user"
    postInitSQL:
      - "CREATE EXTENSION IF NOT EXISTS vector;"

backups:
  enabled: true
  endpointURL: "https://minio.minio.svc.cluster.local:443"
  endpointCA:
    name: "minio-ca"
  # -- One of `s3`, `azure` or `google`
  provider: s3
  s3:
    bucket: "postgres-backups"
    path: "/"
  secret:
    create: false
    name: "minio"
  wal:
    compression: gzip
    encryption: ""
    maxParallel: 1
  data:
    compression: gzip
    encryption: ""
    jobs: 2
  scheduledBackups:
    - name: daily-backup
      schedule: "0 0 0 * * *"
      backupOwnerReference: self
      method: barmanObjectStore
  retentionPolicy: "30d"
```
```shell
helm install postgres-cluster cnpg/cluster \
  --version 0.3.0 \
  --values values.yaml
```
After the first deployment everything works properly, with no communication problems with MinIO.
The problem occurs after applying the following patch to enable TLS for metrics as well:
```shell
kubectl patch cluster postgres-cluster -p '{"spec": {"monitoring": {"tls": {"enabled": true}}}}' --type=merge
```
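For reference, the equivalent declarative change on the Cluster resource would look roughly like this (a sketch: only the field path from the patch above is taken as given, and I have not checked whether the chart exposes this field in its values):

```yaml
# Sketch of the patched Cluster spec; field path taken from the kubectl patch above.
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: postgres-cluster
spec:
  monitoring:
    tls:
      enabled: true  # serve the metrics endpoint over TLS
```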
From the logs:
```json
{"level":"info","ts":"2025-05-20T09:19:07.740249496Z","logger":"barman-cloud-check-wal-archive","msg":"2025-05-20 09:19:07,739 [34] ERROR: Barman cloud WAL archive check exception: SSL validation failed for https://minio.minio.svc.cluster.local:443/postgres-backups [Errno 2] No such file or directory","pipe":"stderr","logging_pod":"postgres-cluster-1"}
```
Of course, this prevents the pods from becoming ready. For now I have found the following workaround:
```shell
kubectl cp minio_ca.crt postgres-cluster-1:/controller/certificates/backup-barman-ca.crt
```
But I wonder whether there is something wrong with the way I apply the patch, and whether there is a more robust approach than this workaround.
Thank you!