Migration assistance #3445
Conversation
Force-pushed from 88a6b32 to 75a9477
The postgres-startup container now reports when it finds the installed PostgreSQL binaries do not match the specified PostgreSQL version. Some storage providers do not mount the PostgreSQL data volume with correct ownership or permissions. The postgres-startup container now prints those attributes of parent directories when it cannot create or modify a needed file or directory.

Issue: [sc-11804]
Issue: CrunchyData#2870
Co-authored-by: @cbandy
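The checks described in this commit message can be sketched roughly as follows. This is an illustrative sketch only, not the operator's actual code; the function names and messages are assumptions.

```shell
# Hedged sketch of the startup checks described above; function names and
# messages are illustrative assumptions, not the operator's actual code.

# Report a mismatch between the installed binaries and the specified version.
check_version() {
    installed="$1"; expected="$2"
    if [ "$installed" != "$expected" ]; then
        echo "ERROR: installed PostgreSQL is $installed, expected $expected"
        return 1
    fi
    echo "version OK: $installed"
}

# When a needed file or directory cannot be created or modified, print the
# ownership and permissions of each parent directory so storage-provider
# mount problems show up in the container logs.
print_parent_attributes() {
    dir="$1"
    while [ -n "$dir" ] && [ "$dir" != "/" ] && [ "$dir" != "." ]; do
        ls -ld "$dir" 2>/dev/null || true
        dir=$(dirname "$dir")
    done
}
```

Walking the parent chain with `ls -ld` surfaces exactly which ancestor directory has the wrong owner, group, or mode when a volume is mounted badly.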
Force-pushed from 2f4ad82 to 88e9a62
This KUTTL test only works in GKE, but we could alter it (dropping fsGroups) to get it to work in OpenShift. It would be easy enough to add additional make targets to our CI for GKE/OpenShift, but having separate-but-parallel additional testing opens up some maintainability issues.
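The fsGroup difference behind this looks roughly like the following. This is a hypothetical pod fragment, not one of the actual test files in this PR; the pod name and image are placeholders. GKE lets the pod set `fsGroup` explicitly, while OpenShift's restricted SCC assigns group IDs itself, so the field would be dropped there.

```yaml
# Hypothetical pod fragment, not an actual test file from this PR.
apiVersion: v1
kind: Pod
metadata:
  name: cluster-migrate-check
spec:
  securityContext:
    fsGroup: 26    # works on GKE; drop this line for OpenShift, where
                   # the restricted SCC assigns group IDs itself
  containers:
  - name: check
    image: registry.example.com/postgres   # placeholder image
```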
Force-pushed from 88e9a62 to 0728a52
Resolved review threads (outdated):
- testing/kuttl/e2e-other-gke/cluster-migrate/01-check-pod-readiness.yaml
- testing/kuttl/e2e-other-gke/cluster-migrate/02--create-data.yaml
- testing/kuttl/e2e-other-gke/cluster-migrate/07--set-collation.yaml
- testing/kuttl/e2e-other-gke/cluster-migrate/10--check-data.yaml
```yaml
delete:
- apiVersion: postgres-operator.crunchydata.com/v1beta1
  kind: PostgresCluster
  name: cluster-migrate
```
🤔 We don't usually delete. Is there a benefit to it?
This came up as a possible topic around our CI best practices: KUTTL will delete the namespace (unless set not to), but it doesn't consider namespace deletion part of the test, so it doesn't wait for the namespace to be gone before starting the next test.
That said, this delete was actually doing double duty:
- it's maybe best practice for our CI system
- I was deleting this cluster and making sure it was gone before manually deleting the PV. (I'm not manually deleting any more, since we switch the reclaim policy back to what it was before.)
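The reclaim-policy flip mentioned above can be sketched like this. This is an assumption about the workflow, not code from the PR; it only prints the kubectl commands rather than running them, and "migrate-pv" is a placeholder PV name.

```shell
# Sketch (assumption) of flipping a PV's reclaim policy around a test so the
# volume survives cluster deletion, then restoring the original policy.
# "migrate-pv" is a placeholder name; nothing is applied to a real cluster.
PV_NAME="migrate-pv"
retain_cmd="kubectl patch pv ${PV_NAME} -p '{\"spec\":{\"persistentVolumeReclaimPolicy\":\"Retain\"}}'"
restore_cmd="kubectl patch pv ${PV_NAME} -p '{\"spec\":{\"persistentVolumeReclaimPolicy\":\"Delete\"}}'"
echo "before test: ${retain_cmd}"
echo "after test:  ${restore_cmd}"
```

Restoring the policy afterwards is what removes the need to delete the PV by hand.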
After further discussion, I'm removing the delete for now: this will either need to be a convention we decide on (either always or never deleting), or possibly a change in KUTTL, which could be changed to wait for the ns to delete before marking the test as done.
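For reference, the kuttl behavior described in this thread is controlled at the TestSuite level. A hedged sketch, with the testDirs path taken from this PR:

```yaml
apiVersion: kuttl.dev/v1beta1
kind: TestSuite
testDirs:
- testing/kuttl/e2e-other-gke
# skipDelete: true would keep test namespaces around for debugging; even with
# the default (false), kuttl does not wait for namespace deletion to finish
# before starting the next test.
skipDelete: false
```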
PostgreSQL won't start unless it owns the data directory. Kubernetes sets the group according to fsGroup but not the owner. The postgres-startup container now recreates the data directory to give it a new owner when permissions are sufficient to do so. It now raises an error when the owner is incorrect and cannot be changed.

Issue: [sc-15909]
See: https://docs.k8s.io/tasks/configure-pod-container/security-context/
Co-authored-by: @cbandy
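The owner check and recreate-or-fail behavior described in this commit message can be sketched as follows. A hedged sketch only; the real container logic differs, and the function name is an assumption.

```shell
# Hedged sketch of the data-directory owner check described above; not the
# operator's actual code. Assumes GNU stat, as in Linux containers.
ensure_data_dir_owner() {
    datadir="$1"; want_uid="$2"
    have_uid=$(stat -c %u "$datadir")
    if [ "$have_uid" = "$want_uid" ]; then
        echo "owner OK"
        return 0
    fi
    # Recreating the directory gives it the current process's UID as owner,
    # which succeeds when parent-directory permissions are sufficient.
    if rm -rf "$datadir" 2>/dev/null && mkdir "$datadir" 2>/dev/null; then
        echo "recreated with new owner"
    else
        echo "ERROR: data directory owner is $have_uid, cannot change" >&2
        return 1
    fi
}
```

Recreation works here because fsGroup gives the process group-write access to the parent directory even when it does not own the existing data directory.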
Force-pushed from 1763f17 to cd187b7
Issue: [sc-15909]
Force-pushed from cd187b7 to 5347655
Type of Changes:
What is the current behavior (link to any open issues here)?
Migration (from Bitnami image + Helm chart) ran into problems and needed manual remediation.
What is the new behavior (if this is a feature change)?
This automates the manual remediation for that case and similar ones.
Other Information:
Issue: [sc-11804]
Issue: [sc-15909]