When restoring a statefulset backed by managed-nfs-storage, the restic-wait init container mounted the shared volume in only one member (meta is the shared PVC):
```
c-db2wh-test-db2u-0

restic-wait:
    Container ID:  cri-o://57c509cebc5296859d892e3d20f25f79952fe7638305776bfae0c5821b8f8ab4
    Image:         quay.io/konveyor/velero-restic-restore-helper@sha256:c16248d027c09cd5b7360ae38484df88ca1cc01f9e470e3eade95076ee1a6b84
    Image ID:      quay.io/konveyor/velero-restic-restore-helper@sha256:55b0bc00b8603f8ac55fee791451a089dbadc648af498dc76647cf280b711ea2
    ...
    Mounts:
      /restores/data from data (rw)
      /restores/meta from meta (rw)
      /restores/tempts from tempts (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-hmj5t (ro)

c-db2wh-test-db2u-1

    Mounts:
      /restores/data from data (rw)
      /restores/tempts from tempts (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-hmj5t (ro)
```
As a result, c-db2wh-test-db2u-1 (which is missing the /restores/meta mount) became Ready before this shared volume was fully restored, causing problems.
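The mismatch is visible directly in the init-container specs. Here is a quick way to compare the restic-wait mounts across members (a sketch; the namespace db2u is an assumption, substitute your own):

```sh
# Compare the restic-wait init container's volume mounts across members.
# NOTE: the namespace "db2u" is assumed; adjust for your cluster.
for pod in c-db2wh-test-db2u-0 c-db2wh-test-db2u-1; do
  echo "== $pod =="
  kubectl get pod "$pod" -n db2u -o \
    jsonpath='{range .spec.initContainers[?(@.name=="restic-wait")].volumeMounts[*]}{.mountPath}{"\n"}{end}'
done
```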
Here is the volume definition in the statefulset:
```yaml
volumeMounts:
  - mountPath: /mnt/blumeta0
    name: meta
...
volumes:
  - name: meta
    persistentVolumeClaim:
      claimName: c-db2u-test-meta
```
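Note that meta is a plain PVC referenced by claimName, not a volumeClaimTemplates entry, so every member mounts the same claim. One thing worth checking (a guess at the mechanism, not a confirmed diagnosis) is whether the backup-volumes annotation that drives restic pod-volume backup and restore listed meta on both pods at backup time:

```sh
# Velero's opt-in restic backup is driven by this per-pod annotation;
# a pod whose annotation omitted "meta" would get no PodVolumeRestore
# for it, and hence no /restores/meta mount in restic-wait.
kubectl get pod c-db2wh-test-db2u-0 c-db2wh-test-db2u-1 -n db2u -o \
  jsonpath='{range .items[*]}{.metadata.name}{": "}{.metadata.annotations.backup\.velero\.io/backup-volumes}{"\n"}{end}'
```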
What happened?
It is unclear whether this has ALWAYS been the behaviour of the restic-wait init container for statefulsets or whether it is a new issue. For instance, how can a member of a statefulset proceed before a shared volume is restored?
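For context on what restic-wait actually waits for (as I understand the upstream restore helper; treat the exact paths as illustrative): the restic pods restore data into the volume and then write a done-file under a .velero directory inside it, and the init container blocks until such a file exists for every volume mounted under /restores. A member that is missing the mount therefore has nothing to wait on for that volume:

```sh
# After a restore, the done-file should be visible inside the volume
# itself (the filename is the restore UID, which will vary).
kubectl exec c-db2wh-test-db2u-0 -n db2u -- ls -la /mnt/blumeta0/.velero/
```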
OADP Version
0.5.x (Stable)
OpenShift Version
4.10
Velero pod logs
No response
Restic pod logs
No response
Operator pod logs
No response
New issue
This issue is new
Contact Details
david.mitchell@ibm.com