
Filestore driver sends error while creating snapshot #221

Closed
chaitanya-baraskar opened this issue Feb 21, 2022 · 3 comments

chaitanya-baraskar commented Feb 21, 2022

The Filestore driver returns an error while a snapshot is being created, and I am not sure whether this is expected behaviour or not. While polling the VolumeSnapshot status with the dynamic Kubernetes client, it is confusing whether to treat this as a failure, because the VolumeSnapshot eventually sets its ReadyToUse flag to true. If the VolumeSnapshot is being created successfully, is it necessary to return an error saying the backup is still in the CREATING state? I have shared a sketch of the polling, the YAML files I used, and sample logs below.
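
This is roughly the kind of polling I mean (a minimal sketch using the Go dynamic client; the namespace, snapshot name, kubeconfig path, and poll interval are illustrative, not my exact code):

// poll_snapshot.go: watch a VolumeSnapshot's status.readyToUse via the dynamic client.
package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load kubeconfig from the default location (~/.kube/config).
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := dynamic.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// VolumeSnapshot resource, v1beta1 as in the YAML below.
	gvr := schema.GroupVersionResource{
		Group:    "snapshot.storage.k8s.io",
		Version:  "v1beta1",
		Resource: "volumesnapshots",
	}

	for {
		snap, err := client.Resource(gvr).Namespace("default").Get(context.TODO(), "test-snapshot", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		ready, found, _ := unstructured.NestedBool(snap.Object, "status", "readyToUse")
		fmt.Printf("readyToUse=%v (status present=%v)\n", ready, found)
		if found && ready {
			break
		}
		time.Sleep(10 * time.Second)
	}
}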

Deployment with Filestore CSI PVC.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-server-deployment
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: web-server
          image: nginx
          volumeMounts:
            - mountPath: /usr/share/nginx/html
              name: mypvc
      volumes:
        - name: mypvc
          persistentVolumeClaim:
            claimName: pvc-1
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc-1
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: standard-rwx
  resources:
    requests:
      storage: 1Ti

Sample VolumeSnapshot yaml -

apiVersion: snapshot.storage.k8s.io/v1beta1
kind: VolumeSnapshot
metadata:
  name: test-snapshot
spec:
  volumeSnapshotClassName: csi-gcp-filestore-backup-snap-class
  source:
    persistentVolumeClaimName: pvc-1

VolumeSnapshotClass yaml -

apiVersion: snapshot.storage.k8s.io/v1
deletionPolicy: Delete
driver: filestore.csi.storage.gke.io
kind: VolumeSnapshotClass
metadata:
  annotations:
  creationTimestamp: "2022-02-10T09:57:33Z"
  generation: 1
  name: csi-gcp-filestore-backup-snap-class
  resourceVersion: "104941"
  uid: b50cb0b0-c2ea-4c48-bb50-b8167b4fba4a
parameters:
  type: backup

Errors I found in the controller pod -

E0221 13:46:40.708660       1 snapshot_controller.go:122] checkandUpdateContentStatus [snapcontent-a0e5a9d7-dbaa-48cb-b560-9939e360e6b9]: error occurred failed to take snapshot of the volume modeInstance/us-west1-c/pvc-a99c1776-34dd-4691-a415-18c7c3b1bea4/vol1: "rpc error: code = DeadlineExceeded desc = Backup <redacted> not yet ready, current state CREATING"


E0221 13:46:56.127621       1 snapshot_controller.go:122] checkandUpdateContentStatus [snapcontent-a0e5a9d7-dbaa-48cb-b560-9939e360e6b9]: error occurred failed to take snapshot of the volume modeInstance/us-west1-c/pvc-a99c1776-34dd-4691-a415-18c7c3b1bea4/vol1: "rpc error: code = DeadlineExceeded desc = Backup <redacted> not yet ready, current state FINALIZING"

I haven't shared all of the logs, just a few as a sample.

Please let me know if more information is needed.

@mattcary
Contributor

Sorry to be slow in replying to this, it got buried.

What do you mean by "sends error"? tl;dr I think this is working as intended, as a byproduct of kubernetes reconciliation. I agree it's confusing, but if your snapshots are eventually ready I don't think it's a bug.
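
Roughly, the shape of what is happening is below (a sketch of the general CSI CreateSnapshot pattern, not the actual Filestore driver code; lookupBackup and backupInfo are made-up stand-ins for the driver's Filestore API calls). The external-snapshotter calls CreateSnapshot, the driver fails the call while the backup is still being created, the snapshotter records that error and requeues, and once the backup is ready the call succeeds and readyToUse flips to true.

package driver

import (
	"context"

	"github.com/container-storage-interface/spec/lib/go/csi"
	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
	"google.golang.org/protobuf/types/known/timestamppb"
)

// backupInfo and lookupBackup are hypothetical placeholders for "create the
// Filestore backup if needed, then fetch its current state".
type backupInfo struct {
	Name       string
	State      string // e.g. CREATING, FINALIZING, READY
	SizeBytes  int64
	CreateTime *timestamppb.Timestamp
}

func lookupBackup(ctx context.Context, name, sourceVolumeID string) (*backupInfo, error) {
	panic("placeholder")
}

type controllerServer struct{}

func (s *controllerServer) CreateSnapshot(ctx context.Context, req *csi.CreateSnapshotRequest) (*csi.CreateSnapshotResponse, error) {
	backup, err := lookupBackup(ctx, req.GetName(), req.GetSourceVolumeId())
	if err != nil {
		return nil, status.Errorf(codes.Internal, "failed to look up backup: %v", err)
	}

	// While the backup is still CREATING or FINALIZING, fail the RPC; the
	// external-snapshotter records the error (what you see in the logs) and
	// retries on the next reconcile.
	if backup.State != "READY" {
		return nil, status.Errorf(codes.DeadlineExceeded,
			"Backup %s not yet ready, current state %s", backup.Name, backup.State)
	}

	// Once the backup is READY, report success; the snapshotter then sets
	// readyToUse=true on the VolumeSnapshotContent and VolumeSnapshot.
	return &csi.CreateSnapshotResponse{
		Snapshot: &csi.Snapshot{
			SnapshotId:     backup.Name,
			SourceVolumeId: req.GetSourceVolumeId(),
			SizeBytes:      backup.SizeBytes,
			CreationTime:   backup.CreateTime,
			ReadyToUse:     true,
		},
	}, nil
}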

@mattcary
Contributor

/close

obsolete

@k8s-ci-robot
Contributor

@mattcary: Closing this issue.

In response to this:

/close

obsolete

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
