
Reclaim policy switching to Retain instead of Delete #126

Closed
tobiasvdp opened this issue Feb 13, 2019 · 6 comments

Comments

@tobiasvdp

Is it safe to recreate the storage class with reclaimPolicy: Retain? For data safety, for example when we need to resize, it's dangerous to delete the volume when the claim gets removed.

What did you do? (required. The issue will be closed when not provided.)

Deleted the claim, resulting in the block storage being deleted.

What did you expect to happen?

The option for a reclaimPolicy of Retain.

Configuration (MUST fill this out):

Bare-bones Kubernetes 1.13 with the CSI driver applied.

@welcome

welcome bot commented Feb 13, 2019

Thank you for creating the issue! One of our team members will get back to you shortly with additional information.

@Berndinox

Berndinox commented Feb 18, 2019

It does work for me...:

Create a Second StorageClass with the reclaimPolicy set to Retain:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  annotations:
{"storageclass.kubernetes.io/is-default-class":"false"},"name":"do-block-storage"},"provisioner":"dobs.csi.digitalocean.com"}
    storageclass.kubernetes.io/is-default-class: "false"
  name: do-block-storage-persist
provisioner: dobs.csi.digitalocean.com
reclaimPolicy: Retain
volumeBindingMode: Immediate
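
To create the class, apply the manifest and check that it shows up; a minimal sketch, assuming the YAML above is saved as storageclass-retain.yaml (the filename is only for illustration):

# Apply the StorageClass manifest shown above (filename is an assumption)
kubectl apply -f storageclass-retain.yaml

# List the classes to confirm do-block-storage-persist now exists
kubectl get storageclass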

Depending on your needs, modify the PVC like:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: sql-pvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: do-block-storage-persist

Notice the storageClassName (last line):

  • do-block-storage-persist

or to use the "Delete" Policy:

  • do-block-storage

Verify: kubectl get pv
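
A hypothetical kubectl get pv listing (the volume name, capacity, and age are illustrative placeholders) would show the reclaim policy of the bound volume:

$ kubectl get pv
NAME        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM             STORAGECLASS               AGE
pvc-<uid>   1Gi        RWO            Retain           Bound    default/sql-pvc   do-block-storage-persist   1m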

@kallisti5

I just ran into this issue. Could this be documented? The normal way involves applying this to the PersistentVolume rather than the PersistentVolumeClaim.
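
For reference, changing the policy on an already-provisioned volume is done by patching the PersistentVolume object itself; a minimal sketch, where <pv-name> is a placeholder for the volume name reported by kubectl get pv:

# Patch the reclaim policy of an existing PersistentVolume (not the claim);
# <pv-name> is a placeholder for the actual PV name from `kubectl get pv`
kubectl patch pv <pv-name> -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'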

@kallisti5

Actually, I changed the storageClassName to do-block-storage-persist. While it applies to my k8s cluster, the volume doesn't actually get created.

---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: gerrit-data-pvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 6Gi
  storageClassName: do-block-storage-persist

@kallisti5

$ kubectl apply -f git.yml
deployment.apps "git-deployment" created
persistentvolumeclaim "gerrit-data-pvc" created
service "cgit" created
service "review" created
ingress.extensions "git" created

$ kubectl get pvc
NAME              STATUS    VOLUME    CAPACITY   ACCESS MODES   STORAGECLASS               AGE
gerrit-data-pvc   Pending                                       do-block-storage-persist   5m


$ kubectl get pv
No resources found.

Actually, it just looks like do-block-storage-persist is broken and never allocates. It doesn't show up in the DO volume dashboard.
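
When a claim stays Pending like this, the events on the claim usually say whether the referenced StorageClass was found; a sketch of the diagnostic step, using the claim name from the manifest above:

# Inspect events on the pending claim; a missing or misnamed StorageClass
# typically shows up as a provisioning failure event here
kubectl describe pvc gerrit-data-pvc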

@Berndinox

Berndinox commented Mar 9, 2019

Hi,

I set the wrong name:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  annotations:
{"storageclass.kubernetes.io/is-default-class":"false"},"name":"do-block-storage-persist"},"provisioner":"dobs.csi.digitalocean.com"}
    storageclass.kubernetes.io/is-default-class: "false"
  name: do-block-storage-persist
provisioner: dobs.csi.digitalocean.com
reclaimPolicy: Retain
volumeBindingMode: Immediate

So you have used a non-existing StorageClass.
Try: kubectl get storageclass
Recreate the StorageClass with proper name.
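
A hypothetical listing on a cluster where the Retain class was never created would only show the default class (output is illustrative):

$ kubectl get storageclass
NAME                         PROVISIONER                 AGE
do-block-storage (default)   dobs.csi.digitalocean.com   42d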

Br
