
Add CreateVolumeFromSnapshot Functionality for nfs #226

Merged
12 commits merged into main on Aug 4, 2023

Conversation

@VamsiSiddu-7 (Contributor) commented Aug 2, 2023

Description

This PR adds support for creating an NFS filesystem from a snapshot.
Added a new Helm test for NFS.
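
For reference, below is a minimal sketch of the CSI CreateVolume request shape this change handles: a mount volume with fs_type "nfs" whose VolumeContentSource points at an existing snapshot, mirroring the request captured in the controller log further down. It assumes the standard CSI Go bindings (github.com/container-storage-interface/spec); all names, sizes, and IDs are placeholders, not values from this PR.

package main

import (
	"fmt"

	"github.com/container-storage-interface/spec/lib/go/csi"
)

// buildNFSFromSnapshotRequest assembles a CreateVolume request whose content
// source is an existing snapshot, which is the path this PR adds for NFS
// filesystems. All values below are illustrative placeholders.
func buildNFSFromSnapshotRequest() *csi.CreateVolumeRequest {
	return &csi.CreateVolumeRequest{
		Name:          "k8s-example",
		CapacityRange: &csi.CapacityRange{RequiredBytes: 8 * 1024 * 1024 * 1024},
		VolumeCapabilities: []*csi.VolumeCapability{{
			AccessType: &csi.VolumeCapability_Mount{
				Mount: &csi.VolumeCapability_MountVolume{FsType: "nfs"},
			},
			AccessMode: &csi.VolumeCapability_AccessMode{
				Mode: csi.VolumeCapability_AccessMode_SINGLE_NODE_MULTI_WRITER,
			},
		}},
		Parameters: map[string]string{
			"storagepool": "<storage-pool>",
			"nasName":     "<nas-server>",
			"systemID":    "<system-id>",
		},
		VolumeContentSource: &csi.VolumeContentSource{
			Type: &csi.VolumeContentSource_Snapshot{
				Snapshot: &csi.VolumeContentSource_SnapshotSource{
					SnapshotId: "<system-id>/<snapshot-guid>",
				},
			},
		},
	}
}

func main() {
	req := buildNFSFromSnapshotRequest()
	fmt.Println("volume content source snapshot:", req.GetVolumeContentSource().GetSnapshot().GetSnapshotId())
}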

GitHub Issues

List the GitHub issues impacted by this PR:

GitHub Issue #: dell/csm#763

Checklist:

  • I have performed a self-review of my own code to ensure there are no formatting, vetting, linting, or security issues
  • I have verified that new and existing unit tests pass locally with my changes
  • I have not allowed coverage numbers to degenerate
  • I have maintained at least 90% code coverage
  • I have commented my code, particularly in hard-to-understand areas
  • I have made corresponding changes to the documentation
  • I have added tests that prove my fix is effective or that my feature works
  • Backward compatibility is not broken

How Has This Been Tested?

Please describe the tests that you ran to verify your changes. Please also list any relevant details for your test configuration.

  • Installed the csi-powerflex driver with the changes and tested the CreateVolumeFromSnapshot functionality:
 time="2023-08-01T16:26:08Z" level=info msg="/csi.v1.Controller/CreateVolume: REQ 0084: Name=k8s-3f59afadd9, CapacityRange=required_bytes:8589934592 , VolumeCapabilities=[mount:<fs_type:\"nfs\" > access_mode:<mode:SINGLE_NODE_MULTI_WRITER > ], Parameters=map[allowRoot:true csi.storage.k8s.io/pv/name:k8s-3f59afadd9 csi.storage.k8s.io/pvc/name:restorepvc csi.storage.k8s.io/pvc/namespace:helmtest-vxflexos nasName:env8nasserver storagepool:Env8-SP-SW_SSD-1 systemID:xxxxxxx], VolumeContentSource=snapshot:<snapshot_id:\"xxxxxx/64c931f8-564c-9287-96a1-3a7645b0a943\" > , AccessibilityRequirements=requisite:<segments:<key:\"csi-vxflexos.dellemc.com/xxxxx-nfs\" value:\"true\" > > preferred:<segments:<key:\"csi-vxflexos.dellemc.com/xxxxx-nfs\" value:\"true\" > > , XXX_NoUnkeyedLiteral={}, XXX_sizecache=0"
time="2023-08-01T16:26:08Z" level=info msg="getSystemIDFromParameters system xxxxxx"
time="2023-08-01T16:26:08Z" level=info msg="Use systemID as xxxxxxx"
time="2023-08-01T16:26:08Z" level=info msg="Found topology constraint: VxFlex OS system: xxxxxx-nfs"
time="2023-08-01T16:26:08Z" level=info msg="Added accessible topology segment for volume: k8s-3f59afadd9, segment: csi-vxflexos.dellemc.com/xxxxxxx= true"
time="2023-08-01T16:26:08Z" level=info msg="Accessible topology for volume: k8s-3f59afadd9, segments: map[string]string{\"csi-vxflexos.dellemc.com/xxxxxxxx\":\"true\"}"
time="2023-08-01T16:26:09Z" level=info msg="Protection Domain name not provided; there could be conflicts if two storage pools share a name"
snapshotSource.SnapshotId xxxxxxx/64c931f8-564c-9287-96a1-3a7645b0a943
time="2023-08-01T16:26:09Z" level=info msg="snapshot xxxxxxxx/64c931f8-564c-9287-96a1-3a7645b0a943 specified as volume content source"
time="2023-08-01T16:26:13Z" level=info msg="Volume (from snap) k8s-a511127e0d (xxxxxxxxx/64c931cf-b58a-bdc8-c284-3a7645b0a943) storage pool xxxxxx"
time="2023-08-01T16:26:13Z" level=info msg="/csi.v1.Controller/CreateVolume: REP 0084: Volume=capacity_bytes:8589934592 volume_id:\"xxxxxx/64c931cf-b58a-bdc8-c284-3a7645b0a943\" volume_context:<key:\"CreationTime\" value:\"1970-01-01 00:00:00 +0000 UTC\" > volume_context:<key:\"InstallationID\" value:\"04cfcf6a7ded067f\" > volume_context:<key:\"Name\" value:\"k8s-a511127e0d\" > volume_context:<key:\"NasServerID\" value:\"64132f37-d33e-9d4a-89ba-d625520a4779\" > volume_context:<key:\"StoragePoolID\" value:\"xxxxx\" > volume_context:<key:\"StoragePoolName\" value:\"xxxxx\" > volume_context:<key:\"StorageSystem\" value:\"xxxxxx\" > volume_context:<key:\"fsType\" value:\"nfs\" > content_source:<snapshot:<snapshot_id:\"xxxxxxxx/64c931f8-564c-9287-96a1-3a7645b0a943\" > > , XXX_NoUnkeyedLiteral={}, XXX_sizecache=0"
  • Verified the functionality by running the restoresnapshot Helm test. The functionality is working fine (see the restore-PVC sketch after the log below):
[root@master-1-wkC8qnRhY5M5A helm]# ./snaprestoretest-nfs.sh
WARNING: Kubernetes configuration file is world-readable. This is insecure. Location: /root/.kube/config
NAME: 1vol-nfs
LAST DEPLOYED: Tue Aug  1 12:24:45 2023
NAMESPACE: helmtest-vxflexos
STATUS: deployed
REVISION: 1
TEST SUITE: None
Name:             vxflextest-0
Namespace:        helmtest-vxflexos
Priority:         0
Service Account:  vxflextest
Node:             worker-2-wkc8qnrhy5m5a.domain/10.247.66.179
Start Time:       Tue, 01 Aug 2023 12:25:02 -0400
Labels:           app=vxflextest
                  controller-revision-hash=vxflextest-6567d59b8b
                  statefulset.kubernetes.io/pod-name=vxflextest-0
Annotations:      <none>
Status:           Running
IP:               10.244.2.147
IPs:
  IP:           10.244.2.147
Controlled By:  StatefulSet/vxflextest
Containers:
  test:
    Container ID:  containerd://80bdec52b05b3754f56e848c3c5d61e75803419e893fcfeb16c1040118b96207
    Image:         docker.io/centos:latest
    Image ID:      docker.io/library/centos@sha256:a27fd8080b517143cbbbab9dfb7c8571c40d67d534bbdee55bd6c473f432b177
    Port:          <none>
    Host Port:     <none>
    Command:
      /bin/sleep
      3600
    State:          Running
      Started:      Tue, 01 Aug 2023 12:25:08 -0400
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /data0 from pvol0 (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-gmtmf (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  pvol0:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  pvol0
    ReadOnly:   false
  kube-api-access-gmtmf:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason                  Age   From                     Message
  ----    ------                  ----  ----                     -------
  Normal  Scheduled               14s   default-scheduler        Successfully assigned helmtest-vxflexos/vxflextest-0 to worker-2-wkc8qnrhy5m5a.domain
  Normal  SuccessfulAttachVolume  12s   attachdetach-controller  AttachVolume.Attach succeeded for volume "k8s-a511127e0d"
  Normal  Pulling                 8s    kubelet                  Pulling image "docker.io/centos:latest"
  Normal  Pulled                  8s    kubelet                  Successfully pulled image "docker.io/centos:latest" in 270.85374ms
  Normal  Created                 8s    kubelet                  Created container test
  Normal  Started                 8s    kubelet                  Started container test
Tue Aug  1 12:25:26 EDT 2023
running 1 / 1
NAME           READY   STATUS    RESTARTS   AGE
vxflextest-0   1/1     Running   0          40s
Name:             vxflextest-0
Namespace:        helmtest-vxflexos
Priority:         0
Service Account:  vxflextest
Node:             worker-2-wkc8qnrhy5m5a.domain/10.247.66.179
Start Time:       Tue, 01 Aug 2023 12:25:02 -0400
Labels:           app=vxflextest
                  controller-revision-hash=vxflextest-6567d59b8b
                  statefulset.kubernetes.io/pod-name=vxflextest-0
Annotations:      <none>
Status:           Running
IP:               10.244.2.147
IPs:
  IP:           10.244.2.147
Controlled By:  StatefulSet/vxflextest
Containers:
  test:
    Container ID:  containerd://80bdec52b05b3754f56e848c3c5d61e75803419e893fcfeb16c1040118b96207
    Image:         docker.io/centos:latest
    Image ID:      docker.io/library/centos@sha256:a27fd8080b517143cbbbab9dfb7c8571c40d67d534bbdee55bd6c473f432b177
    Port:          <none>
    Host Port:     <none>
    Command:
      /bin/sleep
      3600
    State:          Running
      Started:      Tue, 01 Aug 2023 12:25:08 -0400
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /data0 from pvol0 (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-gmtmf (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  pvol0:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  pvol0
    ReadOnly:   false
  kube-api-access-gmtmf:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason                  Age   From                     Message
  ----    ------                  ----  ----                     -------
  Normal  Scheduled               24s   default-scheduler        Successfully assigned helmtest-vxflexos/vxflextest-0 to worker-2-wkc8qnrhy5m5a.domain
  Normal  SuccessfulAttachVolume  22s   attachdetach-controller  AttachVolume.Attach succeeded for volume "k8s-a511127e0d"
  Normal  Pulling                 18s   kubelet                  Pulling image "docker.io/centos:latest"
  Normal  Pulled                  18s   kubelet                  Successfully pulled image "docker.io/centos:latest" in 270.85374ms
  Normal  Created                 18s   kubelet                  Created container test
  Normal  Started                 18s   kubelet                  Started container test
10.225.109.43:/k8s-a511127e0d   8388608  1582336   6806272  19% /data0
10.225.109.43:/k8s-a511127e0d on /data0 type nfs4 (rw,relatime,vers=4.2,rsize=262144,wsize=262144,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=10.247.66.179,local_lock=none,addr=10.225.109.43)
done installing a 1 volume container
marking volume
total 8
drwxr-xr-x 2 root root 8192 Aug  1 16:24 lost+found
-rw-r--r-- 1 root root    0 Aug  1 16:25 orig
creating snap1 of pvol0
volumesnapshot.snapshot.storage.k8s.io/pvol0-snap1 created
NAME          READYTOUSE   SOURCEPVC   SOURCESNAPSHOTCONTENT   RESTORESIZE   SNAPSHOTCLASS        SNAPSHOTCONTENT                                    CREATIONTIME   AGE
pvol0-snap1   true         pvol0                               8Gi           vxflexos-snapclass   snapcontent-989d44eb-762e-41de-af34-a5099f53a278   53y            10s
updating container to add a volume sourced from snapshot
WARNING: Kubernetes configuration file is group-readable. This is insecure. Location: /root/.kube/config
WARNING: Kubernetes configuration file is world-readable. This is insecure. Location: /root/.kube/config
Release "1vol-nfs" has been upgraded. Happy Helming!
NAME: 1vol-nfs
LAST DEPLOYED: Tue Aug  1 12:25:37 2023
NAMESPACE: helmtest-vxflexos
STATUS: deployed
REVISION: 2
TEST SUITE: None
waiting for container to upgrade/stabalize
Tue Aug  1 12:26:08 EDT 2023
running 0 / 1
NAME           READY   STATUS        RESTARTS   AGE
vxflextest-0   1/1     Terminating   0          82s
Tue Aug  1 12:26:18 EDT 2023
running 0 / 1
NAME           READY   STATUS              RESTARTS   AGE
vxflextest-0   0/1     ContainerCreating   0          10s
Tue Aug  1 12:26:28 EDT 2023
running 1 / 1
NAME           READY   STATUS    RESTARTS   AGE
vxflextest-0   1/1     Running   0          20s
Name:             vxflextest-0
Namespace:        helmtest-vxflexos
Priority:         0
Service Account:  vxflextest
Node:             worker-2-wkc8qnrhy5m5a.domain/10.247.66.179
Start Time:       Tue, 01 Aug 2023 12:26:13 -0400
Labels:           app=vxflextest
                  controller-revision-hash=vxflextest-745fc4547b
                  statefulset.kubernetes.io/pod-name=vxflextest-0
Annotations:      <none>
Status:           Running
IP:               10.244.2.148
IPs:
  IP:           10.244.2.148
Controlled By:  StatefulSet/vxflextest
Containers:
  test:
    Container ID:  containerd://8ffd9be8cbb76448d3d2cd0706b7860b8831e6fb0fb1d88e77160e23ed5fabc3
    Image:         docker.io/centos:latest
    Image ID:      docker.io/library/centos@sha256:a27fd8080b517143cbbbab9dfb7c8571c40d67d534bbdee55bd6c473f432b177
    Port:          <none>
    Host Port:     <none>
    Command:
      /bin/sleep
      3600
    State:          Running
      Started:      Tue, 01 Aug 2023 12:26:19 -0400
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /data0 from pvol0 (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-q9lp8 (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  pvol0:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  restorepvc
    ReadOnly:   false
  kube-api-access-q9lp8:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  16s   default-scheduler  Successfully assigned helmtest-vxflexos/vxflextest-0 to worker-2-wkc8qnrhy5m5a.domain
  Normal  Pulling    10s   kubelet            Pulling image "docker.io/centos:latest"
  Normal  Pulled     10s   kubelet            Successfully pulled image "docker.io/centos:latest" in 259.618379ms
  Normal  Created    10s   kubelet            Created container test
  Normal  Started    10s   kubelet            Started container test
10.225.109.43:/k8s-a511127e0d   8388608  1582336   6806272  19% /data0
10.225.109.43:/k8s-a511127e0d on /data0 type nfs4 (rw,relatime,vers=4.2,rsize=262144,wsize=262144,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=10.247.66.179,local_lock=none,addr=10.225.109.43)
updating container finished
marking volume
listing /data0
total 8
drwxr-xr-x 2 root root 8192 Aug  1 16:24 lost+found
-rw-r--r-- 1 root root    0 Aug  1 16:26 new
-rw-r--r-- 1 root root    0 Aug  1 16:25 orig
deleting snap
volumesnapshot.snapshot.storage.k8s.io "pvol0-snap1" deleted
deleting container
WARNING: Kubernetes configuration file is group-readable. This is insecure. Location: /root/.kube/config
WARNING: Kubernetes configuration file is world-readable. This is insecure. Location: /root/.kube/config
release "1vol-nfs" uninstalled
NAME           READY   STATUS        RESTARTS   AGE
vxflextest-0   1/1     Terminating   0          64s
deleting...
NAME             CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS     CLAIM                          STORAGECLASS   REASON   AGE     VOLUMEMODE
k8s-1706e1a622   8Gi        RWO            Delete           Released   helmtest-vxflexos/pvol0        vxflexos                176d    Filesystem
k8s-24429c32a2   16Gi       RWO            Delete           Released   helmtest-vxflexos/pvol1        vxflexos-xfs            265d    Filesystem
k8s-3f59afadd9   8Gi        RWO            Delete           Bound      helmtest-vxflexos/restorepvc   vxflexos-nfs            79s     Filesystem
k8s-a511127e0d   8Gi        RWO            Delete           Released   helmtest-vxflexos/pvol0        vxflexos-nfs            2m31s   Filesystem
k8s-b12a05d395   16Gi       RWO            Delete           Released   helmtest-vxflexos/pvol1        vxflexos-xfs            176d    Filesystem
k8s-b52dd704b0   8Gi        RWO            Delete           Released   helmtest-vxflexos/pvol0        vxflexos                265d    Filesystem
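
As referenced above, here is a rough Go equivalent of the restore PVC that the Helm test applies: the claim requests the same 8Gi size, uses the vxflexos-nfs storage class, and sets a DataSource pointing at the pvol0-snap1 VolumeSnapshot. This is an illustration only (the test itself goes through the Helm chart), and it assumes k8s.io/api types from the release current when this PR was opened (v0.27-era ResourceRequirements in the PVC spec).

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// restorePVC builds a claim equivalent to the "restorepvc" used by the Helm
// test: same size as the source volume, NFS storage class, and a DataSource
// pointing at the VolumeSnapshot taken from pvol0.
func restorePVC() *corev1.PersistentVolumeClaim {
	apiGroup := "snapshot.storage.k8s.io"
	storageClass := "vxflexos-nfs"
	return &corev1.PersistentVolumeClaim{
		ObjectMeta: metav1.ObjectMeta{
			Name:      "restorepvc",
			Namespace: "helmtest-vxflexos",
		},
		Spec: corev1.PersistentVolumeClaimSpec{
			AccessModes:      []corev1.PersistentVolumeAccessMode{corev1.ReadWriteOnce},
			StorageClassName: &storageClass,
			Resources: corev1.ResourceRequirements{
				Requests: corev1.ResourceList{
					corev1.ResourceStorage: resource.MustParse("8Gi"),
				},
			},
			DataSource: &corev1.TypedLocalObjectReference{
				APIGroup: &apiGroup,
				Kind:     "VolumeSnapshot",
				Name:     "pvol0-snap1",
			},
		},
	}
}

func main() {
	pvc := restorePVC()
	fmt.Printf("restore PVC %s sourced from VolumeSnapshot %s\n", pvc.Name, pvc.Spec.DataSource.Name)
}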

@khareRajshree (Contributor) left a comment:

Please address the few comments, as well as provide logs for this test as well.
"Verified the functionality by running the restoresnapshot helm test.Functionality is working fine."


if len(listSnaps) > 0 {
    return nil, status.Errorf(codes.FailedPrecondition,
        "unable to delete FS volume -- snapshots based on this volume still exist: %v",
Contributor comment:
FS --> NFS

Author reply:
done
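
For context, here is a minimal sketch of the precondition discussed in this thread: DeleteVolume should refuse to remove an NFS filesystem while snapshots based on it still exist, returning FailedPrecondition as in the snippet above. listFSSnapshots is a hypothetical stand-in for the array client call the driver actually uses; it is not part of this PR.

package main

import (
	"context"
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// listFSSnapshots is a hypothetical placeholder for the storage-array call
// that returns the snapshots created from the given filesystem; the real
// driver uses its own client for this.
func listFSSnapshots(ctx context.Context, fsID string) ([]string, error) {
	return nil, nil
}

// checkNoSnapshotsBeforeDelete mirrors the guard reviewed above: if any
// snapshots still reference the filesystem, return FailedPrecondition
// instead of deleting it.
func checkNoSnapshotsBeforeDelete(ctx context.Context, fsID string) error {
	listSnaps, err := listFSSnapshots(ctx, fsID)
	if err != nil {
		return status.Errorf(codes.Internal, "listing snapshots for %s failed: %v", fsID, err)
	}
	if len(listSnaps) > 0 {
		return status.Errorf(codes.FailedPrecondition,
			"unable to delete NFS volume -- snapshots based on this volume still exist: %v",
			listSnaps)
	}
	// no snapshots left: safe to proceed with the actual filesystem delete
	return nil
}

func main() {
	if err := checkNoSnapshotsBeforeDelete(context.Background(), "<fs-id>"); err != nil {
		fmt.Println(err)
	}
}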

And I call CreateSnapshot NFS "snap1"
And no error was received
And I call DeleteVolume nfs with "single-writer"
Then the error contains "unable to delete FS volume -- snapshots based on this volume still exist"
Contributor comment:
FS --> NFS

Author reply:
done

@VamsiSiddu-7 (Author) commented:

Please address the few comments, as well as provide logs for this test as well. "Verified the functionality by running the restoresnapshot helm test.Functionality is working fine."

@khareRajshree Added the logs and addressed your review comments. Please review it again.

@VamsiSiddu-7 merged commit 8107ef1 into main on Aug 4, 2023
4 checks passed