
cephfs: fix omap deletion in DeleteSnapshot #2844

Merged: 1 commit, Feb 8, 2022
Conversation

Madhu-1 (Collaborator) commented Feb 1, 2022

The omap entry is stored with the requested snapshot name, not with the subvolume snapshot name. This fix uses the correct snapshot request name to clean up the omap entry once the subvolume snapshot is deleted.

fixes: #2832

Signed-off-by: Madhu Rajanna madhupr007@gmail.com
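
For context, here is a minimal sketch of the cleanup DeleteSnapshot has to perform, written against go-ceph's rados bindings and using the pool, namespace, and object names visible in the logs below. The helper cleanupSnapshotOmap is illustrative only, not ceph-csi's actual journal code; the point is that the directory entry in csi.snaps.default is keyed by the CSI request name (snapshot-<uid>), not by the subvolume snapshot name (csi-snap-<uuid>).

// Illustrative sketch only (not ceph-csi's journal implementation): removes the
// omap artifacts a snapshot reservation leaves behind, using go-ceph.
package main

import (
	"fmt"

	"github.com/ceph/go-ceph/rados"
)

// cleanupSnapshotOmap deletes the per-snapshot metadata object
// (csi.snap.<uuid>) and the directory entry inside csi.snaps.default.
// The directory key is built from the CSI request name ("snapshot-<uid>"),
// not from the subvolume snapshot name ("csi-snap-<uuid>"); keying the
// removal on the latter is what leaked entries before this fix.
func cleanupSnapshotOmap(ioctx *rados.IOContext, snapUUID, requestName string) error {
	if err := ioctx.Delete("csi.snap." + snapUUID); err != nil {
		return fmt.Errorf("failed to delete snapshot object: %w", err)
	}
	return ioctx.RmOmapKeys("csi.snaps.default", []string{"csi.snap." + requestName})
}

func main() {
	conn, err := rados.NewConn()
	if err != nil {
		panic(err)
	}
	if err := conn.ReadDefaultConfigFile(); err != nil {
		panic(err)
	}
	if err := conn.Connect(); err != nil {
		panic(err)
	}
	defer conn.Shutdown()

	// pool and namespace taken from the logs below
	ioctx, err := conn.OpenIOContext("myfs-metadata")
	if err != nil {
		panic(err)
	}
	defer ioctx.Destroy()
	ioctx.SetNamespace("csi")

	// UUID and request name of the example snapshot in the logs below
	err = cleanupSnapshotOmap(ioctx,
		"b0d01e34-8321-11ec-8d81-fa47ecc4076d",
		"snapshot-19d00681-5e17-4616-8360-73422ff5427e")
	fmt.Println("cleanup:", err)
}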

After Fix

[🎩︎]mrajanna@fedora cephfs $]kubectl create -f snapshot.yaml 
volumesnapshot.snapshot.storage.k8s.io/cephfs-pvc-snapshot created
[🎩︎]mrajanna@fedora cephfs $]kubectl get volumesnapshot
NAME                  READYTOUSE   SOURCEPVC    SOURCESNAPSHOTCONTENT   RESTORESIZE   SNAPSHOTCLASS                SNAPSHOTCONTENT                                    CREATIONTIME   AGE
cephfs-pvc-snapshot   true         cephfs-pvc                           100Gi         csi-cephfsplugin-snapclass   snapcontent-19d00681-5e17-4616-8360-73422ff5427e   4s             4s
[🎩︎]mrajanna@fedora cephfs $]
[🎩︎]mrajanna@fedora cephfs $]kubectl delete -f snapshot.yaml 
volumesnapshot.snapshot.storage.k8s.io "cephfs-pvc-snapshot" deleted
I0201 05:42:06.484623       1 utils.go:191] ID: 18 Req-ID: snapshot-19d00681-5e17-4616-8360-73422ff5427e GRPC call: /csi.v1.Controller/CreateSnapshot
I0201 05:42:06.484768       1 utils.go:195] ID: 18 Req-ID: snapshot-19d00681-5e17-4616-8360-73422ff5427e GRPC request: {"name":"snapshot-19d00681-5e17-4616-8360-73422ff5427e","parameters":{"clusterID":"rook-ceph"},"secrets":"***stripped***","source_volume_id":"0001-0009-rook-ceph-0000000000000001-2538aeb8-8312-11ec-921d-021bb3f04b38"}
I0201 05:42:06.497135       1 omap.go:87] ID: 18 Req-ID: snapshot-19d00681-5e17-4616-8360-73422ff5427e got omap values: (pool="myfs-metadata", namespace="csi", name="csi.volume.2538aeb8-8312-11ec-921d-021bb3f04b38"): map[csi.imagename:csi-vol-2538aeb8-8312-11ec-921d-021bb3f04b38 csi.volname:pvc-0e32af29-a511-4161-8519-f39dcb3c4ed8]
E0201 05:42:06.518693       1 omap.go:78] ID: 18 Req-ID: snapshot-19d00681-5e17-4616-8360-73422ff5427e omap not found (pool="myfs-metadata", namespace="csi", name="csi.snaps.default"): rados: ret=-2, No such file or directory
I0201 05:42:06.532016       1 omap.go:155] ID: 18 Req-ID: snapshot-19d00681-5e17-4616-8360-73422ff5427e set omap keys (pool="myfs-metadata", namespace="csi", name="csi.snaps.default"): map[csi.snap.snapshot-19d00681-5e17-4616-8360-73422ff5427e:b0d01e34-8321-11ec-8d81-fa47ecc4076d])
I0201 05:42:06.537911       1 omap.go:155] ID: 18 Req-ID: snapshot-19d00681-5e17-4616-8360-73422ff5427e set omap keys (pool="myfs-metadata", namespace="csi", name="csi.snap.b0d01e34-8321-11ec-8d81-fa47ecc4076d"): map[csi.imagename:csi-snap-b0d01e34-8321-11ec-8d81-fa47ecc4076d csi.snapname:snapshot-19d00681-5e17-4616-8360-73422ff5427e csi.source:csi-vol-2538aeb8-8312-11ec-921d-021bb3f04b38])
I0201 05:42:06.537944       1 fsjournal.go:333] ID: 18 Req-ID: snapshot-19d00681-5e17-4616-8360-73422ff5427e Generated Snapshot ID (0001-0009-rook-ceph-0000000000000001-b0d01e34-8321-11ec-8d81-fa47ecc4076d) for request name (snapshot-19d00681-5e17-4616-8360-73422ff5427e)
I0201 05:42:06.566217       1 utils.go:202] ID: 18 Req-ID: snapshot-19d00681-5e17-4616-8360-73422ff5427e GRPC response: {"snapshot":{"creation_time":{"nanos":542355000,"seconds":1643694126},"ready_to_use":true,"size_bytes":107374182400,"snapshot_id":"0001-0009-rook-ceph-0000000000000001-b0d01e34-8321-11ec-8d81-fa47ecc4076d","source_volume_id":"0001-0009-rook-ceph-0000000000000001-2538aeb8-8312-11ec-921d-021bb3f04b38"}}
I0201 05:42:21.114564       1 utils.go:191] ID: 19 Req-ID: 0001-0009-rook-ceph-0000000000000001-b0d01e34-8321-11ec-8d81-fa47ecc4076d GRPC call: /csi.v1.Controller/DeleteSnapshot
I0201 05:42:21.114688       1 utils.go:195] ID: 19 Req-ID: 0001-0009-rook-ceph-0000000000000001-b0d01e34-8321-11ec-8d81-fa47ecc4076d GRPC request: {"secrets":"***stripped***","snapshot_id":"0001-0009-rook-ceph-0000000000000001-b0d01e34-8321-11ec-8d81-fa47ecc4076d"}
I0201 05:42:21.117790       1 omap.go:87] ID: 19 Req-ID: 0001-0009-rook-ceph-0000000000000001-b0d01e34-8321-11ec-8d81-fa47ecc4076d got omap values: (pool="myfs-metadata", namespace="csi", name="csi.snap.b0d01e34-8321-11ec-8d81-fa47ecc4076d"): map[csi.imagename:csi-snap-b0d01e34-8321-11ec-8d81-fa47ecc4076d csi.snapname:snapshot-19d00681-5e17-4616-8360-73422ff5427e csi.source:csi-vol-2538aeb8-8312-11ec-921d-021bb3f04b38]
I0201 05:42:21.163948       1 omap.go:123] ID: 19 Req-ID: 0001-0009-rook-ceph-0000000000000001-b0d01e34-8321-11ec-8d81-fa47ecc4076d removed omap keys (pool="myfs-metadata", namespace="csi", name="csi.snaps.default"): [csi.snap.snapshot-19d00681-5e17-4616-8360-73422ff5427e]
I0201 05:42:21.164023       1 utils.go:202] ID: 19 Req-ID: 0001-0009-rook-ceph-0000000000000001-b0d01e34-8321-11ec-8d81-fa47ecc4076d GRPC response: {}
sh-4.4$ rados listomapkeys csi.snaps.default --pool=myfs-metadata --namespace=csi
csi.snap.snapshot-67172ca0-7e32-43a0-b2dd-7384ea865e10
sh-4.4$ 
sh-4.4$ rados listomapvals csi.snaps.default --pool=myfs-metadata --namespace=csi
csi.snap.snapshot-67172ca0-7e32-43a0-b2dd-7384ea865e10
value (36 bytes) :
00000000  35 62 66 66 34 64 35 33  2d 38 33 32 32 2d 31 31  |5bff4d53-8322-11|
00000010  65 63 2d 38 64 38 31 2d  66 61 34 37 65 63 63 34  |ec-8d81-fa47ecc4|
00000020  30 37 36 64                                       |076d|
00000024

sh-4.4$ rados listomapvals csi.snap.5bff4d53-8322-11ec-8d81-fa47ecc4076d --pool=myfs-metadata --namespace=csi
csi.imagename
value (45 bytes) :
00000000  63 73 69 2d 73 6e 61 70  2d 35 62 66 66 34 64 35  |csi-snap-5bff4d5|
00000010  33 2d 38 33 32 32 2d 31  31 65 63 2d 38 64 38 31  |3-8322-11ec-8d81|
00000020  2d 66 61 34 37 65 63 63  34 30 37 36 64           |-fa47ecc4076d|
0000002d

csi.snapname
value (45 bytes) :
00000000  73 6e 61 70 73 68 6f 74  2d 36 37 31 37 32 63 61  |snapshot-67172ca|
00000010  30 2d 37 65 33 32 2d 34  33 61 30 2d 62 32 64 64  |0-7e32-43a0-b2dd|
00000020  2d 37 33 38 34 65 61 38  36 35 65 31 30           |-7384ea865e10|
0000002d

csi.source
value (44 bytes) :
00000000  63 73 69 2d 76 6f 6c 2d  32 35 33 38 61 65 62 38  |csi-vol-2538aeb8|
00000010  2d 38 33 31 32 2d 31 31  65 63 2d 39 32 31 64 2d  |-8312-11ec-921d-|
00000020  30 32 31 62 62 33 66 30  34 62 33 38              |021bb3f04b38|
0000002c

sh-4.4$ rados listomapvals csi.snap.5bff4d53-8322-11ec-8d81-fa47ecc4076d --pool=myfs-metadata --namespace=csi
error getting omap keys myfs-metadata/csi.snap.5bff4d53-8322-11ec-8d81-fa47ecc4076d: (2) No such file or directory
sh-4.4$ 
sh-4.4$ rados listomapvals csi.snaps.default --pool=myfs-metadata --namespace=csi
sh-4.4$ 
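
The check performed manually in the rados shell session above can also be scripted. Below is a minimal sketch, again assuming go-ceph and the pool/namespace from this demo, that lists whatever csi.snap.* entries remain in csi.snaps.default; an empty result matches the empty listomapvals output and means no leaked omap entries.

package main

import (
	"fmt"

	"github.com/ceph/go-ceph/rados"
)

func main() {
	conn, err := rados.NewConn()
	if err != nil {
		panic(err)
	}
	if err := conn.ReadDefaultConfigFile(); err != nil {
		panic(err)
	}
	if err := conn.Connect(); err != nil {
		panic(err)
	}
	defer conn.Shutdown()

	ioctx, err := conn.OpenIOContext("myfs-metadata")
	if err != nil {
		panic(err)
	}
	defer ioctx.Destroy()
	ioctx.SetNamespace("csi")

	// equivalent of `rados listomapvals csi.snaps.default`, filtered to csi.snap.* keys
	vals, err := ioctx.GetOmapValues("csi.snaps.default", "", "csi.snap.", 1024)
	if err != nil {
		panic(err)
	}
	if len(vals) == 0 {
		fmt.Println("no leaked snapshot omap entries")
		return
	}
	for key, val := range vals {
		fmt.Printf("%s -> %s\n", key, string(val))
	}
}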

@mergify mergify bot added the component/cephfs (Issues related to CephFS) and bug (Something isn't working) labels on Feb 1, 2022
@Madhu-1 Madhu-1 requested review from a team February 1, 2022 05:50
Madhu-1 (Collaborator, Author) commented Feb 7, 2022

@Mergifyio rebase

mergify bot (Contributor) commented Feb 7, 2022

rebase

✅ Branch has been successfully rebased

Rakshith-R (Contributor) commented

@Mergifyio rebase

mergify bot (Contributor) commented Feb 7, 2022

rebase

✅ Branch has been successfully rebased

Madhu-1 (Collaborator, Author) commented Feb 8, 2022

@Mergifyio rebase

mergify bot (Contributor) commented Feb 8, 2022

rebase

✅ Branch has been successfully rebased

Madhu-1 (Collaborator, Author) commented Feb 8, 2022

@Mergifyio rebase

mergify bot (Contributor) commented Feb 8, 2022

rebase

☑️ Nothing to do

  • -closed [📌 rebase requirement]
  • #commits-behind>0 [📌 rebase requirement]

The merged commit message:

The omap is stored with the requested snapshot name, not with the subvolume snapshot name. This fix uses the correct snapshot request name to clean up the omap once the subvolume snapshot is deleted.

fixes: ceph#2832

Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
Labels
bug (Something isn't working), component/cephfs (Issues related to CephFS)
Development

Successfully merging this pull request may close these issues.

cephfs: snapshot omap leak after deletesnapshot
4 participants