
fix: configuring cephfs snapshots and clones #3742

Merged · 1 commit merged into ceph:devel on Apr 13, 2023

Conversation

riya-singhal31
Contributor

This commit updates the README with instructions on how to configure snapshots and clones of a CephFS PVC.

Fixes: #3691
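
For context, a minimal sketch of the flow the new README section documents, using the manifests referenced in the review below (run from the directory that contains them); this is not the exact README wording:

```bash
# Create a snapshot of an existing CephFS PVC and restore it to a new PVC.
kubectl create -f snapshotclass.yaml   # VolumeSnapshotClass; its clusterID must match the StorageClass
kubectl create -f snapshot.yaml        # VolumeSnapshot of the CephFS PVC
kubectl get volumesnapshot             # wait until READYTOUSE is true
kubectl create -f pvc-restore.yaml     # new PVC with a dataSource pointing at the snapshot
kubectl create -f pod-restore.yaml     # test pod mounting the restored PVC
```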

@mergify mergify bot added the bug Something isn't working label Apr 11, 2023
Comment on lines 344 to 345
To be sure everything is OK you can run `cephfs snap ls [your-pvc-name]` inside
one of your Ceph pod.
Collaborator

This is not correct, please check again.
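
For reference, one way to verify the snapshot from the Kubernetes side instead of from inside a Ceph pod (an editorial suggestion, not wording from the PR; the snapshot name is taken from the output quoted later in this review):

```bash
# Check that the VolumeSnapshot exists and is ready to use.
kubectl get volumesnapshot cephfs-pvc-snapshot
kubectl describe volumesnapshot cephfs-pvc-snapshot   # READYTOUSE should report true
```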

Comment on lines +347 to +353
To restore the snapshot to a new PVC, deploy
[pvc-restore.yaml](./cephfs/pvc-restore.yaml) and a testing pod
[pod-restore.yaml](./cephfs/pod-restore.yaml):

```bash
kubectl create -f pvc-restore.yaml
kubectl create -f pod-restore.yaml
```
Collaborator

Provide a cleanup step as well.
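
A possible cleanup step along the lines the reviewer asks for, assuming the same two manifests:

```bash
# Delete the test pod first, then the restored PVC.
kubectl delete -f pod-restore.yaml
kubectl delete -f pvc-restore.yaml
```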

[snapshotclass.yaml](./cephfs/snapshotclass.yaml) and
[snapshot.yaml](./cephfs/snapshot.yaml).

Once you created your Cephfs volume, you'll need to customize at least
Collaborator

Cephfs to CephFS

[snapshot.yaml](./cephfs/snapshot.yaml).

Once you created your Cephfs volume, you'll need to customize at least
`snapshotclass.yaml` and make sure the `clusterId` parameter matches
Collaborator

clusterId should be clusterID?
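
For context, a sketch of what the snapshot class could look like with the parameter spelled `clusterID`; the driver name and secret parameters follow the ceph-csi CephFS examples, and the values are placeholders, not taken from this PR:

```bash
cat <<'EOF' | kubectl create -f -
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: csi-cephfsplugin-snapclass
driver: cephfs.csi.ceph.com
deletionPolicy: Delete
parameters:
  # Must match the clusterID used in the CephFS StorageClass.
  clusterID: <cluster-id>
  csi.storage.k8s.io/snapshotter-secret-name: csi-cephfs-secret
  csi.storage.k8s.io/snapshotter-secret-namespace: default
EOF
```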

Comment on lines 340 to 341
NAME AGE
cephfs-pvc-snapshot 6s
Collaborator

I think it will show more output; can you please check?

$ kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
csi-cephfs-pvc Bound pvc-1ea51547-a88b-4ab0-8b4a-812caeaf025d 1Gi RWX csi-cephfs-sc 20h
cephfs-pvc-clone Bound pvc-b575bc35-d521-4c41-b4f9-1d733cd28fdf 1Gi RWX csi-cephfs-sc 39s
Collaborator

Provide a cleanup step at the end as well?
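
A possible cleanup for the clone example; `pvc-clone.yaml` is an assumed name for the manifest that created the `cephfs-pvc-clone` PVC shown in the output above:

```bash
# Remove the cloned PVC once it is no longer needed.
kubectl delete -f pvc-clone.yaml
```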

@@ -296,3 +296,76 @@ Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 4194304 bytes / 4194304 bytes
```
### How to create Cephfs Snapshots
Collaborator

Cephfs to CephFS; you are also talking about restore and PVC clone, can you reword this one?

Contributor Author

For clone, provided a separate heading.

Comment on lines 301 to 302
Before continuing, make sure you enabled the required
feature gate `VolumeSnapshotDataSource=true` in your Kubernetes cluster.
Collaborator

This is no longer required on the supported Kubernetes versions; can we skip it?
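
What is still required instead (an editorial note, not part of the PR) is that the external snapshotter CRDs and the snapshot controller are installed; a quick check:

```bash
# The snapshot CRDs must exist before VolumeSnapshots can be created.
kubectl get crd volumesnapshots.snapshot.storage.k8s.io \
  volumesnapshotclasses.snapshot.storage.k8s.io \
  volumesnapshotcontents.snapshot.storage.k8s.io
# The snapshot controller should be running (deployment name may differ by installation).
kubectl get pods -A | grep snapshot-controller
```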

@riya-singhal31
Contributor Author

Thanks for the review @Madhu-1, updated the PR with suggested changes.

@riya-singhal31
Contributor Author

@Rakshith-R Could you please review?

@Madhu-1
Collaborator

Madhu-1 commented Apr 13, 2023

@Mergifyio rebase

this commit updates the readme with the instructions on
how to configure snapshots and clones of cephfs pvc

Signed-off-by: riya-singhal31 <rsinghal@redhat.com>
@mergify
Contributor

mergify bot commented Apr 13, 2023

rebase

✅ Branch has been successfully rebased

@Madhu-1 added the component/docs, ready-to-merge, ci/skip/e2e, ci/skip/multi-arch-build, and ok-to-test labels and removed the bug label on Apr 13, 2023
@github-actions
/test ci/centos/k8s-e2e-external-storage/1.23

@mergify mergify bot added the bug Something isn't working label Apr 13, 2023

@github-actions
/test ci/centos/k8s-e2e-external-storage/1.24

@github-actions
/test ci/centos/k8s-e2e-external-storage/1.25

@github-actions
/test ci/centos/k8s-e2e-external-storage/1.26

@github-actions
/test ci/centos/mini-e2e-helm/k8s-1.23

@github-actions
/test ci/centos/mini-e2e-helm/k8s-1.24

@github-actions
/test ci/centos/mini-e2e-helm/k8s-1.25

@github-actions
/test ci/centos/mini-e2e-helm/k8s-1.26

@github-actions
/test ci/centos/mini-e2e/k8s-1.23

@github-actions
/test ci/centos/mini-e2e/k8s-1.24

@github-actions
/test ci/centos/mini-e2e/k8s-1.25

@github-actions
/test ci/centos/mini-e2e/k8s-1.26

@github-actions
/test ci/centos/upgrade-tests-cephfs

@github-actions
/test ci/centos/upgrade-tests-rbd

@github-actions github-actions bot removed the ok-to-test Label to trigger E2E tests label Apr 13, 2023
@Madhu-1 Madhu-1 requested a review from a team April 13, 2023 15:15
@mergify mergify bot merged commit 4a3550d into ceph:devel Apr 13, 2023
20 checks passed
Labels

bug Something isn't working
ci/skip/e2e skip running e2e CI jobs
ci/skip/multi-arch-build skip building on multiple architectures
component/docs Issues and PRs related to documentation
ready-to-merge This PR is ready to be merged and it doesn't need second review (backports only)
Development

Successfully merging this pull request may close these issues.

How can I configure a snapshot cephFS PVC through the helm chart?
3 participants