
Deleting CephNFS CRD doesn't delete exports #10174

Open
tahsinrahman opened this issue Apr 27, 2022 · 5 comments

@tahsinrahman

Is this a bug report or feature request?

  • Bug Report

Deviation from expected behavior:
Deleting a CephNFS CR doesn't delete its NFS exports.

Expected behavior:
Deleting a CephNFS CR should delete the relevant exports.

How to reproduce it (minimal and precise):

apiVersion: ceph.rook.io/v1
kind: CephNFS
metadata:
  name: test-ceph-nfs
  namespace: rook-ceph
spec:
  rados:
    namespace: nfs-ns
    pool: myfs-replicated
  server:
    active: 1
$ ceph nfs cluster ls
test-ceph-nfs

$ ceph fs subvolume create myfs test-subvolume 10737418240 --namespace-isolated
$ ceph fs subvolume ls myfs
[
    {
        "name": "test-subvolume"
    }
]
$ ceph fs subvolume getpath myfs test-subvolume
/volumes/_nogroup/test-subvolume/67e0bf91-aea1-422a-8205-f1db85c0c075

$ ceph nfs export create cephfs test-ceph-nfs /test myfs /volumes/_nogroup/test-subvolume/67e0bf91-aea1-422a-8205-f1db85c0c075

$ ceph nfs export ls test-ceph-nfs
[
  "/test"
]

Then delete the CephNFS CR:

$ k delete cephnfs test-ceph-nfs
cephnfs.ceph.rook.io "test-ceph-nfs" deleted

Verify that the NFS cluster is deleted:

$ ceph nfs cluster ls

Apply the CephNFS CR again and observe that the previous export still exists:

$ ceph nfs cluster ls
test-ceph-nfs
$ ceph nfs export ls test-ceph-nfs
[
  "/test"
]
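For context on why the exports survive: the NFS-Ganesha configuration and export definitions are stored as RADOS objects in the pool and namespace given in the CephNFS spec (`myfs-replicated` / `nfs-ns` above), and deleting the CR does not remove those objects. As a workaround they can be inspected and removed manually with the `rados` CLI; this is a sketch, and the object names shown (`conf-nfs.<cluster>`, `export-<id>`) reflect what the Ceph nfs module typically creates and may differ by Ceph version:

```shell
# List leftover objects in the RADOS namespace used by the CephNFS CR.
# Expect objects like conf-nfs.test-ceph-nfs and export-1 (names vary by version).
rados -p myfs-replicated --namespace nfs-ns ls

# Remove a stale export object (repeat once per export),
# then the shared Ganesha config object.
rados -p myfs-replicated --namespace nfs-ns rm export-1
rados -p myfs-replicated --namespace nfs-ns rm conf-nfs.test-ceph-nfs
```

After this cleanup, re-applying the CephNFS CR should start from an empty export list.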

File(s) to submit:

  • Cluster CR (custom resource), typically called cluster.yaml, if necessary
  • Operator's logs, if necessary
  • Crashing pod(s) logs, if necessary

To get logs, use kubectl -n <namespace> logs <pod name>
When pasting logs, always surround them with backticks or use the insert code button from the GitHub UI.
Read GitHub documentation if you need help.

Environment:

  • OS (e.g. from /etc/os-release):
  • Kernel (e.g. uname -a):
  • Cloud provider or hardware configuration:
  • Rook version (use rook version inside of a Rook Pod): 1.8.8
  • Storage backend version (e.g. for ceph do ceph -v): 16.2.7
  • Kubernetes version (use kubectl version):
  • Kubernetes cluster type (e.g. Tectonic, GKE, OpenShift):
  • Storage backend status (e.g. for Ceph use ceph health in the Rook Ceph toolbox):
@github-actions

This issue has been automatically marked as stale because it has not had recent activity. It will be closed in a week if no further activity occurs. Thank you for your contributions.

@BlaineEXE BlaineEXE removed the wontfix label Jun 28, 2022
@BlaineEXE BlaineEXE self-assigned this Jun 28, 2022
@BlaineEXE
Member

This issue has been on my radar. Not stale. My current thinking has been to add a config similar to the CephFilesystem's preserveFilesystemOnDelete. By default, when deleting the CephNFS, all exports and configs should be removed. But if preserveFilesystemOnDelete is set, then the current behavior (don't delete exports or configs) should be followed.
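A sketch of what that proposal could look like on the CephNFS spec, using the CR from the reproduction above. The field name `preserveExportsOnDelete` is hypothetical, not an existing Rook API, and is shown only to illustrate the idea:

```yaml
apiVersion: ceph.rook.io/v1
kind: CephNFS
metadata:
  name: test-ceph-nfs
  namespace: rook-ceph
spec:
  rados:
    namespace: nfs-ns
    pool: myfs-replicated
  server:
    active: 1
  # Hypothetical field (not implemented): when true, keep exports and
  # Ganesha configs in RADOS on CR deletion; a false default would mean
  # exports are cleaned up when the CephNFS CR is deleted.
  preserveExportsOnDelete: true
```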

@github-actions

This issue has been automatically marked as stale because it has not had recent activity. It will be closed in a week if no further activity occurs. Thank you for your contributions.

@github-actions

github-actions bot commented Sep 3, 2022

This issue has been automatically closed due to inactivity. Please re-open if this still requires investigation.

@BlaineEXE
Member

Reopening. It's just not the highest-priority item alongside other NFS feature work.

@BlaineEXE BlaineEXE reopened this Sep 13, 2022
@travisn travisn removed the keepalive label Mar 27, 2024