after oc cluster down openshift.local.volumes remain which cannot be deleted #21338

@mandibuswell

Description

I am using the 3.11 version from Red Hat with oc cluster up. After running oc cluster down, I try to remove all data so I can start fresh, but a number of items cannot be deleted:

> [root@ip-XXXXX ec2-user]# rm -Rf persistence2/
> rm: cannot remove ‘persistence2/openshift.local.volumes/pods/3c839c0a-d6a3-11e8-9c7e-02c8661c23be/volumes/kubernetes.io~secret/serving-cert’: Device or resource busy
> rm: cannot remove ‘persistence2/openshift.local.volumes/pods/3c839c0a-d6a3-11e8-9c7e-02c8661c23be/volumes/kubernetes.io~secret/openshift-service-cert-signer-operator-token-rhkhc’: Device or resource busy
> rm: cannot remove ‘persistence2/openshift.local.volumes/pods/3c910ae8-d6a3-11e8-9c7e-02c8661c23be/volumes/kubernetes.io~secret/kube-dns-token-xt8ws’: Device or resource busy
> rm: cannot remove ‘persistence2/openshift.local.volumes/pods/3c90dc7d-d6a3-11e8-9c7e-02c8661c23be/volumes/kubernetes.io~secret/kube-proxy-token-sdtxb’: Device or resource busy
> rm: cannot remove ‘persistence2/openshift.local.volumes/pods/3c93954d-d6a3-11e8-9c7e-02c8661c23be/volumes/kubernetes.io~secret/openshift-apiserver-token-z5nd4’: Device or resource busy
> rm: cannot remove ‘persistence2/openshift.local.volumes/pods/3c93954d-d6a3-11e8-9c7e-02c8661c23be/volumes/kubernetes.io~secret/serving-cert’: Device or resource busy
> rm: cannot remove ‘persistence2/openshift.local.volumes/pods/412ba087-d6a3-11e8-9c7e-02c8661c23be/volumes/kubernetes.io~secret/signing-key’: Device or resource busy
> rm: cannot remove ‘persistence2/openshift.local.volumes/pods/412ba087-d6a3-11e8-9c7e-02c8661c23be/volumes/kubernetes.io~secret/serving-cert’: Device or resource busy
> rm: cannot remove ‘persistence2/openshift.local.volumes/pods/412ba087-d6a3-11e8-9c7e-02c8661c23be/volumes/kubernetes.io~secret/service-serving-cert-signer-sa-token-zh86w’: Device or resource busy
> rm: cannot remove ‘persistence2/openshift.local.volumes/pods/427ae244-d6a3-11e8-9c7e-02c8661c23be/volumes/kubernetes.io~secret/apiservice-cabundle-injector-sa-token-nxs4c’: Device or resource busy
> rm: cannot remove ‘persistence2/openshift.local.volumes/pods/427ae244-d6a3-11e8-9c7e-02c8661c23be/volumes/kubernetes.io~secret/serving-cert’: Device or resource busy
> rm: cannot remove ‘persistence2/openshift.local.volumes/pods/6282df44-d6a3-11e8-9c7e-02c8661c23be/volumes/kubernetes.io~secret/openshift-controller-manager-token-2spvq’: Device or resource busy

The only way to remove these files is to restart the VM and then delete them before starting OpenShift again.
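The reboot can probably be avoided by unmounting the leftover tmpfs secret volumes first. The sketch below is an untested suggestion, assuming the busy paths are still active kubelet mounts; BASE_DIR is an assumption and should match the --base-dir passed to oc cluster up. Run as root.

```shell
# Sketch of a reboot-free cleanup (assumption: the "busy" paths are
# leftover tmpfs secret-volume mounts, not open file handles).
# BASE_DIR is hypothetical -- set it to your `oc cluster up --base-dir`.
BASE_DIR="/home/ec2-user/persistence2"

# /proc/mounts lists active mounts; field 2 is the mount point.
# Sort longest-first so nested mounts are unmounted before their parents.
awk -v d="$BASE_DIR" 'index($2, d) == 1 {print $2}' /proc/mounts \
  | sort -r \
  | while read -r mnt; do
      umount "$mnt"
    done

# With nothing mounted underneath, the directory should now delete.
rm -rf "$BASE_DIR"
```

A rough equivalent one-liner, under the same assumption, would be `mount | grep openshift.local.volumes | awk '{print $3}' | xargs -r umount`.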

Version

[root@ip-XXXXX ec2-user]# oc version
oc v3.11.23
kubernetes v1.11.0+d4cacc0
features: Basic-Auth GSSAPI Kerberos SPNEGO

Server https://127.0.0.1:8443
kubernetes v1.11.0+d4cacc0

Steps To Reproduce

1. oc cluster up --base-dir='/home/ec2-user/persistence2/' --skip-registry-check=true --public-hostname="ec2-xxxx.ap-southeast-2.compute.amazonaws.com" --routing-suffix="xxxxx.nip.io"
2. After the cluster is up and running, run oc cluster down.
3. Run rm -Rf /home/ec2-user/persistence2/

Current Result

The same "rm: cannot remove ... Device or resource busy" errors shown in the description above, one for each secret volume under openshift.local.volumes/pods/.
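One way to confirm the suspected cause, on the assumption that these directories are still-mounted kubelet secret volumes rather than files held open by a process, is to look for them in /proc/mounts:

```shell
# Check whether the undeletable secret directories are still mounted.
# Each matching line (typically of type tmpfs) would explain one
# "Device or resource busy" error from rm.
if grep "openshift.local.volumes" /proc/mounts; then
    echo "leftover secret mounts found"
else
    echo "no leftover mounts under openshift.local.volumes"
fi
```

If this prints matching tmpfs lines after oc cluster down, it suggests the teardown is not unmounting the pod secret volumes.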
Expected Result

All files under the directory can be deleted.

Additional Information

The diagnostics below were run after a fresh start.

[root@ip-XXXXXX ec2-user]# oc adm diagnostics
[Note] Determining if client configuration exists for client/cluster diagnostics
Info:  Successfully read a client config file at '/root/.kube/config'
[Note] Could not configure a client with cluster-admin permissions for the current server, so cluster diagnostics will be skipped

[Note] Running diagnostic: ConfigContexts[rhpam71-install-developer/127-0-0-1:8443/developer]
       Description: Validate client config context is complete and has connectivity
       
Info:  For client config context 'rhpam71-install-developer/127-0-0-1:8443/developer':
       The server URL is 'https://127.0.0.1:8443'
       The user authentication is 'developer/127-0-0-1:8443'
       The current project is 'rhpam71-install-developer'
       Successfully requested project list; has access to project(s):
         [myproject]
       
[Note] Running diagnostic: ConfigContexts[openshift-web-console/127-0-0-1:8443/system:admin]
       Description: Validate client config context is complete and has connectivity
       
Info:  For client config context 'openshift-web-console/127-0-0-1:8443/system:admin':
       The server URL is 'https://127.0.0.1:8443'
       The user authentication is 'system:admin/127-0-0-1:8443'
       The current project is 'openshift-web-console'
       Successfully requested project list; has access to project(s):
         [default kube-dns kube-proxy kube-public kube-system myproject openshift openshift-apiserver openshift-controller-manager openshift-core-operators ...]
       
[Note] Running diagnostic: ConfigContexts[default/ec2-XXXXXX-ap-southeast-2-compute-amazonaws-com:8443/system:admin]
       Description: Validate client config context is complete and has connectivity
       
Info:  For client config context 'default/ec2-XXXXXX-ap-southeast-2-compute-amazonaws-com:8443/system:admin':
       The server URL is 'https://ec2-XXXXXX.ap-southeast-2.compute.amazonaws.com:8443'
       The user authentication is 'system:admin/127-0-0-1:8443'
       The current project is 'default'
       Successfully requested project list; has access to project(s):
         [default kube-dns kube-proxy kube-public kube-system myproject openshift openshift-apiserver openshift-controller-manager openshift-core-operators ...]
       
[Note] Running diagnostic: DiagnosticPod
       Description: Create a pod to run diagnostics from the application standpoint
       
WARN:  [DCli2006 from diagnostic DiagnosticPod@openshift/origin/pkg/oc/cli/admin/diagnostics/diagnostics/client/pod/run_diagnostics_pod.go:187]
       Timed out preparing diagnostic pod logs for streaming, so this diagnostic cannot run.
       It is likely that the image 'registry.redhat.io/openshift3/ose-deployer:v3.11.23' was not pulled and running yet.
       Last error: (*errors.StatusError[2]) container "pod-diagnostics" in pod "pod-diagnostic-test-fnnlb" is waiting to start: image can't be pulled: 
       
[Note] Summary of diagnostics execution (version v3.11.23):
[Note] Warnings seen: 1


Labels: lifecycle/rotten (denotes an issue or PR that has aged beyond stale and will be auto-closed)