I've updated my Rook (as described in #14116) from v1.10.11 to v1.13.8 step by step (v1.10.11 -> v1.11.11 -> v1.12.11 -> v1.13.8). Now I have csi-rbdplugin-provisioner pods only on nodes which hold OSDs. All other nodes have no csi-rbdplugin-provisioner pod/deployment/... I've restarted the operator several times, but it had no effect.
Environment:
OS (e.g. from /etc/os-release): Ubuntu 20.04.6 LTS (Focal Fossa)
Kernel (e.g. uname -a): 5.15.0-105-generic
Cloud provider or hardware configuration:
Rook version (use rook version inside of a Rook Pod): v1.13.8
Storage backend version (e.g. for ceph do ceph -v): ceph version 17.2.6 (d7ff0d10654d2280e08f1ab989c7cdf3064446a5) quincy (stable)
Kubernetes version (use kubectl version): v1.29.2
Kubernetes cluster type (e.g. Tectonic, GKE, OpenShift):
Storage backend status (e.g. for Ceph use ceph health in the Rook Ceph toolbox): HEALTH_OK
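A minimal sketch of how one might diagnose this, assuming the default `rook-ceph` namespace and the standard Rook operator ConfigMap name (`rook-ceph-operator-config`) — adjust names for your deployment. The idea is to compare where the provisioner pods actually landed against any CSI placement settings the operator is applying:

```shell
# Show which nodes the rbd provisioner pods are scheduled on
# (hypothetical namespace: rook-ceph)
kubectl -n rook-ceph get pods -l app=csi-rbdplugin-provisioner -o wide

# Inspect the provisioner Deployment's affinity/nodeSelector,
# which the operator renders from its CSI settings
kubectl -n rook-ceph get deployment csi-rbdplugin-provisioner \
  -o jsonpath='{.spec.template.spec.affinity}'

# Check the operator ConfigMap for CSI placement-related keys
# (e.g. CSI_PROVISIONER_NODE_AFFINITY), which may have been
# introduced or changed across the upgrade path
kubectl -n rook-ceph get configmap rook-ceph-operator-config -o yaml \
  | grep -i -E 'CSI_(PROVISIONER|PLUGIN)'
```

If the rendered affinity matches the OSD nodes, the placement is coming from the operator configuration (ConfigMap or Helm values) rather than from scheduling pressure, which would explain why restarting the operator alone changes nothing.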