-
Hi Team, Setting up Rook with Ceph RBD storage on K8s was quite simple! Thanks for building such an awesome experience. I had a question on how to use Rook in K8s environments that perform rolling upgrades of the nodes. In my testing, provisioning a new cluster (e.g. 1 CP/3 workers) with Rook works great. However, when I upgrade K8s (e.g. 1.18.17 -> 1.19.9), as new nodes join the cluster, existing nodes are evicted and eventually deleted. The MON and OSD pods for the older nodes are not removed, and they're forever stuck in a pending state because the nodeSelector still targets the old node. I don't think this is a typical use case, but is it supported by Rook? Are there any other specific configurations other than
-
@saamalik If you're installing in an environment where nodes are often recycled, you will want to create a cluster based on the cluster-on-pvc.yaml example. This decouples the underlying storage from the local node and allows the mon and osd daemons to move to new nodes.
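For reference, a minimal sketch of what a PVC-backed CephCluster spec looks like, following the shape of the cluster-on-pvc.yaml example: the mons and OSDs draw their storage from PersistentVolumeClaims rather than local host paths, so they can reschedule when a node is replaced. The image tag, storage class name, and sizes below are placeholders, not values from this thread:

```yaml
apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook-ceph
spec:
  cephVersion:
    image: quay.io/ceph/ceph:v16   # placeholder; pick a supported Ceph release
  mon:
    count: 3
    # Mon data lives on a PVC instead of the node's local disk,
    # so a mon can come back up on a different node.
    volumeClaimTemplate:
      spec:
        storageClassName: my-storage-class   # placeholder storage class
        resources:
          requests:
            storage: 10Gi
  storage:
    storageClassDeviceSets:
      - name: set1
        count: 3
        # portable: true means the OSD PVC is not pinned to one node.
        portable: true
        volumeClaimTemplates:
          - metadata:
              name: data
            spec:
              storageClassName: my-storage-class   # placeholder storage class
              volumeMode: Block
              accessModes:
                - ReadWriteOnce
              resources:
                requests:
                  storage: 100Gi
```

With this layout, when a node is drained and deleted during a rolling upgrade, the mon and OSD pods can be rescheduled onto the replacement nodes and reattach their PVCs, instead of staying Pending against a nodeSelector for a node that no longer exists.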
-
Our use case is that we are using cluster-api in a bare-metal environment. How could we achieve a similar result? From my point of view there are three possibilities: