rke version v0.0.7-dev
Steps to reproduce the problem:
Create a kubernetes setup with the following configuration:
node1 - etcd, controlplane
node2 - worker
node3 - controlplane
Modify node1's role to be only "etcd" (by removing the controlplane role; see the cluster.yml sketch below).
This leaves the k8s cluster broken.
Tearing down the controlplane on node1 also removes the etcd data dir, which breaks the cluster.
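For reference, a minimal cluster.yml sketch of the layout that triggers this, assuming placeholder addresses and an illustrative SSH user (neither is taken from the report):

```yaml
nodes:
  # node1 initially carries both roles:
  #   role: [etcd, controlplane]
  # after the change that triggers the bug, it is etcd-only:
  - address: 10.0.0.1          # placeholder address
    user: ubuntu               # assumed SSH user
    role: [etcd]
  - address: 10.0.0.2
    user: ubuntu
    role: [worker]
  - address: 10.0.0.3
    user: ubuntu
    role: [controlplane]
```

Re-running `rke up` with node1 changed to etcd-only tears down the controlplane components on node1 and, per this report, wipes the etcd data dir along with them.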
rke version v0.0.7-dev
Steps to reproduce the problem:
Create a kubernetes setup with the following configuration:
node1 - etcd , controlplane
node2, node3 - worker
node4, node5 - controlplane
Update node1's role to be only "etcd" (cluster.yml sketch below).
node1 is removed from the k8s cluster as a master node.
The etcd container on this node continues to run.
The k8s cluster stays healthy: existing pods keep working and new pods can be created.
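For this scenario, a corresponding cluster.yml sketch after the role change (addresses and SSH user are again placeholders, not from the report):

```yaml
nodes:
  - address: 10.0.0.1          # node1, changed from [etcd, controlplane] to etcd-only
    user: ubuntu               # assumed SSH user
    role: [etcd]
  - address: 10.0.0.2          # node2
    user: ubuntu
    role: [worker]
  - address: 10.0.0.3          # node3
    user: ubuntu
    role: [worker]
  - address: 10.0.0.4          # node4
    user: ubuntu
    role: [controlplane]
  - address: 10.0.0.5          # node5
    user: ubuntu
    role: [controlplane]
```

With node4 and node5 still serving as controlplane nodes, removing the role from node1 does not break the cluster, matching the healthy result described above.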