unable to scale self-hosted etcd #346
Comments
@janwillies How many nodes do you have in your Kubernetes cluster? Can you get the logs from the etcd operator via kubectl logs?
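For reference, a minimal way to pull those logs, assuming the etcd operator runs as a pod in the kube-system namespace (the pod name and namespace here are assumptions, not taken from this thread):

```sh
# Find the etcd operator pod (the name prefix is illustrative)
kubectl -n kube-system get pods | grep etcd-operator

# Show its recent log output (the command is `kubectl logs`, plural)
kubectl -n kube-system logs <etcd-operator-pod-name> --tail=100
```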
It's not patching: you are overwriting the self-hosted etcd spec.
I have only two nodes, but this shouldn't matter, because it already fails when starting the second etcd node.
etcd-pod:
@hongchaodeng what do you mean by "not patching"? What else should I use to scale the etcd cluster?
Hi @janwillies. Sure, let me explain further. First of all, this is changing the cluster spec:
What I would recommend is a reconciliation loop, as sketched below:
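A minimal sketch of such a loop with kubectl, assuming the self-hosted cluster object is named kube-etcd in the kube-system namespace (the object kind, name, and namespace are assumptions based on a default bootkube setup):

```sh
# 1. Read the current spec from the API server instead of
#    composing a fresh manifest from scratch
kubectl -n kube-system get cluster kube-etcd -o yaml > kube-etcd.yaml

# 2. Edit only the field you want to change, e.g. spec.size: 3

# 3. Write the object back. `kubectl replace` submits the stored
#    resourceVersion, so if the operator updated the spec in the
#    meantime the request fails with a conflict instead of silently
#    overwriting it; on conflict, re-read (step 1) and retry.
kubectl -n kube-system replace -f kube-etcd.yaml
```

kubectl edit or kubectl patch accomplish the same read-modify-write in one step.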
Two more notes:
The node count is unrelated to your current failure; what hongchao suggested is the root cause. But please note that two self-hosted etcd members cannot run on the same physical node: the pods use the host network, so a second member on the same host would collide on the etcd client and peer ports. You need at least 3 nodes to scale up to 3.
Cool, it's working now:
Appreciate the help, @xiang90 and @hongchaodeng!
Original issue:
I'm unable to scale the etcd cluster that I brought up with bootkube.
Bootkube version: master from today, with @ericchiang's RBAC PR
Platform: Ubuntu 16.04.2 LTS
Then I joined a second master node and tried scaling the etcd cluster:
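(The exact command isn't preserved above; a typical attempt would be raising the size field on the cluster object, where the object kind and name below are assumptions:)

```sh
# Open the self-hosted etcd cluster object and bump spec.size, e.g. 1 -> 2
kubectl -n kube-system edit cluster kube-etcd
```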
The Kubernetes cluster becomes unavailable, and I see this repeating in the etcd container:
It's looking up the internal DNS name on the host, probably because the etcd cluster runs in the host network namespace: a host-network pod resolves names through the host's /etc/resolv.conf, which doesn't point at the cluster DNS, so cluster-internal service names fail to resolve.
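A quick way to confirm that diagnosis from the affected host; the service DNS name below is illustrative of the operator's naming, not copied from the missing log:

```sh
# Run on the host: cluster-internal service names fail to resolve
# because the host's resolver doesn't know about kube-dns
nslookup kube-etcd-0000.kube-etcd.kube-system.svc.cluster.local
```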
cc @hongchaodeng and @xiang90