
etcdctl: fix member add (again...) #11638

Merged
merged 1 commit into etcd-io:master on Feb 29, 2020

Conversation

jingyih
Contributor

@jingyih jingyih commented Feb 19, 2020

I just realized we could simply use the member information in the member add response, which is guaranteed to be up to date.

We tried to fix `etcdctl member add` in #11194, but that does not solve the problem when the client balancer is given 2 endpoints and is doing round-robin.

Fixes #11554

Use members information from member add response, which is
guaranteed to be up to date.
@jingyih
Contributor Author

jingyih commented Feb 19, 2020

cc @gyuho @jpbetz

@jingyih jingyih changed the title etcdctl: fix member add etcdctl: fix member add (again...) Feb 19, 2020
@jpbetz
Contributor

jpbetz commented Feb 19, 2020

LGTM

@jingyih jingyih merged commit d774916 into etcd-io:master Feb 29, 2020
@jingyih jingyih deleted the fix_etcdctl_member_list branch February 29, 2020 15:08
spzala added a commit that referenced this pull request Mar 11, 2020
…8-upstream-release-3.3

Automated cherry pick of #11638 on release-3.3
spzala added a commit that referenced this pull request Mar 11, 2020
…8-upstream-release-3.4

Automated cherry pick of #11638 on release-3.4
@eselvam

eselvam commented Jul 15, 2020

{"level":"warn","ts":"2020-07-15T05:45:40.089+0100","caller":"clientv3/retry_interceptor.go:61","msg":"retrying of unary invoker failed","target":"passthrough:///https://ipmasked:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = context deadline exceeded"}

This happens when we add stacked Kubernetes masters following the instructions at kubernetes.io. It occurs on the second master node.

The `etcdctl` endpoint status and member list display correctly, with node 1 as master and node 2 as false; however, when we take master 1 down, the entire cluster goes down.

Kubernetes version: 1.18.5, etcd version: 3.4.3

+--------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
|   ENDPOINT   |        ID        | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS |
+--------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
| master1:2379 | 16cf629ee72c2590 | 3.4.3   | 3.2 MB  | false     | false      | 6         | 3637096    | 3637096            |        |
| master2:2379 | 448a38484560a13c | 3.4.3   | 3.2 MB  | true      | false      | 6         | 3637096    | 3637096            |        |
+--------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+

@jingyih
Contributor Author

jingyih commented Jul 15, 2020

@eselvam I am not sure I follow your comment. Are you trying to add a new node to a 2-node cluster with 1 of the nodes down?

Successfully merging this pull request may close these issues.

Member Add Failure