
Embedded High Availability cluster isn't available if original node is killed #1798

Closed
FlexibleToast opened this issue May 14, 2020 · 1 comment

FlexibleToast commented May 14, 2020

Version:

k3s version v1.18.2+k3s1 (698e444)

K3s arguments:

First master:

curl -sfL https://get.k3s.io | \
INSTALL_K3S_EXEC="server --cluster-init" sh -

Remaining master nodes (two more, for three masters total):

curl -sfL https://get.k3s.io | \
K3S_URL=https://<ip address of initial master >:6443 \
K3S_TOKEN=< token from initial master > \
INSTALL_K3S_EXEC="server --server https://<ip address of initial master >:6443" \
sh -

Finally one worker:

curl -sfL https://get.k3s.io | \
K3S_URL=https://<ip address of initial master >:6443 \
K3S_TOKEN=< token from initial master > \
sh -

Describe the bug

After everything is installed, if I run kubectl get nodes I get the expected result: the 4 nodes, three of them with the master role. However, if I shut down the initial node, kubectl fails from then on with:
Error from server (Timeout): the server was unable to return a response in the time allotted, but may still be processing the request (get nodes)
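
For reference, a minimal check from a surviving master (hostname is a placeholder): k3s bundles its own kubectl, and on a server node it talks to that node's local API server on 127.0.0.1:6443, so the original master is not in the request path at all:

ssh <second master>
sudo k3s kubectl get nodes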

To Reproduce

Redeploy using same commands (I'm using Ansible playbooks so I know the commands are exactly the same).

Expected behavior

Another master node takes over, and when kubectl get nodes is run on it, it shows the 4 nodes with the original master as NotReady.
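
For illustration only (node names and ages are hypothetical), the expected output on a surviving master would look roughly like:

NAME       STATUS     ROLES    AGE   VERSION
master-1   NotReady   master   1h    v1.18.2+k3s1
master-2   Ready      master   1h    v1.18.2+k3s1
master-3   Ready      master   1h    v1.18.2+k3s1
worker-1   Ready      <none>   1h    v1.18.2+k3s1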

Actual behavior

kubectl on the remaining master nodes returns the timeout error above while the first master node is down.

Additional context / logs

@FlexibleToast (Author)

Upon further testing, this seems to be related to issue #1391. Closing as a duplicate.
