[BUG] Error writing node IP when joining a node; the status is stuck in Upgrading #648
Comments
Thanks for your feedback. Are all cluster nodes running CentOS 7.9, or only the newly added worker node?
vps-regtech.log
Is the join-node action stuck at the last line of the log you provided?
Q1: Yes
Sometimes the native provider can't catch the join error correctly. When this happens, the cluster's status stays in Upgrading forever.
Okay, I'll rebuild the cluster. Thank you for your answer.
@xuzheng0017 There's no need to rebuild the cluster. The K3s cluster itself isn't affected by the AutoK3s cluster status.
Okay, but the page gives me no option to join other nodes while it's in this state.
The workaround below may help you:
Once the join process is complete, the cluster status will be refreshed to Running and the UI will work properly.
The bug is related to incorrect error capture in a defer function. We'll fix this in the next version.
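The defer pitfall described above is a common one in Go. The sketch below is a minimal illustration, not AutoK3s's actual code: when a deferred closure assigns a recovered error to a plain local variable instead of a named return value, the caller receives nil and the join appears to have succeeded, so the status is never corrected.

```go
package main

import "fmt"

// doJoin stands in for the join logic; here it always panics.
func doJoin() { panic("error writing node IP") }

// joinBroken recovers the panic but writes the error into a local
// variable. After recover, the function returns its (zero-valued)
// unnamed result, so the caller sees nil and thinks the join succeeded.
func joinBroken() error {
	var err error
	defer func() {
		if r := recover(); r != nil {
			err = fmt.Errorf("join failed: %v", r) // assignment is lost
		}
	}()
	doJoin()
	return err
}

// joinFixed uses a named return value, so the deferred assignment
// is visible to the caller.
func joinFixed() (err error) {
	defer func() {
		if r := recover(); r != nil {
			err = fmt.Errorf("join failed: %v", r)
		}
	}()
	doJoin()
	return
}

func main() {
	fmt.Println(joinBroken()) // nil: the error was swallowed
	fmt.Println(joinFixed())  // the error reaches the caller
}
```

With the broken variant, the controller never observes a join failure, which matches the "stuck in Upgrading" symptom.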
I have encountered another problem:

81d5d17a77de:/home/shell # autok3s join -p native --name vps-cargogo --ip xx.xx.xx.xx --ssh-user root --ssh-key-path /root/.autok3s/vps-cargogo/id_rsa --worker-ips xx.xx.xx.xx
time="2023-12-06T14:53:03+08:00" level=info msg="[native] begin to join nodes for vps-cargogo..."
time="2023-12-06T14:53:03+08:00" level=info msg="[native] executing join k3s node logic"
time="2023-12-06T14:53:03+08:00" level=info msg="[native] successfully executed join k3s node logic"
time="2023-12-06T14:53:03+08:00" level=info msg="[native] successfully executed join logic"
So the only way to rejoin is to run commands on the node itself?
Yes. AutoK3s can't track your operation because the node was removed manually, which didn't update the AutoK3s database. So you can't rejoin the node through AutoK3s: from AutoK3s's point of view, the node is still in the cluster.
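The mismatch described above can be sketched in a few lines of Go. This is a hypothetical data model for illustration, not AutoK3s's real schema: the management database keeps its own node list, so a node removed out of band (e.g. via kubectl) is still recorded as joined, and a rejoin through the tool is rejected as a duplicate.

```go
package main

import "fmt"

// clusterState is a hypothetical stand-in for a management database
// that tracks which node IPs it believes are joined.
type clusterState struct {
	nodes map[string]bool // node IP -> joined, per stored state
}

// join refuses a node the stored state already lists as a member.
func (c *clusterState) join(ip string) error {
	if c.nodes[ip] {
		return fmt.Errorf("node %s is already in the cluster per stored state", ip)
	}
	c.nodes[ip] = true
	return nil
}

func main() {
	// The database still records 10.0.0.2, but the node was removed
	// manually, outside the tool, so the stored state is stale.
	c := &clusterState{nodes: map[string]bool{"10.0.0.2": true}}
	fmt.Println(c.join("10.0.0.2")) // rejected: state says it's a member
	fmt.Println(c.join("10.0.0.3")) // a genuinely new node joins fine
}
```

Rejoining directly on the node (as in the `autok3s join` log above) bypasses the stale record, which is why it works while the UI path does not.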
Tested with v0.9.2-rc1. AutoK3s now returns the correct cluster status when joining nodes fails.
Describe the bug
Error writing node IP when joining a node; the status has been stuck in Upgrading.
Expected behavior
Screenshots
Environments
Additional context