Adding Worker Node to existing cluster using Rancher #7890
Comments
Did you customize the cluster configuration somehow, to add this node to the agent configuration? The system-default-registry flag is only valid for servers, not for agents, but it appears for some reason this flag is being passed to your agent nodes.
No, I ran the following command as generated by the registration page:
I just removed the domains and the actual token, but otherwise that is a verbatim copy/paste. Under 'Registries' in the Cluster Configuration I do have a custom registry.
Did you customize anything else when creating the cluster? I'm not sure why Rancher would be attempting to pass the system-default-registry flag to an agent. I've not seen this error before, but you might try upgrading Rancher to 2.7.5 on the off chance this is something that has already been fixed?
No, everything else was left as-is. The nodes all use private IPs, but aside from that nothing else was customized. I can try upgrading to 2.7.5 and get back to you.
Yeah, see if that helps. If not, I suspect this is going to need to be opened against rancher/rancher, since that is responsible for setting the correct configuration for the servers and agents. I know that private registry is pretty widely used though, and I've never seen it try to pass that arg to agents, so I would be pretty surprised if this is indeed an unresolved issue in current releases.
Update: when I upgraded to the latest version, it subsequently went to all of the nodes and upgraded them to the same version (as it should). During that process, it grabbed the node that was having issues, upgraded it, and it then connected just fine as a worker. I am going to be adding a few more nodes in the coming days, so if I see the error again I will update this.
@textgroove-steven Is everything looking good on your end? If so, I will go ahead and close this issue.
I was able to add more nodes without issues. I will keep this bookmarked and, if I see it again, update it with more logs.
Environmental Info:
K3s Version: v1.23.17+k3s
Node(s) CPU architecture, OS, and Version:
Linux worker-4 5.15.0-76-generic #83-Ubuntu SMP Thu Jun 15 19:16:32 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
Cluster Configuration:
4 nodes total: 3 with all roles, 1 worker
Describe the bug:
When adding a worker node, it does not join. Looking through the logs, I see the following error:
Incorrect Usage: flag provided but not defined: -system-default-registry
Not sure how to fix this or how to proceed.
Steps To Reproduce:
Expected behavior:
Expected worker node to join the cluster
Actual behavior:
The node is not joining, with the error:
configuring worker node(s) custom-0727771e1c0e: error applying plan -- check rancher-system-agent.service logs on node for more information, waiting for agent to check in and apply initial plan
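The error above points at the rancher-system-agent service on the node itself. A minimal sketch of pulling its recent log lines there, assuming a systemd-based node (the fallback message only fires on machines where the unit or journalctl is absent):

```shell
# Run on the failing worker node. The service name comes from the
# error message above; -n 100 limits output to the last 100 lines.
LOGS=$(journalctl -u rancher-system-agent.service --no-pager -n 100 2>/dev/null \
    || echo "rancher-system-agent.service logs not available on this machine")
printf '%s\n' "$LOGS"
```

On a node where the unit is running, these lines should include the "flag provided but not defined" message quoted in this issue.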
Additional context / logs:
The cluster was originally created from the Rancher UI, v2.7.3. The original cluster was 3 primary nodes, all working just fine. I am trying to add more worker nodes. The new node keeps trying to boot in a loop with the error
Incorrect Usage: flag provided but not defined: -system-default-registry
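For illustration only (the actual resolution in this thread was upgrading Rancher): as noted above, system-default-registry is a k3s server flag, so any argument list handed to `k3s agent` must not contain it. A hedged sketch, with a placeholder server URL, token, and registry, of stripping that flag from an argument string:

```shell
#!/bin/sh
# Hypothetical argument string of the kind a registration command might
# carry; server URL, token, and registry are placeholders, not real values.
ARGS="--server https://server.example.com:6443 --token REDACTED --system-default-registry registry.example.com"

# Drop the server-only flag and its value before any `k3s agent` invocation.
AGENT_ARGS=$(echo "$ARGS" | sed -E 's/--system-default-registry[ =][^ ]+ ?//')

printf '%s\n' "$AGENT_ARGS"
```

This is only a way to visualize the mismatch the error describes; in practice the registration command is generated by Rancher, and the fix belongs there rather than in hand-edited arguments.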