Initial setup of a k8s cluster with kubespray breaks if kube-vip is enabled #11229
Comments
Same for me. On a fresh cluster deployment, if kube-vip is enabled the deployment fails.
These are the logs from the kube-vip container:
And this is from journalctl:
I redacted the domain with example.com.
Workaround:
Same issue for me.
kube-vip requires workarounds to support k8s v1.29+.
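For context: since Kubernetes 1.29, kubeadm binds admin.conf to cluster-admin only after the first control plane has registered, while the bootstrap credentials live in /etc/kubernetes/super-admin.conf, so a kube-vip static pod that mounts admin.conf cannot talk to the API server during kubeadm init (see kube-vip/kube-vip#684). Below is a minimal sketch of the commonly reported manual workaround, written as an Ansible task; it is not the fix that was merged into Kubespray, and the paths assume kubeadm defaults.
# Sketch only: switch the kube-vip static pod to super-admin.conf (system:masters)
# while the first control plane initializes, because admin.conf has no
# cluster-admin rights until the node is registered (Kubernetes >= 1.29).
- name: Let kube-vip use super-admin.conf during kubeadm init on the first control plane
  ansible.builtin.replace:
    path: /etc/kubernetes/manifests/kube-vip.yaml
    regexp: 'path: /etc/kubernetes/admin\.conf'
    replace: 'path: /etc/kubernetes/super-admin.conf'
Once admin.conf has been granted cluster-admin, the change should be reverted so kube-vip goes back to the regular kubeconfig.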
It would be great to add kube-vip to the test matrix as well ...
Fixes: kubernetes-sigs#11229 Signed-off-by: Mathieu Parent <mathieu.parent@insee.fr>
Proposed PR: #11242
See kubernetes-sigs#11229 Signed-off-by: Mathieu Parent <mathieu.parent@insee.fr>
Thank you for saving my sanity!
I edited
i.e on first control-plane. See kubernetes-sigs#11229 Signed-off-by: Mathieu Parent <mathieu.parent@insee.fr>
Quoting kube-vip/kube-vip#684 (comment):
So, a better solution is available now.
Nope, this isn't working. kube-vip/kube-vip#684 (comment) is still an issue.
What happened?
An initial cluster creation always breaks on registering the first master if kube-vip is enabled.
What did you expect to happen?
In the initial phase, kube-vip should not block the registration of the first control plane.
How can we reproduce it (as minimally and precisely as possible)?
Deploy a minimal cluster in a fresh environment and activate kube-vip beforehand via addons.yml.
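For reference, enabling kube-vip in Kubespray is typically done with variables like the following in the addons configuration; this is a sketch based on the documented kube-vip variables, not the reporter's exact addons.yml, and the VIP address is a placeholder.
kube_vip_enabled: true
kube_vip_controlplane_enabled: true
kube_vip_arp_enabled: true
kube_vip_address: 10.12.3.10   # placeholder VIP, not taken from the report
loadbalancer_apiserver:
  address: "{{ kube_vip_address }}"
  port: 6443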
OS
Linux 5.15.0-102-generic x86_64
PRETTY_NAME="Ubuntu 22.04.4 LTS"
NAME="Ubuntu"
VERSION_ID="22.04"
VERSION="22.04.4 LTS (Jammy Jellyfish)"
VERSION_CODENAME=jammy
ID=ubuntu
ID_LIKE=debian
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
UBUNTU_CODENAME=jammy
Version of Ansible
ansible [core 2.16.7]
config file = ansible.cfg
configured module search path = ['library']
ansible python module location = venv/lib/python3.12/site-packages/ansible
ansible collection location = /Users/****/.ansible/collections:/usr/share/ansible/collections:/etc/ansible/collections:collections
executable location = venv/bin/ansible
python version = 3.12.3 (main, Apr 9 2024, 08:09:14) [Clang 15.0.0 (clang-1500.3.9.4)] (venv/bin/python)
jinja version = 3.1.4
libyaml = True
Version of Python
Python 3.12.3
Version of Kubespray (commit)
Collection (2.25.0)
Network plugin used
calico
Full inventory with variables
all:
  children:
    bastion:
      hosts:
        bastion:
          ansible_host: 10.12.3.61
          ip: 10.12.3.61
    kube_control_plane:
      hosts:
        hk8scpfra1:
          ansible_host: 10.12.3.11
          ip: 10.12.3.11
        hk8scpfra2:
          ansible_host: 10.12.3.12
          ip: 10.12.3.12
        hk8scpfra3:
          ansible_host: 10.12.3.13
          ip: 10.12.3.13
    worker_node:
      hosts:
        hk8swfra1:
          ansible_host: 10.12.3.21
          ip: 10.12.3.21
        hk8swfra2:
          ansible_host: 10.12.3.22
          ip: 10.12.3.22
        hk8swfra3:
          ansible_host: 10.12.3.23
          ip: 10.12.3.23
      vars:
        node_labels:
          node-role.kubernetes.io/worker: ""
          node.cluster.x-k8s.io/nodegroup: worker
    database_node:
      hosts:
        hk8sdbfra1:
          ansible_host: 10.12.3.31
          ip: 10.12.3.31
        hk8sdbfra2:
          ansible_host: 10.12.3.32
          ip: 10.12.3.32
        hk8sdbfra3:
          ansible_host: 10.12.3.33
          ip: 10.12.3.33
      vars:
        node_taints:
          - 'dedicated=database:NoSchedule'
        node_labels:
          node-role.kubernetes.io/database: ""
          node.cluster.x-k8s.io/nodegroup: database
    monitor_node:
      hosts:
        hk8smfra1:
          ansible_host: 10.12.3.41
          ip: 10.12.3.41
        hk8smfra2:
          ansible_host: 10.12.3.42
          ip: 10.12.3.42
        hk8smfra3:
          ansible_host: 10.12.3.43
          ip: 10.12.3.43
      vars:
        node_taints:
          - 'dedicated=monitor:NoSchedule'
        node_labels:
          node-role.kubernetes.io/monitor: ""
          node.cluster.x-k8s.io/nodegroup: monitor
    teleport_node:
      hosts:
        hk8stfra1:
          ansible_host: 10.12.3.51
          ip: 10.12.3.51
        hk8stfra2:
          ansible_host: 10.12.3.52
          ip: 10.12.3.52
        hk8stfra3:
          ansible_host: 10.12.3.53
          ip: 10.12.3.53
      vars:
        node_taints:
          - 'dedicated=teleport:NoSchedule'
        node_labels:
          node-role.kubernetes.io/teleport: ""
          node.cluster.x-k8s.io/nodegroup: teleport
    k8s_cluster:
      children:
        kube_control_plane:
        worker_node:
        database_node:
        monitor_node:
        teleport_node:
    etcd:
      children:
        kube_control_plane:
    kube_node:
      children:
        worker_node:
        database_node:
        monitor_node:
        teleport_node:
    calico_rr:
      hosts: {}
Command used to invoke ansible
ansible-playbook --inventory inventory-local.yml --become --become-user=root --private-key=~/.ssh/key_2024-04-10 cluster.yml
Output of ansible run
The task "kubeadm | Initialize first master" failed.
Anything else we need to know
The kubelet log shows a connection timeout to the API server endpoint.
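One way to confirm that symptom is to probe the API server through the VIP from a control-plane host; a diagnostic sketch, assuming the VIP is configured in loadbalancer_apiserver.address and the default API server port 6443 (the unauthenticated /healthz endpoint is readable by default). While the issue is present, this probe should time out just as the kubelet does.
# Diagnostic sketch, not part of the Kubespray playbooks.
- hosts: kube_control_plane[0]
  gather_facts: false
  tasks:
    - name: Probe the API server health endpoint through the kube-vip VIP
      ansible.builtin.uri:
        url: "https://{{ loadbalancer_apiserver.address }}:{{ loadbalancer_apiserver.port | default(6443) }}/healthz"
        validate_certs: false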