One node loses private and external IP address #68270
We are already discussing this problem, but I think this might be the correct place for it. What we currently believe we know:
Related issues:
Some notes:
I do have logs; if you need any, I would be happy to share them.
Do you have any connection errors when grepping the kubelet logs for "node status"?
Yes, the log is full of that. Please note that I did restart kubelet.
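For anyone else checking their nodes, here is a sketch of the kind of grep the question above refers to. The log file and the log-line format below are made up for illustration; on systemd hosts you would pipe `journalctl -u kubelet` instead of reading a file.

```shell
# Hypothetical log file and line format, only to demonstrate the grep pattern.
log=/tmp/kubelet-sample.log
printf '%s\n' \
  'E0910 10:00:01.000000 kubelet_node_status.go:366] Error updating node status, will retry' \
  'I0910 10:00:02.000000 kubelet.go:1822] Skipping pod synchronization' > "$log"

# Count log lines mentioning "node status" (case-insensitive).
grep -ci 'node status' "$log"    # prints 1 for the sample above

# On a real node you would instead run something like:
#   journalctl -u kubelet --no-pager | grep -i 'node status'
```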
/cc
Hey guys, we are getting this same problem with the OpenStack cloud provider.
We met this problem with the AWS cloud provider, and #65226 fixed it.
Thanks for the feedback. So far, we know almost all cloud providers (AWS, Azure, OpenStack and vSphere) are affected by this bug.
I'm going to add my 2c to this one. I have had issues bringing up clusters, not where the IP goes missing, but where the internal/external IPs are set to the eth0 self-assigned IPv6 address, and the only way to fix it is to delete the node and recreate it.
This is a workaround for the bug described at kubernetes/kubernetes#68270, where nodes lose their IP address. A fix is supposed to be in 1.11.6, but this should overcome the issue for the time being.
Seems to be fixed in k8s v1.11.5
@stieler-it fyi, fwiw, we just encountered it today on
It's merged into v1.11.6, not v1.11.5, just FYI.
New to Kubernetes, but I believe I may be seeing this issue as well in
Environment:
We are running into the same issue. Does anyone see this in 1.13 as well?
I'm having this issue with 1.13.1. One of the masters (HA configuration) lost the "addresses" object, where InternalIP, InternalDNS, and Hostname are listed. I restarted kubelet several times and even rebooted the machine to no avail. Does anybody know how to get it back?
I've got this issue on v1.14.1 as well. My cloud provider is AWS and two nodes have lost their internal IP address. Restarting kubelet didn't fix this problem, and neither did restarting the node.
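One way to check whether a node is affected is to look at the `addresses` list in the node's status. A minimal sketch follows; the JSON dump below is a hypothetical stand-in for a real node object, which on a live cluster you would fetch with `kubectl get node <name> -o json`.

```shell
# On a real cluster: kubectl get node <node-name> -o json > /tmp/node.json
# Hypothetical dump of a healthy node's status, for illustration only:
cat > /tmp/node.json <<'EOF'
{"status": {"addresses": [
  {"type": "InternalIP", "address": "10.0.0.12"},
  {"type": "Hostname",   "address": "node-1"}
]}}
EOF

# Print each reported address; an affected node prints nothing here,
# because its addresses list is empty or missing.
python3 -c '
import json
node = json.load(open("/tmp/node.json"))
for a in node["status"].get("addresses", []):
    print(a["type"], a["address"])
'
```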
Tried everything in kubernetes/kubeadm#203 to no avail |
Active in 1.14.3
Kubernetes version (use kubectl version):
Cloud provider or hardware configuration:
OS (e.g. from /etc/os-release):
Kernel (e.g. uname -a):
x86_64 GNU/Linux
Sorry folks, thought this was fixed in #65226 but based on recent reports it looks like it isn't. Added this to the SIG Cloud Provider backlog here: kubernetes/cloud-provider#37 and will prioritize for v1.16.
/sig cloud-provider
We haven't seen this issue for a very long time now. |
We also have not seen this issue anymore since 1.11.6. |
Thanks folks! Closing for now, please re-open if you can reproduce.
/close
@andrewsykim: Closing this issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
I'm seeing this issue in v1.18.5. We have a 6-node (3 control-plane, 3 worker) HA cluster and this happens after a restart of the nodes. Also, I see that the order of restarts affects which node loses the IPs. I want to note that the loss of IP happened even if the node was drained before the restart. Of the three control-plane nodes, whichever gets restarted last loses its internal and external IPs. For example, if cn1, cn2, and cn3 are my control-plane nodes:
Environment:
/open
@mohideen what's your CNI?
We have the latest canal (Calico v3.15.0 + Flannel v0.11.0 in host-gw mode).
We have exactly the same problem in v1.18.3 and v1.18.6. Environment:
For me it only happens when the following is true. Tested with K8s v1.14.6, 1.17.9 and 1.18.6
It does not happen when the flag is set to --cloud-provider=vsphere
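For context, here is a sketch of where that flag is typically set on a kubeadm-managed node. The file path varies by distro (`/etc/default/kubelet` on Debian/Ubuntu, `/etc/sysconfig/kubelet` on RHEL/CentOS), and the exact value depends on your setup; this is an illustration, not a recommendation from this thread.

```ini
# /etc/default/kubelet (path varies by distro)
# Extra flags passed to kubelet by the kubeadm systemd drop-in.
KUBELET_EXTRA_ARGS=--cloud-provider=vsphere
```

Restart kubelet after changing this (`systemctl restart kubelet`) for the flag to take effect.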
@AcidAngel21 your issue is likely related to kubernetes/cloud-provider-vsphere#338
@andrewsykim Thanks, that really helped. We had 3 workers and all 3 were not showing the IP. I added exclude-nics=cali*,docker*,tun* to /etc/vmware-tools/tools.conf and restarted open-vm-tools.service. I also restarted the CPI pods and the IPs are showing now.
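For reference, the workaround above corresponds to a tools.conf fragment like this. The `[guestinfo]` section name comes from the open-vm-tools configuration format; the interface patterns are the ones mentioned in this thread and should be adjusted to match your CNI's interface names.

```ini
# /etc/vmware-tools/tools.conf
[guestinfo]
# Hide CNI/container interfaces from the guest info reported to vSphere,
# so the cloud provider doesn't pick up the wrong addresses.
exclude-nics=cali*,docker*,tun*
```

As noted above, restart open-vm-tools.service (and the CPI pods) after editing this file.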
Is this a BUG REPORT or FEATURE REQUEST?:
/kind bug
/sig node
What happened:
One of my nodes, engine02, loses its internal and external IP after some time.
What you expected to happen:
How to reproduce it (as minimally and precisely as possible):
If I restart the engine02 node, the IP does show for some time, then after a while it just disappears. I found this while debugging an issue.
Anything else we need to know?:
I did not find anything in the logs, but I do not know exactly which log I should search in.
Environment:
Kubernetes version (use kubectl version):
Cloud provider or hardware configuration: Vsphere
Kernel (e.g. uname -a): Linux engine02 3.10.0-693.21.1.el7.x86_64 #1 SMP Wed Mar 7 19:03:37 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux