UDP service not working on 1 node #93791
Comments
/sig network
/triage unresolved
🤖 I am a bot run by vllry. 👩‍🔬
/assign
To clarify, just a few starter questions ...
/remove-triage unresolved
Hi, thanks for your response, jayunit.
The sonobuoy output:
coredns config:
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
I think we can close this. I suspect this issue went away; it was maybe just an infra issue. Also, we now have a way to get breadth-first data from the intra-pod tests, so it should be obvious which nodes are down if someone needs to test it again later.
/close |
@jayunit100: Closing this issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
What happened:
UDP service is not working on 1 node (the 3 others are OK).
What you expected to happen:
The UDP K8s service should work on my 4th node.
How to reproduce it (as minimally and precisely as possible):
Anything else we need to know?:
I built a 3-node cluster about 6 months ago and have just added a node.
When I try to reach coreDNS through the K8s service from my last node, it doesn't work:
But when I use the coredns pod IP from my last node, it works well:
And when I use nslookup over TCP from my last node, it works:
When I capture traffic with tcpdump during the nslookup, I can see the UDP request on my 4th node:
But I see nothing on my 2nd node, which hosts the coredns pod.
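For reference, the UDP request that tcpdump captures here is a single DNS query datagram. A minimal, standard-library-only Python sketch of how such a query is laid out on the wire (the query name and transaction ID are illustrative, not taken from the report):

```python
import struct

def build_dns_query(qname, txid=0x1234):
    """Build a minimal DNS query packet: 12-byte header plus one question
    (QTYPE=A, QCLASS=IN, recursion desired)."""
    header = struct.pack(">HHHHHH", txid, 0x0100, 1, 0, 0, 0)
    # QNAME is a sequence of length-prefixed labels ending in a zero byte.
    qname_bytes = b"".join(
        bytes([len(label)]) + label.encode() for label in qname.split(".")
    ) + b"\x00"
    question = qname_bytes + struct.pack(">HH", 1, 1)
    return header + question

def parse_qname(packet):
    """Read the QNAME back out of the question section (starts at offset 12)."""
    labels, pos = [], 12
    while packet[pos] != 0:
        length = packet[pos]
        labels.append(packet[pos + 1 : pos + 1 + length].decode())
        pos += 1 + length
    return ".".join(labels)

pkt = build_dns_query("kubernetes.default.svc.cluster.local")
print(parse_qname(pkt))  # kubernetes.default.svc.cluster.local
```

Because the query fits in one datagram, a missing capture on the coredns node means the datagram itself is not arriving, not that a reply is being dropped.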
My iptables rules related to the DNS service on my 4th node:
Environment:
Kubernetes version (use `kubectl version`):
Cloud provider or hardware configuration: OVH baremetal, created with kubeadm
OS (e.g: `cat /etc/os-release`):
Kernel (e.g. `uname -a`): Linux 4.19.0-9-amd64 #1 SMP Debian 4.19.118-2+deb10u1 (2020-06-07) x86_64 GNU/Linux
Install tools: Kubeadm
Network plugin and version (if this is a network-related bug): quay.io/coreos/flannel:v0.11.0-amd64
Others:
flannel configuration:
kubeProxy configuration:
kube flannel config:
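The reporter's attached configs are not shown here. For orientation only, the default net-conf.json shipped with the stock kube-flannel manifest for v0.11.0 looks like this (the reporter's actual values may differ):

```json
{
  "Network": "10.244.0.0/16",
  "Backend": {
    "Type": "vxlan"
  }
}
```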