Weave-Net: client_address is 10.* despite externalTrafficPolicy: Local #51014
/sig network
/kind bug
I think Weave-Net support for `externalTrafficPolicy: Local` hasn't been implemented yet — the traffic appears to be masqueraded by Weave before it reaches the pod, so this needs a fix upstream rather than in kube-proxy.
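For anyone who wants to confirm this on their own node, here's a quick check (a sketch; it assumes the default Weave Net setup, where the bridge device is named `weave`, and uses placeholder node IP/port values):

```sh
# On the node: print the address of the weave bridge. With Weave Net's default
# IP allocation range (10.32.0.0/12), the bridge on the first node is
# typically 10.32.0.1.
ip addr show weave | grep 'inet '

# Compare with what a pod reports as the client address. If the two match,
# traffic is being masqueraded to the bridge IP before it reaches the pod,
# which is exactly the symptom reported above.
curl -s http://<node-ip>:<node-port>/ | grep client_address
```

If `client_address` equals the bridge IP, the source address is being rewritten by the CNI plugin, not by kube-proxy.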
Thanks @MrHohn. Do you think this issue should be closed, or kept open until the fix lands upstream?
@kachkaev Makes sense to keep it open so folks won't open a duplicate issue. Do you mind changing the title a bit (to indicate weave-net)?
@MrHohn done. I brought up my cluster using the kubeadm tutorial, and other newbies like me are likely to take the same path. Just wondering if it's worth warning people in the weave-net tab that they might hit the same issue? What other networks might have a malfunctioning `externalTrafficPolicy: Local`? Turns out that in order to get the remote IPs, I'll have to tear down my kubeadm instance and launch a new one on the same machine (because the pod network can't simply be swapped out on a running cluster).
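In case it helps others, the teardown-and-recreate dance looks roughly like this (a sketch; exact flags depend on your kubeadm version, and the Weave install one-liner below is the one from the docs of that era — substitute the manifest of whichever network you're switching to):

```sh
# Wipe the kubeadm-managed state on the machine...
sudo kubeadm reset

# ...re-initialise the cluster from scratch...
sudo kubeadm init

# ...and install a pod network add-on again before scheduling workloads.
kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
```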
Certainly makes sense to me if we could find a proper way to warn people. I'm a bit worried about putting a warning directly on the "weave net tab". From https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/#pod-network I see 4 other CNI plugins. Putting a warning only on the weave tab may imply the others are known to work, which we haven't verified. cc @kubernetes/sig-docs-maintainers for advice.
FWIW, sent kubernetes/website#5190 to include a couple of known issues in the docs.
Any more involvement needed on the k8s side here, or should this report now be moved to a Weave issue?
Perhaps for people who'll take a similar path to mine, it'd be great to mention the caveat at https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/. Being a non-sysadmin person, I had no idea what a pod network was at the time of reading, so for me it was just another step to get through ASAP. Because the Weave network does not require passing `--pod-network-cidr` to `kubeadm init`, it's probably the option many newcomers will pick, which makes the caveat worth flagging.
Ran into the same issue.
I think I am having the same issue! Will unplug weave for a replacement and see if it fixes it. Does anyone know a network that will definitely support `externalTrafficPolicy: Local`?
Issues go stale after 90d of inactivity. Mark the issue as fresh with `/remove-lifecycle stale`. Stale issues rot after an additional 30d of inactivity and eventually close. If this issue is safe to close now please do so with `/close`. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
/remove-lifecycle stale
Issues go stale after 90d of inactivity. Mark the issue as fresh with `/remove-lifecycle stale`. Stale issues rot after an additional 30d of inactivity and eventually close. If this issue is safe to close now please do so with `/close`. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
/remove-lifecycle stale
According to the release notes, this feature (support for `externalTrafficPolicy: Local`) has been implemented in Weave Net 2.4.0.
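If I read the release notes right, the behaviour is opt-in: you enable it via the `NO_MASQ_LOCAL` environment variable on the Weave Net DaemonSet. A sketch of how that might look (container and variable names as I understand them from the Weave docs; please verify against the 2.4.0 release notes):

```sh
# Tell Weave Net (>= 2.4.0) not to masquerade traffic destined for a local pod,
# so that externalTrafficPolicy: Local can preserve the client source IP.
kubectl set env daemonset/weave-net -n kube-system -c weave NO_MASQ_LOCAL=1

# Wait for the change to roll out across all nodes.
kubectl rollout status daemonset/weave-net -n kube-system
```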
Issues go stale after 90d of inactivity. Mark the issue as fresh with `/remove-lifecycle stale`. Stale issues rot after an additional 30d of inactivity and eventually close. If this issue is safe to close now please do so with `/close`. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
Stale issues rot after 30d of inactivity. Mark the issue as fresh with `/remove-lifecycle rotten`. Rotten issues close after an additional 30d of inactivity. If this issue is safe to close now please do so with `/close`. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
This issue should be closed since it is specific to a CNI plugin and further discussion should happen in a WeaveNet issue.
/close
@cmluciano: Closing this issue.

In response to this:

> This issue should be closed since it is specific to a CNI plugin and further discussion should happen in a WeaveNet issue.
> /close
Is this a BUG REPORT or FEATURE REQUEST?:
/kind bug
(or a missing note / caveat in https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/#preserving-the-client-source-ip)
What happened:
I noticed that my ingress controller (traefik) was providing wrong values for the `X-Forwarded-For` HTTP header. It turned out that even when traefik was bypassed, I was not able to see the real remote client IP in the HTTP headers. Running `gcr.io/google_containers/echoserver:1.8` as a NodePort or LoadBalancer service consistently reported `client_address=10.32.0.1` in the requests (the same address as the `X-Forwarded-For` value assigned by traefik earlier).

I read the Using Source IP tutorial, which suggested that such a problem could exist in multi-node clusters. Setting `externalTrafficPolicy: Local` for a service of type `NodePort` or `LoadBalancer` was supposed to be the cure; unfortunately, it did not help me. Even a diligent repetition of all the steps mentioned in the tutorial did not let me see the real remote IP address.

My k8s cluster is super simple: it was created with kubeadm and sits in a single public-facing KVM. I had previously been using plain Docker on the same machine and never had issues with figuring out a remote IP inside the containers. Maybe the issue is somehow related to the weave network that I'm using, or there is something else in the setup that breaks the expected behavior?
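For reference, this is roughly how I applied the policy (a sketch; `echoserver` stands in for the actual service name):

```sh
# Switch the service to the Local policy, so kube-proxy should stop SNATing
# external traffic before it reaches the pod.
kubectl patch svc echoserver -p '{"spec":{"externalTrafficPolicy":"Local"}}'

# Confirm the field was actually set.
kubectl get svc echoserver -o jsonpath='{.spec.externalTrafficPolicy}'
```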
What can you guys suggest?
What you expected to happen:
I expected `remote_address` to be equal to my client's IP address, i.e. the same one I see when running `tcpdump -nnn -i any port $SERVICEPORT` on the server.

How to reproduce it (as minimally and precisely as possible):
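A minimal sketch of the reproduction, pieced together from the steps above (assumes a kubeadm cluster with Weave Net, and a 1.8-era kubectl where `kubectl run` creates a Deployment; `<node-public-ip>` is a placeholder):

```sh
# Deploy the echo server and expose it on a NodePort with the Local policy.
kubectl run echoserver --image=gcr.io/google_containers/echoserver:1.8 --port=8080
kubectl expose deployment echoserver --type=NodePort --port=8080
kubectl patch svc echoserver -p '{"spec":{"externalTrafficPolicy":"Local"}}'

# Look up the allocated node port.
NODE_PORT=$(kubectl get svc echoserver -o jsonpath='{.spec.ports[0].nodePort}')

# From a *remote* machine, hit the service and inspect the reported client
# address. Expected: the remote machine's public IP.
# Observed on this cluster: client_address=10.32.0.1 (the weave bridge IP).
curl -s "http://<node-public-ip>:${NODE_PORT}/" | grep client_address
```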
Anything else we need to know?:
Environment:
- Kubernetes version (use `kubectl version`): …
- Cloud provider or hardware configuration: a single-node cluster (Ubuntu KVM) at https://firstvds.ru/
- OS (e.g. from /etc/os-release): Ubuntu
- Kernel (e.g. `uname -a`): …
- Install tools: kubeadm
- Others: using weave as a pod network; `ps aux | grep -F 10.32` returns: … `ps aux | grep api` returns: …