Expected Behavior
I am running k3s and use the default Flannel. So far it has been working quite well, but I have come across a bizarre routing "issue" that I honestly cannot wrap my mind around. All I have done is set these options for flannel:
flannel-external-ip: true
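For completeness, a minimal sketch of how both options (the second one, node-external-ip, is mentioned in the reproduction steps below) typically land in the k3s config file; the path and the example address 192.168.1.3 are assumptions based on the node IP mentioned below:
# /etc/rancher/k3s/config.yaml (sketch; address assumed)
node-external-ip: 192.168.1.3
flannel-external-ip: true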
Current Behavior
When I run a cURL request that should hit Traefik (and which does so) from a different system on the network, the source IP is correctly identified as the remote IP. But when I do the exact same thing ON the node itself, the CNI interface's IP is reported as the source. On both hosts, router.birb.it resolves to 192.168.1.3, which is the node's IP. But only on the host with k3s/flannel does the "wrong" IP get reported as the client.
Now, granted, my NAT-fu is quite bad, so I tried to google the issue and got as far as seeing that others have used host-gw mode. But I plan to add another node connected via Headscale/Tailscale, which to my knowledge does not really do Layer 2, so this may not be an option. What I did find along the way is this:
What sticks out to me are the LOCAL lines, which to me indicate that local traffic is routed differently than external traffic. But this also means that the origin IP is lost - or rather, is not what one would expect to be there.
Possible Solution
Honestly, I have no idea. I am quite literally lost.
Steps to Reproduce (for bugs)
Deploy k3s and use the node-external-ip and flannel-external-ip options. Also, modify the Traefik config to set externalTrafficPolicy: Local (see the sketch after these steps).
Bring up a service like whoami. Not necessary, but it might help.
On the k3s node, attempt to curl something on the external IP and observe the Traefik logs.
On another adjacent node - on the same network - run the same command.
You should now see two different IPs reported as the client IPs in Traefik.
(This is an external name service I use to reverse-proxy to my modem's UI - it was the smallest deployment to experiment with.)
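For step 1, a sketch of roughly how externalTrafficPolicy can be set on the packaged Traefik through a HelmChartConfig; the manifest path and the chart values layout are assumptions and may differ per setup:
# /var/lib/rancher/k3s/server/manifests/traefik-config.yaml (sketch)
apiVersion: helm.cattle.io/v1
kind: HelmChartConfig
metadata:
  name: traefik
  namespace: kube-system
spec:
  valuesContent: |-
    service:
      spec:
        externalTrafficPolicy: Local   # preserve the client source IP instead of SNATing it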
Context
I am trying to set up Traefik middlewares to configure forward authentication and IP range blacklisting. Everything on my local network should always be allowed (match: Host(...) && ClientIP("192.168.1.0/24")) and everything else should require authentication through a middleware. This already kind of works; except I cannot access anything on the node itself, which is a bummer and might become a problem long-term...
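A rough sketch of what that split could look like as Traefik CRDs; the names, the host, and the auth endpoint are hypothetical, and older Traefik versions use the traefik.containo.us/v1alpha1 API group instead:
# Sketch: LAN clients match the ClientIP rule, everyone else is sent through forwardAuth
apiVersion: traefik.io/v1alpha1
kind: Middleware
metadata:
  name: whoami-auth
spec:
  forwardAuth:
    address: https://auth.example.invalid/verify   # hypothetical auth endpoint
---
apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
  name: whoami
spec:
  entryPoints:
    - websecure
  routes:
    - match: Host(`whoami.birb.it`) && ClientIP(`192.168.1.0/24`)   # local network, no auth
      kind: Rule
      services:
        - name: whoami
          port: 80
    - match: Host(`whoami.birb.it`)   # everyone else goes through the auth middleware
      kind: Rule
      middlewares:
        - name: whoami-auth
      services:
        - name: whoami
          port: 80
Traefik gives the longer rule the higher default priority, so LAN clients hit the first route and skip the middleware - which is exactly why the curl-from-the-node case matters: with the CNI IP as the client, the ClientIP("192.168.1.0/24") match never fires.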
Your Environment
Flannel version: 0.24.2
Backend used (e.g. vxlan or udp): vxlan, as per default
Something else I noticed: when I request through the VPN, I only see the ServiceLB IP. Not that this is directly relevant here, but it is related at the very least.
# On remote VPS:
root@birb ~# tailscale ip -4 cluserboi
100.64.0.2
root@birb ~# curl --resolve router.birb.it:443:100.64.0.2 --head -L https://router.birb.it
HTTP/2 200
...
# On k3s node's Traefik log:
10.42.0.31 - - [10/May/2024:11:06:51 +0000] "HEAD / HTTP/2.0" 200 0 "-" "curl/7.81.0" 1882 "proxy-router-proxy-tr-2ed3520c3d71313735b6@kubernetescrd" "http://192.168.1.1:80" 4ms
# And, the IP to the pod:
# kubectl get -A pods -o=jsonpath="{range .items[*]}{.metadata.name}{'='}{.status.podIP}{','}{end}" | tr "," "\n" | grep 31
svclb-traefik-2ae61580-zlqp8=10.42.0.31
The VPN is configured with 100.64.0.0/24, so it is an entirely different subnet.
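A quick way to confirm that the policy actually landed on the Service (a sketch; the kube-system namespace of the packaged Traefik is assumed) - externalTrafficPolicy: Cluster instead of Local would explain the source IP being rewritten:
# Sketch: confirm the traffic policy on the Traefik service (namespace assumed)
kubectl -n kube-system get svc traefik -o jsonpath='{.spec.externalTrafficPolicy}{"\n"}'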
Are you using tailscale to connect the nodes of the cluster? I think that this is not a flannel bug; your setup probably needs specific configuration related to what you are trying to do. Considering that the iptables rules you shared are not part of Flannel, could you check your routing table? I think that locally the node knows the pod that is hosting the service by its own IP, and the routing process will use one of the host IPs that is on that subnet.
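For the routing check, something along these lines would help (a sketch; 192.168.1.3 is the node IP taken from the report above):
# Sketch: show how the node routes traffic addressed to its own external IP
ip rule show                                   # policy rules; the "local" table handles the host's own addresses
ip route show table local | grep 192.168.1.3   # the local-table entry for the node IP
ip route get 192.168.1.3                       # expected: "local ... dev lo", i.e. the request never leaves the host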