v1.7 backports 2020-04-07 #10884
Conversation
never-tell-me-the-odds Edit: need to update go-bindata

Ah, caveat for backports for the moment: you'll need to manually run […]
force-pushed from 51b8429 to b3305a8
never-tell-me-the-odds Edit: my version of […]
force-pushed from b3305a8 to 90eaad9
never-tell-me-the-odds

[ upstream commit a5e289d ] Newer microk8s requires the yaml arg to be specified as `--yaml`, not `-o yaml`. This breakage was backported to all microk8s release series. Fix the target.

Related: canonical/microk8s#1042

Signed-off-by: Joe Stringer <joe@cilium.io>
Signed-off-by: Chris Tarazi <chris@isovalent.com>
force-pushed from 2492615 to 0bdeae0
Not sure what's happening here. I've updated the bindata and ensured that I am using […]
@christarazi any chance you have other files lying around the […]
Hmm, nope. It's a clean repo. I uninstalled all of Golang on my machine and reinstalled it. I even tried cloning this tree, and I still get the same SHA. I also tried cloning a brand-new repo and switching to […]
Maybe it's the go-bindata binary? Here's my version: […]
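The SHA comparison being attempted in this thread reduces to one property: the same generator binary over the same input must produce byte-identical output, so comparing a single checksum per environment is enough to tell whose toolchain diverges. A minimal, self-contained sketch of that comparison (the file contents below are stand-ins, not real go-bindata output, and the paths are illustrative):

```shell
# Stand-ins for the generated file from two environments; with identical
# generator binaries and inputs these would be byte-identical.
mkdir -p /tmp/bindata-demo
printf 'package bindata // generated\n' > /tmp/bindata-demo/env-a.go
printf 'package bindata // generated\n' > /tmp/bindata-demo/env-b.go
# Count distinct checksums: 1 means both environments produced the same bytes.
sha256sum /tmp/bindata-demo/env-a.go /tmp/bindata-demo/env-b.go \
  | awk '{print $1}' | sort -u | wc -l
```

If the count is greater than 1, the generated file differs, and the next thing to compare is the generator binary itself rather than the repo contents.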
force-pushed from 0bdeae0 to 25019e8
Switched my […]

The only difference I see between yours and mine is the Golang toolchain version. However, my toolchain version matches Travis's version. But we don't know what version of […]
@christarazi Are you using Cilium's fork of go-bindata? I generated the go-bindata file from inside the dev VM to avoid any such issues when I did my round of backports.
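Given the fork-vs-upstream confusion above, a quick sanity check before regenerating is to confirm which go-bindata binary is first on PATH. This is a generic sketch, not anything Cilium-specific; the expectation that a fork and the upstream tool can emit different bytes comes from the discussion:

```shell
# Print which go-bindata would be invoked, if any. Different absolute paths
# (or one environment hitting the fallback branch) across machines is a hint
# that different binaries are generating the file.
if command -v go-bindata >/dev/null 2>&1; then
  echo "using: $(command -v go-bindata)"
else
  echo "go-bindata not on PATH"
fi
```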
[ upstream commit b9ef15f ] While testing recently, I've noticed the case where we start to use the 169.254.42.1 loopback address for requests from outside:

    # tcpdump -i any port 30042 or 80 -n
    [...]
    09:50:15.835618 IP 192.168.178.28.80 > 169.254.42.1.32882: Flags [S.], seq 689888661, ack 2221057376, win 65160, options [mss 1460,sackOK,TS val 3644467179 ecr 3644467179,nop,wscale 7], length 0
    09:50:16.863069 IP 192.168.178.28.80 > 169.254.42.1.32882: Flags [S.], seq 689888661, ack 2221057376, win 65160, options [mss 1460,sackOK,TS val 3644468207 ecr 3644467179,nop,wscale 7], length 0
    [...]

This can happen if the backend IP (192.168.178.28) is the same IP that is doing the request to the frontend:

    # ./cilium/cilium service list
    ID   Frontend               Service Type   Backend
    1    192.168.178.29:30042   NodePort       1 => 192.168.178.28:80

Then we're hitting the codepath where we replace the 192.168.178.28 src address with 169.254.42.1 (IPV4_LOOPBACK) and try to send it back out the NodePort device, which is just wrong. That code was only intended for the node-local Pod-to-Pod ClusterIP handling we had before socket LB. For NodePort requests from outside the node, this should not be done. We can handle this situation just fine in the case of BPF-based SNAT, and in the case of DSR it is expected not to work, so we should not try to do any special NAT'ing. For east-west traffic on Cilium-managed nodes, we simply use the socket LB for everything anyway, so we won't run into this corner case either.

There is an unused DISABLE_LOOPBACK_LB define to compile this out. Fix it by specifying this define from init.sh.

Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: Chris Tarazi <chris@isovalent.com>
force-pushed from 25019e8 to e9471e8
never-tell-me-the-odds Thanks @pchaigno, that was it! I wasn't aware we had a fork of it. Edit: provisioning failure
never-tell-me-the-odds […]
Looks like a provisioning error. test-with-kernel […]
@christarazi Ah, right! Forgot this was a backport :/
Once this PR is merged, you can update the PR labels via: […]