cri-o needs cni workaround #1804
Comments
Could you verify whether it works when you try to access another host IP that is not localhost? For example, it works for me when I use the local IP. Using the firewall plugin from containernetworking/plugins#75 doesn't seem to make a difference. /cc @dcbw
@giuseppe I tested on AWS and was able to reach my webserver through the AWS private IP address. So it only hangs on localhost/127.0.0.1, like you said.
@mike-nguyen can you run iptables-save on the node and post the results somewhere?
@mike-nguyen can you run tcpdump -vvvne -i cni0 and then do the curl localhost:80?
@dcbw tcpdump -vvvne -i cni0 gives me no output when I curl localhost:80.
@mike-nguyen OK, so clearly the traffic isn't even getting to cni0. That helps narrow things down.
@dcbw I got it to work with the following changes (tested manually):

```diff
@@ -31,9 +31,10 @@
 -A PREROUTING -m comment --comment "kube hostport portals" -m addrtype --dst-type LOCAL -j KUBE-HOSTPORTS
 -A OUTPUT -m comment --comment "kube hostport portals" -m addrtype --dst-type LOCAL -j KUBE-HOSTPORTS
 -A POSTROUTING -m comment --comment "kubernetes postrouting rules" -j KUBE-POSTROUTING
+-A POSTROUTING -d 10.88.0.0/16 -m comment --comment "name: \"crio-bridge\" id: \"bf266838ac8d1fc99c6efee82fbe9333d55ae3eecec3248e9e2f568561d1ad8d\"" -j CNI-08a451cf4b6dd952bc70028c
 -A POSTROUTING -s 10.88.0.0/16 -m comment --comment "name: \"crio-bridge\" id: \"bf266838ac8d1fc99c6efee82fbe9333d55ae3eecec3248e9e2f568561d1ad8d\"" -j CNI-08a451cf4b6dd952bc70028c
 -A POSTROUTING -s 127.0.0.0/8 -o lo -m comment --comment "SNAT for localhost access to hostports" -j MASQUERADE
--A CNI-08a451cf4b6dd952bc70028c -d 10.88.0.0/16 -m comment --comment "name: \"crio-bridge\" id: \"bf266838ac8d1fc99c6efee82fbe9333d55ae3eecec3248e9e2f568561d1ad8d\"" -j ACCEPT
+-A CNI-08a451cf4b6dd952bc70028c -d 10.88.0.0/16 -m comment --comment "name: \"crio-bridge\" id: \"bf266838ac8d1fc99c6efee82fbe9333d55ae3eecec3248e9e2f568561d1ad8d\"" -j MASQUERADE
 -A CNI-08a451cf4b6dd952bc70028c ! -d 224.0.0.0/4 -m comment --comment "name: \"crio-bridge\" id: \"bf266838ac8d1fc99c6efee82fbe9333d55ae3eecec3248e9e2f568561d1ad8d\"" -j MASQUERADE
 -A KUBE-HOSTPORTS -p tcp -m comment --comment "k8s_nginx-rhel7_default_6cce1e24d665240f8d2e114edbf52189_3_ hostport 8081" -m tcp --dport 8081 -j KUBE-HP-WWG7ZHMF34UGM273
 -A KUBE-HP-WWG7ZHMF34UGM273 -s 10.88.0.59/32 -m comment --comment "k8s_nginx-rhel7_default_6cce1e24d665240f8d2e114edbf52189_3_ hostport 8081" -j KUBE-MARK-MASQ
```
@dcbw does the change look fine?
@dcbw, it looks like giuseppe has something that may work. What do you think?
If we can get this looked at soon, I'd appreciate it. This work blocks a card @mike-nguyen is trying to close out.
You need to masquerade, but only when the source address is 127.0.0.1. Check out https://github.com/containernetworking/plugins/tree/master/plugins/meta/portmap#snat-masquerade
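The rule shape being described can be sketched as follows (illustrative only; the portmap plugin manages its own chains rather than hooking POSTROUTING directly, and the bridge subnet is taken from the crio-bridge config in this thread). Without the source rewrite, the container would reply to 127.0.0.1, which resolves to the container's own loopback, so the response never returns to the host:

```
# Illustrative sketch, not portmap's exact chain names:
# masquerade only hostPort traffic whose source is loopback.
-A POSTROUTING -s 127.0.0.1/32 -d 10.88.0.0/16 -j MASQUERADE
```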
@squeed how do we get this fixed in the CNI plugins?
The CNI plugins already handle this; what version and what CNI configuration are you using?
@squeed Sorry, we swapped Jira boards and the card got lost in the migration. Let me know if you need any more information.
These are the configs in 100-crio-bridge.conf, 200-loopback.conf, and 87-podman-bridge.conflist:
I'm seeing this error with:
Any solution available?
Any workaround so far?
Hello. I'm not sure I'm having the same issue, but looking at what podman does and the differences, I found that adding the following iptables rule makes it work:

```diff
 -A KUBE-HP-GSFDA2UGA5JYS6WS -s 10.85.1.12/32 -m comment --comment "k8s_nginx-sandbox_default_hdishd83djaidwnduwk28bcsb_1_ hostport 8089" -j KUBE-MARK-MASQ
+-A KUBE-HP-GSFDA2UGA5JYS6WS -s 127.0.0.1/32 -m comment --comment "k8s_nginx-sandbox_default_hdishd83djaidwnduwk28bcsb_1_ hostport 8089" -j KUBE-MARK-MASQ
```

My understanding is that
And that the mark allows the following rules to be run:
and in particular the last one. I don't fully understand what it does, but it may be because 127.0.0.1 also exists inside the container: without masquerading the source IP, the response packets would never leave it. Hope it helps.
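For context, the mark-and-masquerade pair referenced above typically looks like this in kube-style rule sets (the 0x4000 value shown is the conventional kube-proxy mark; treat the exact values as illustrative):

```
# KUBE-MARK-MASQ stamps the packet with a firewall mark...
-A KUBE-MARK-MASQ -j MARK --set-xmark 0x4000/0x4000
# ...and a POSTROUTING hook later masquerades anything carrying that mark:
-A KUBE-POSTROUTING -m mark --mark 0x4000/0x4000 -j MASQUERADE
```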
Hi again. Looking at the source code, I would add the following here: https://github.com/cri-o/cri-o/blob/main/internal/hostport/hostport_manager.go#L151

```go
// SNAT if the traffic comes from the host through localhost (127.0.0.1)
if pm.HostIP == "" || pm.HostIP == "0.0.0.0" || pm.HostIP == "::" || pm.HostIP == "127.0.0.1" {
	writeLine(natRules, "-A", string(chain),
		"-m", "comment", "--comment", fmt.Sprintf(`"%s hostport %d"`, podFullName, pm.HostPort),
		"-s", "127.0.0.1",
		"-j", string(iptablesproxy.KubeMarkMasqChain))
}
```

I'm sorry, but I am unable to test this myself.
Thanks for looking into it! Would you be interested in opening a PR, @amartinunowhy? cc @aojea
/assign
A friendly reminder that this issue had no activity for 30 days.
/unassign Sorry, I didn't have time for this. squeed's reference to the CNI plugins seems to match the proposed solutions. I'm happy to act as a reviewer, but I won't be able to send PRs for this.
It doesn't seem like this is a very high-priority bug (there aren't many people looking for a fix), and it doesn't seem we have the bandwidth to fix it. If someone else sees the bug and wants it fixed, please reopen. Otherwise, I'm leaving it closed.
Description
Port forwarding from the container to the host doesn't seem to be working; I need to run with hostNetwork: true to work around it. I am using kubelet to deploy a static nginx pod with containerPort: 80 and hostPort: 80. When kubelet is using Docker, I can curl http://localhost:80 and get the nginx default page. When I switch kubelet to run with cri-o, I can see the container running, but curl http://localhost:80 does not respond. This is similar to an issue I was seeing with exposing ports using podman; cri-o might need a version of containers/podman#1431.
mheon mentioned:
Steps to reproduce the issue:
1. Start kubelet:
/usr/bin/hyperkube kubelet --config /etc/kubernetes/kubeletconfig --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --runtime-request-timeout=10m
4. curl http://localhost:80
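The static pod used for this reproduction would look roughly like the following. This is a hypothetical sketch matching the nginx pod with containerPort: 80 and hostPort: 80 described above; the pod name and image tag are illustrative:

```
apiVersion: v1
kind: Pod
metadata:
  name: nginx-hostport   # hypothetical name
spec:
  containers:
  - name: nginx
    image: nginx          # illustrative image
    ports:
    - containerPort: 80
      hostPort: 80
```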
Describe the results you received:
Cannot access the nginx container through the exposed host port
Describe the results you expected:
Able to access the nginx container through the exposed host port
Additional information you deem important (e.g. issue happens only occasionally):
Always happens
Output of crio --version:
Additional Environment Information
RHCOS 4.0.5796 in AWS