portmap: delete UDP conntrack entries on teardown #123

Open
squeed opened this issue Feb 20, 2018 · 5 comments

@squeed
Member

commented Feb 20, 2018

As observed in kubernetes/kubernetes#59033, a quick teardown and spin-up of port mappings can cause UDP "flows" to be lost due to stale conntrack entries (see the sketch after the list below).

From the original issue:

  1. A server pod exposes UDP host port.
  2. A client sends packets to the server pod thru the host port. This creates a conntrack entry.
  3. The server pod's IP changes for whatever reason, for example because the pod gets recreated.
  4. Due to the nature of UDP and conntrack, new requests from the same client to the host port keep hitting the stale conntrack entry.
  5. Client observes traffic black hole.
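
A minimal sketch of clearing such an entry by hand, assuming the conntrack CLI (from conntrack-tools) is available on the node; port 5353 is just a made-up example of the UDP host port:

    # list UDP conntrack entries whose original destination port is the host port
    conntrack -L -p udp --orig-port-dst 5353

    # delete them so the next client packet creates a fresh entry
    # that points at the new pod IP
    conntrack -D -p udp --orig-port-dst 5353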

@squeed squeed added the bug label Feb 20, 2018

@brantburnett

commented Aug 24, 2018

I have found that this bug does not seem to apply only to changing pod IPs. It can also occur when incoming traffic is being dropped because there is no pod yet, and a pod is then added that should start receiving that traffic.
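
For illustration, a hedged way to spot those entries (again assuming conntrack-tools on the host, and a hypothetical host port 5353): UDP flows that never received a reply, for example because nothing was listening when they were created, show up as [UNREPLIED] in the conntrack table.

    # stale flows created while the traffic was being dropped typically show as [UNREPLIED]
    conntrack -L -p udp --orig-port-dst 5353 | grep UNREPLIED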

Complete details and steps to reproduce: projectcalico/felix#1880

@vmendi

commented Mar 21, 2019

We are also affected by this. We lose metrics whenever the Datadog agent restarts. Are there any plans to fix it? Is any workaround available?

Thanks

@Suckzoo

commented Apr 9, 2019

It seems like this issue has been open for a year. Is there any plan to fix it? Or could you let me know how to fix this problem by hand?

@RohanKurane

commented May 9, 2019

Hello,

I think I am hitting a similar issue.

I deploy a pod with a hostPort, created using the portmap plugin (type: portmap).
When I delete the pod and re-deploy the same pod with the same name, I get the following error:

Warning  FailedCreatePodSandBox  1m    kubelet, ip-xxxxxxxxx.ec2.internal  Failed create pod sandbox: rpc error: code = Unknown desc = failed to add hostport mapping for sandbox k8s_server_default_436fb151-71b1-11e9-b2d9-128d1a3304a4_0(7993b317dc75381c0ed08a75019fde6c2fe70aaf5697af6cc1f0e85c7afddfd6): cannot open hostport 50051 for pod k8s_server_default_436fb151-71b1-11e9-b2d9-128d1a3304a4_0_: listen tcp :50051: bind: address already in use

Is this the same issue? If so, is there a workaround until this issue is fixed?
I am using the CRI-O runtime, not Docker. I do not believe I saw this error when I used Docker.

I am using the following CNI plugins

     wget -qO- https://github.com/containernetworking/cni/releases/download/${CNI_VERSION}/cni-amd64-${CNI_VERSION}.tgz | bsdtar -xvf - -C /opt/cni/bin
     wget -qO- https://github.com/containernetworking/plugins/releases/download/${CNI_PLUGIN_VERSION}/cni-plugins-amd64-${CNI_PLUGIN_VERSION}.tgz | bsdtar -xvf - -C /opt/cni/bin

Thanks

@Simwar

commented May 16, 2019

It seems like this PR aimed to fix the issue: kubernetes/kubernetes#59286
But this one is also needed so that the conntrack binary is installed in the right path, I believe: kubernetes/kubernetes#64640
On GKE, for example, you need to run the conntrack binary via the toolbox (toolbox conntrack -D -p udp).
The workaround there is to run toolbox conntrack -D -p udp after the pod is restarted to clean up the stale conntrack entries.

There is another workaround, but it is not ideal either: you can use an initContainer to run the conntrack command.

    initContainers:
      - image: <conntrack-image>
        imagePullPolicy: IfNotPresent
        name: conntrack
        securityContext:
          allowPrivilegeEscalation: true
          capabilities:
            add: ["NET_ADMIN"]
        command: ['sh', '-c', 'conntrack -D -p udp']

You need to set hostNetwork: true for this to work, so this is not ideal.
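
If deleting every UDP entry is too heavy-handed, a narrower sketch is to scope the delete to the pod's own UDP host port (50051 is just a placeholder; conntrack tends to exit non-zero when no entries match, hence the trailing || true):

    # delete only the conntrack entries for this pod's UDP host port
    conntrack -D -p udp --orig-port-dst 50051 || true

The same command can be dropped into the initContainer's command field above.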
