Kubernetes service IP / container DNAT broken in 1262.0.0 #1743

Closed
bison opened this Issue Jan 2, 2017 · 4 comments
@bison
Member
bison commented Jan 2, 2017

Issue Report

Under Kubernetes, pod-to-pod communication via a service IP within a
single node is broken in the latest CoreOS alpha (1262.0.0). Downgrading
to the previous alpha resolves the issue. The issue is not specific to
Kubernetes, however.

Bug

CoreOS Version

$ cat /etc/os-release
NAME=CoreOS
ID=coreos
VERSION=1262.0.0
VERSION_ID=1262.0.0
BUILD_ID=2016-12-14-2334
PRETTY_NAME="CoreOS 1262.0.0 (Ladybug)"
ANSI_COLOR="1;32"
HOME_URL="https://coreos.com/"
BUG_REPORT_URL="https://github.com/coreos/bugs/issues"

Environment

Observed on both bare metal and Vagrant + VirtualBox locally.

Expected Behavior

In Kubernetes, I can define a service which targets a set of pods and
makes them reachable via a virtual IP. With kube-proxy running in
iptables mode, Kubernetes will configure NAT rules to redirect traffic
destined for that virtual IP to the individual pods. That should work
for traffic originating from any node in the cluster.
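For context, the rules kube-proxy installs in iptables mode are DNAT rules of roughly this shape. This is a simplified sketch with made-up addresses; the real rules are organized into KUBE-SERVICES / KUBE-SEP-* chains:

```shell
# Illustrative only: service IP 10.3.0.100:8080 redirected to a single
# pod endpoint 10.2.0.5:8080. Real kube-proxy rules jump through
# per-service and per-endpoint chains instead.
iptables -t nat -A PREROUTING -d 10.3.0.100/32 -p tcp --dport 8080 \
    -j DNAT --to-destination 10.2.0.5:8080
iptables -t nat -A OUTPUT -d 10.3.0.100/32 -p tcp --dport 8080 \
    -j DNAT --to-destination 10.2.0.5:8080
```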

Actual Behavior

With a cluster running on the latest alpha (1262.0.0), pods cannot be
reached via their service IP when the traffic originates from another
pod on the same node as the destination.

The following does work on the same node:

  • Host to service IP
  • Pods running in the host net namespace to service IP

Reproduction Steps

This isn't actually specific to Kubernetes. I have an alpha-nat
branch of coreos-vagrant that will start a VM with a Docker container
and iptables rules similar to what Kubernetes uses -- along with trace
rules for debugging.
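For anyone reproducing by hand: iptables can log a packet's path through every table and chain with raw-table TRACE rules of this shape (output lands in the kernel log as xtables trace messages):

```shell
# Trace packets headed for the test service port in both directions of
# entry into the stack.
iptables -t raw -A PREROUTING -p tcp --dport 8080 -j TRACE
iptables -t raw -A OUTPUT -p tcp --dport 8080 -j TRACE
```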

Check out that branch and run either ./start-broken.sh or
./start-working.sh, then vagrant ssh into the VM and run the
following:

# Works from host
core@core-01 ~ $ curl http://10.3.0.100:8080
CLIENT VALUES:
client_address=10.0.2.15
...


# Fails from another container on broken version
core@core-01 ~ $ docker run --rm busybox wget -O- -T5 http://10.3.0.100:8080
Connecting to 10.3.0.100:8080 (10.3.0.100:8080)
wget: download timed out

You could also use cloud-config in the user-data file on that branch
on any other platform.

Another option is to launch a Kubernetes cluster with a single
schedulable node and start a pod with an accompanying service. Other
pods on the same node will not be able to communicate using the
service IP. I've been using this for testing.
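A minimal version of that Kubernetes setup would be something like the following (names and image are illustrative; any pod with a listening port works):

```shell
# Hypothetical deployment/service names for illustration.
kubectl run echoheaders \
    --image=gcr.io/google_containers/echoserver:1.4 --port=8080
kubectl expose deployment echoheaders --port=8080
# Then, from any other pod scheduled on the same node:
kubectl exec some-other-pod -- wget -O- -T5 http://<service-ip>:8080
```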

Other Information

Starting the same echoheaders container under rkt with the default
ptp networking and configuring similar NAT rules seems to work as
expected from other containers, so this might only be happening when
attaching containers to a bridge.
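If this only bites bridged containers, one general diagnostic (not something I've confirmed is at fault here) is to check whether bridged traffic is being passed through the iptables hooks at all:

```shell
# Requires the br_netfilter module to be loaded; a value of 1 means
# bridged IPv4 traffic traverses the iptables chains.
sysctl net.bridge.bridge-nf-call-iptables
```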

coreos/coreos-overlay#2300 landed in 1262.0.0 -- I tried not marking
the interfaces unmanaged with overrides in /etc/systemd/network/, but
it didn't seem to help.

@martynd
martynd commented Jan 3, 2017

I encountered the same issue doing a straight upgrade from 1185.2.0 to 1262.0.0.

Everything worked perfectly except the aforementioned service connectivity issues from within containers (host to service, other machine to service, direct IP from a container, etc. all worked).

Rolling back to 1185.2.0 worked after deleting /var/lib/docker/network/files/local-kv.db.

@bison
Member
bison commented Jan 4, 2017

If I didn't screw up the git bisect, I think this was introduced in torvalds/linux@e3b37f1. That patch seems to have caused a few issues which have since been fixed. The tip of master, 4.10.0-rc2-0f64df3, is working as expected for me.
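The bisect itself is the usual procedure; endpoints below are illustrative stand-ins for the kernels shipped in the last working and first broken releases, with a build/boot/reproduction cycle at each step:

```shell
# Illustrative endpoints -- substitute the actual kernel versions from
# the last good and first bad CoreOS releases.
git bisect start
git bisect bad  v4.9     # kernel from the broken release (illustrative)
git bisect good v4.7     # kernel from the working release (illustrative)
# Build the kernel, boot it, run the reproduction above, then mark:
git bisect good          # ...or: git bisect bad
git bisect reset         # when finished
```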

@crawford crawford self-assigned this Jan 5, 2017
@crawford crawford referenced this issue in coreos/coreos-overlay Jan 5, 2017
Merged

sys-kernel/coreos-*: bump to 4.8.15 #2353

@crawford crawford added this to the CoreOS Alpha 1284.0.0 milestone Jan 5, 2017
@crawford
Member
crawford commented Jan 5, 2017

Should be fixed by coreos/coreos-overlay#2353.

@crawford crawford closed this Jan 5, 2017
@jbw976 jbw976 referenced this issue in coreos/coreos-kubernetes Jan 5, 2017
Open

DNS resolution failing from pods #794

@crawford
Member

This is still present in 4.9.3.
