containers in docker 1.11 do not get the same MTU as host #22297
BUG REPORT INFORMATION
Additional environment details (AWS, VirtualBox, physical, etc.):
Docker running in a CentOS 7.2 VM on RedHat RDO (Liberty) OpenStack cloud.
Steps to reproduce the issue:
Describe the results you received:
Container interface info:
Requests originating from the container with packets larger than 1400 bytes are dropped.
Describe the results you expected:
I would expect functionality on par with pre-1.10 Docker, where networking worked without user intervention: no need to set the MTU on the daemon (since there is no sysconfig or other environment configuration mechanism, this literally means editing the service script), to edit the Docker-related iptables rules, or to adjust the MTU on the container.
Additional information you deem important (e.g. issue happens only occasionally):
The other tickets referenced have identified a couple of workarounds, including setting the --mtu flag on the docker daemon. That didn't work for us: after dropping the container and image, adjusting the daemon args, and starting the container again, the MTU in the container remained at 1500 while the host's was 1400.
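For anyone trying the same thing, a sketch of what that daemon-level workaround looks like (the 1400 value is just our host MTU used as an example; the service file location on CentOS 7 may differ):

```
# Pass the MTU to the daemon directly ("docker daemon" on 1.11, "dockerd" on 1.12+);
# 1400 matches the host interface in this environment.
dockerd --mtu=1400

# Or, on engines that support /etc/docker/daemon.json, persist it there and
# restart the daemon:
#   {
#     "mtu": 1400
#   }
```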
Our workaround involves inserting an iptables rule to mangle the packets in transit between the host and container:
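The exact rule isn't reproduced above; a typical MSS-clamping rule of that kind (illustrative only, not necessarily the precise rule used here) looks like:

```
# Clamp the TCP MSS on SYN packets crossing the FORWARD chain so sessions
# between the container and the outside negotiate segments that fit the
# 1400-byte path MTU.
iptables -t mangle -A FORWARD -p tcp --tcp-flags SYN,RST SYN \
  -j TCPMSS --clamp-mss-to-pmtu
```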
I wouldn't consider any of the workarounds referenced to be a fix for this issue. In my opinion, the fix for the issue is to have the container MTU match host interface MTU upon container creation without user intervention.
Hm, that wasn't clear from your previous comment; how did you set the
```
docker run --rm debian:jessie sh -c "ip a | grep mtu"
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
2: sit0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN group default qlen 1
3: ip6tnl0@NONE: <NOARP> mtu 1452 qdisc noop state DOWN group default qlen 1
4: ip6gre0@NONE: <NOARP> mtu 1448 qdisc noop state DOWN group default qlen 1
11: eth0@if12: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 300 qdisc noqueue state UP group default
```
Setting the container MTU to the host MTU is a hack. We shouldn't be implementing hacks to solve network issues.
I'm also seeing this issue in a Docker-on-OpenStack environment. The host instance has MTU 1400, Docker brings up docker0 with MTU 1500, network performance suffers.
Agreed, but reverting commit fd9d7c0 would allow many/most common use cases (container host with a single network interface) to work again without user intervention to set the MTU.
Yes, I have a system demonstrating this available to me now.
The hypervisor host has MTU 1500. Instance traffic is VXLAN encapsulated which means the instances get a lower MTU set, i.e. the 1400 shown here. (This is similar to the IPSEC encapsulation scenario at the start of this issue.)
Docker containers set the MTU to 1500:
and large transfers hang after the initial handshake:
tcpdump shows no ICMP fragmentation-needed packets on any of the interfaces.
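For reference, a filter along these lines can be used to watch for those messages (the interface name is an example):

```
# ICMP type 3 / code 4 = "destination unreachable, fragmentation needed";
# with working path MTU discovery these should appear when oversized packets are sent.
tcpdump -ni eth0 'icmp[icmptype] == 3 and icmp[icmpcode] == 4'
```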
If I set
Let me know if there's anything else I can provide to help.
Hanging isn't the same as degradation. Hanging is almost always indicative of something blocking ICMP packets. I've tested this scenario and was never able to replicate any issues (on a vanilla box, with no tweaks). How customized is the host? Your prompt looks like you're running Ubuntu Trusty. Is this stock Trusty or a modified distribution, and where did it come from?
I've seen the exact situation with Docker on OpenStack. Performance suffers, building an image with apt-get upgrade can take hours. We were setting the --mtu flag manually and restarting the docker server but with Docker 1.12 and docker-machine, it's become problematic.
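(In case it helps anyone in the same situation: docker-machine can pass the engine MTU at create time via --engine-opt, which avoids editing and restarting the daemon by hand. The driver, machine name, and value below are just examples and may not fit every setup.)

```
# Create the machine with the engine MTU already set, instead of reconfiguring
# and restarting the daemon afterwards.
docker-machine create --driver openstack --engine-opt mtu=1400 docker-host
```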
... but doing a
If I run everything on the host network, everything works perfectly fine... but that is painful.
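(Host networking as a sanity check looks like this; the image and command are just examples.)

```
# With --net=host the container shares the host's interfaces and therefore its MTU,
# so the bridge MTU mismatch never comes into play (at the cost of network isolation).
docker run --rm --net=host debian:jessie ip link show eth0
```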
I'm not sure if docker-compose is doing something wrong or dockerd itself, or maybe we are supposed to configure these additional networks ourselves in detail.
EDIT: This is explained in the post below
(Running in OpenStack, where the MTU is 1450)
Simple workaround for OpenStack and compose 2: (I will use 1450 MTU in this example)
Make sure to pass the correct
EDIT: When using compose 2, the
We can fix the additional networks by overriding the default network in the compose file (version 2)
```
networks:
  default:
    driver: bridge
    driver_opts:
      com.docker.network.driver.mtu: 1450
```
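(After recreating the stack you can confirm the option took effect on the compose-created network; "myproject" is a placeholder for the compose project name.)

```
# The driver option should show up on the default network created by compose.
docker network inspect myproject_default | grep mtu
```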
You probably have to manually delete the old network created by compose (
This is equivalent to doing a
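The command itself wasn't captured above; presumably it is something along these lines, using the same driver option as in the compose override (the network name is an example):

```
# Manually create a bridge network with the MTU driver option, roughly what
# compose does under the hood for the override above.
docker network create --driver bridge \
  --opt com.docker.network.driver.mtu=1450 mynet
```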
Just overriding the default network to your manually created external network is also easy.
```
networks:
  default:
    external:
      name: <network_name>
```
When creating bridge networks manually, do not get confused by the initial mtu set on the interface. It will report an mtu of 1500, but as soon as you run containers, the values will adjust.
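(A quick way to see that behaviour; the bridge and network names are examples and assume a network created as above.)

```
# Freshly created, the bridge interface still reports the default 1500...
ip link show br-<network-id>

# ...but a container attached to the network gets the configured value (1450 here),
# and the bridge adjusts once containers are running on it.
docker run --rm --net=mynet debian:jessie ip link show eth0
```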
The more confusing part was that the engine reference docs list the wrong parameter name for specifying the bridge MTU (found this related issue #24921):
It took a fair amount of digging to finally get this working. I'm sure this can be translated to other ways of configuring networks; I just used the default network bridge for simplicity. As long as you find the right values for the driver you are using, you should be fine.
NOTE: This test was done on Ubuntu Trusty. There might be some underlying issues related to the network configuration that needs to be solved. All I know is that the instance gets its MTU of 1450 through dhcp and that's about it.
From @jxstanford's comment, if I understood correctly, the original problem reported by this issue turned out not to be a real problem:
Therefore I believe this issue should be closed.
@einarf Regarding your unexpected mtu value, please be aware the docker daemon
We do not have a
The same should be possible in your compose file via
As usually happens, other issues piled onto this one. In this case they were due to the incorrect assumption that the
Given the original request won't be addressed and the other two sub-issues were addressed, we decided to close this issue.
Besides this, there in fact seems to be an outstanding issue about path MTU discovery, but there are already more specific issues opened for that.