poor NAT & networking performance #7857
Comments
@hustcat How were you benchmarking?
Run netserver on one machine, and run netperf on the other 4 machines with 400 processes on each machine. This is the client script:

```bash
#!/bin/bash
num=$1
```
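Only the first lines of the script are quoted above; a minimal sketch of such a client, assuming netperf's TCP_RR test, a 60-second run, and the netserver host passed as the second argument (all assumptions, not the original script):

```bash
#!/bin/bash
# Hedged reconstruction: start $1 concurrent netperf clients against
# the netserver host given in $2. The TCP_RR test type and 60 s
# duration are assumptions.
num=$1
server=$2
for i in $(seq 1 "$num"); do
    netperf -H "$server" -t TCP_RR -l 60 &
done
wait
```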
For bridge only, see #8277: the qdisc of the veth device becomes the bottleneck, see here. As we can see, with the veth's txqueuelen set to zero, the performance loss for bridge-only is small.
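For reference, a minimal sketch of zeroing the queue length on a veth interface (the interface name is a placeholder; find the real one with `ip link`):

```bash
# Hedged example: remove the transmit queue on the container's veth
# so packets are not queued in the qdisc. veth0 is a placeholder name.
ip link set dev veth0 txqueuelen 0
```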
Is there an issue already covering native support of MACVLAN in Docker? I've read how to accomplish it here, but would like to follow the issue that could unlock this feature.
@hustcat Have you benchmarked NAT with bridge and veth txqueuelen=0? It would be interesting to see where that would fit in your benchmark above.
@unclejack Yes, I've tested this. The result (243145/s) is better than NAT with the default veth txqueuelen, but it is still very poor, because the kernel's conntrack module (which NAT uses) becomes the bottleneck.
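A hedged sketch for checking whether conntrack is under pressure (these sysctls are standard Linux, though the sensible limit varies by kernel and memory):

```bash
# Compare the live connection-tracking table size against its ceiling.
cat /proc/sys/net/netfilter/nf_conntrack_count
cat /proc/sys/net/netfilter/nf_conntrack_max
# Raising the ceiling can relieve drops, at the cost of kernel memory;
# the value below is an illustrative assumption, not a recommendation.
sysctl -w net.netfilter.nf_conntrack_max=262144
```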
Having the same issue for a container running in KVM. I am still not sure why, as the other containers' connectivity is fine. For some reason I also cannot access this container through localhost: I have to explicitly use the IP of the …
I've tested UDP bandwidth with a single netperf instance using the same configuration as your "No NAT" case, and the result is that the container gets about 2/3 of the bandwidth of host-to-host. Is that reasonable?

Container to Host:
Bridge to Host:
CPU: 100%

Linux compute2 3.10.74-rt79 #2 SMP PREEMPT RT Fri May 29 15:30:35 CST 2015 x86_64 x86_64 x86_64 GNU/Linux

I had txqueuelen = 0 set. I found the cause: I had not enabled RPS.
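For context, a minimal sketch of enabling Receive Packet Steering (RPS), which spreads receive processing across CPUs (the interface name and CPU mask are placeholder assumptions):

```bash
# Hedged example: allow CPUs 0-3 (mask 'f') to process receives for
# eth0's first RX queue. Adjust the mask and device to your machine.
echo f > /sys/class/net/eth0/queues/rx-0/rps_cpus
```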
Is this actionable, or is this just a side-effect of using veth?
@cpuguy83 This is indeed the kind of performance you get through NAT. It is still a problem, because that's the default, and some users resort to host networking to get around it.
We are implementing microservices using Docker, and the poor network performance is something we found in our performance tests. For the time being, we are using the host network instead of the default bridge network. However, I am curious whether there is any plan to fix this issue. This ticket has been open for the past year with no updates.
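For reference, a minimal example of opting a container into the host's network namespace (the image name is a placeholder):

```bash
# Hedged example: share the host's network stack directly, bypassing
# the bridge, veth pair, and NAT entirely. -p port mappings are
# ignored in this mode. 'nginx' is a placeholder image.
docker run --rm --net=host nginx
```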
@priyadarsh How are you using the network?
@cpuguy83 Hi. We have deployed a REST/JSON-based microservice as a Docker image, and things work fine when the response size is in KBs. However, there are cases where the response size exceeds 3 MB. In such cases, the download time is over a minute. We tried gzipping the response, but to our surprise it took more time. The difference in network latency between host and bridge networking is very evident at such payload sizes.
@priyadarsh I'm more interested in how you are accessing these services.
@cpuguy83 The client consuming this service (via HTTP) is running on a different host and is not deployed as a Docker image.
@priyadarsh Thank you. I would not expect the bridge interfaces or NAT to add such high overhead.
docker-proxy seems to use a lot of CPU. Therefore, if your CPU is slow, so is your networking. I really thought they could have achieved this with some netfilter tricks instead of an executable that eats CPU. Unlucky.
@pompomJuice docker-proxy is (or should be) only used for local traffic; it exists to facilitate hairpinning traffic back into the container.
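If docker-proxy CPU usage is a concern, the daemon can be told to handle hairpin traffic with iptables NAT rules instead; a hedged sketch (the flag is a real dockerd option, but its trade-offs vary across Docker versions):

```bash
# Hedged example: disable the userland proxy so published-port hairpin
# traffic is handled by iptables rules rather than docker-proxy.
dockerd --userland-proxy=false
```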
Aah, I see. Thanks @cpuguy83
The default bridge network is known to be slow (1) and potentially flaky (2). Switching to host networking is a desperate attempt to reduce flaps in our nightly package builds.
(1) moby/moby#7857
(2) moby/moby#11407
Reviewed at https://reviews.apache.org/r/50716/
Docker 1.12 has support for macvlan and ipvlan (L2 and L3), which should give even better performance than bridge networking and does not require NAT. Closing; I believe this solves the problem.
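For reference, a minimal sketch of creating a macvlan network and attaching a container to it (the subnet, gateway, parent interface, and network name are placeholder assumptions; match them to your physical network):

```bash
# Hedged example: containers on this network get addresses directly on
# the parent NIC's L2 segment, with no bridge, veth pair, or NAT.
docker network create -d macvlan \
  --subnet=192.168.1.0/24 \
  --gateway=192.168.1.1 \
  -o parent=eth0 macnet
docker run --rm --net=macnet alpine ip addr
```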
I used netperf to test network performance. These are some results:
| network | packet size | Sum Trans Rate/s |
| --- | --- | --- |
| no docker | 1 | 742020 |
| Bridge+NAT | 1 | 213721 |
| Bridge only | 1 | 432079 |
| docker host | 1 | 674737 |
As we can see, NAT's performance is very poor, and bridge-only also declines. Is there any way to improve performance while maintaining network isolation, such as SR-IOV in KVM?