UDP sending is bursty #386
Comments
I have seen issues caused by this where a sender on a fast port (e.g. gigabit) sending to a receiver on a slow port (e.g. 100 megabits) will experience severe packet loss because the switch buffers are overloaded and packets are dropped. Even with a bandwidth setting well below the slow port's speed, the bursts still cause loss. iPerf version 2 did not have this issue.
I have the same problem. Please fix it; because of this problem iperf3 cannot be used for UDP tests :(
The throttling algorithm seems to operate on 100 ms intervals (https://github.com/esnet/iperf/blob/master/src/iperf_api.c#L1203).
Good suggestion!
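To make the 100 ms granularity concrete: at `-b 30M`, one throttle interval is worth about 375 KB, i.e. roughly 255 packets of 1470 bytes, and on a gigabit link that whole chunk leaves the host in ~3 ms followed by ~97 ms of silence. Below is a minimal, self-contained sketch of this kind of interval-based throttling; it is not iperf3's actual code, and the target rate, packet size, destination and duration are made-up illustration values:

```c
/*
 * Minimal sketch of interval-based throttling (NOT iperf3's actual code).
 * The sender blasts datagrams as fast as it can and only backs off when
 * the average rate since the start exceeds the target; when it backs off,
 * it sleeps for a whole 100 ms interval.  Everything sent between checks
 * therefore leaves the host as one back-to-back burst at line rate.
 */
#include <stdio.h>
#include <stdint.h>
#include <time.h>
#include <unistd.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

#define TARGET_BPS  (30 * 1000 * 1000)  /* illustration value: -b 30M   */
#define PKT_LEN     1470                /* illustration value: -l 1470  */
#define INTERVAL_US 100000              /* 100 ms throttle check period */

static double now_s(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + ts.tv_nsec / 1e9;
}

int main(void)
{
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    struct sockaddr_in dst = { .sin_family = AF_INET,
                               .sin_port   = htons(5201) };
    inet_pton(AF_INET, "127.0.0.1", &dst.sin_addr);

    char buf[PKT_LEN] = { 0 };
    double start = now_s();
    uint64_t sent = 0;

    while (now_s() - start < 2.0) {          /* run for ~2 seconds */
        double elapsed = now_s() - start;
        if (elapsed > 0 && sent * 8.0 / elapsed > TARGET_BPS) {
            /* Ahead of the target: go silent for a whole interval.
             * This on/off pattern is exactly the burstiness seen in
             * the captures described below. */
            usleep(INTERVAL_US);
            continue;
        }
        /* Errors ignored: there may be nothing listening on 5201. */
        sendto(fd, buf, sizeof(buf), 0,
               (struct sockaddr *)&dst, sizeof(dst));
        sent += sizeof(buf);
    }
    printf("average rate: %.1f Mbit/s\n",
           sent * 8.0 / (now_s() - start) / 1e6);
    return 0;
}
```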
Actually, here the problem was mitigated by reducing the buffer size. The packets are not dropped at the sender, so increasing the buffer there wouldn't help. The drops occur at a crappy consumer-grade router/access point in the path, which can't handle gigabit traffic bursts. So reducing the buffer helps because packets are sent at a burst rate lower than the link speed (but the average rate still achieves the configured target). One other possible solution is to put a shaping buffer (token bucket) between the server and this crappy router, and set it to a speed higher than the average test rate but low enough for the router to handle, so the bursts are smoothed out before they reach it.
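For reference, here is roughly what that buffer-size mitigation looks like at the socket level: shrinking the sender's UDP send buffer (which is what `-w` does on the client) so the kernel blocks the writer after a handful of datagrams instead of queueing a whole burst. This is only a sketch; `set_small_sndbuf()` is a made-up helper name, and 14700 just mirrors the ten-packet value used elsewhere in this thread.

```c
/*
 * Hedged sketch of the buffer-size mitigation: shrink the UDP socket's
 * send buffer so the kernel blocks the writer after ~10 datagrams
 * instead of queueing a whole 100 ms burst.
 */
#include <stdio.h>
#include <sys/socket.h>

int set_small_sndbuf(int fd)
{
    int bufsize = 14700;                   /* ~10 datagrams of 1470 bytes */
    if (setsockopt(fd, SOL_SOCKET, SO_SNDBUF,
                   &bufsize, sizeof(bufsize)) < 0) {
        perror("setsockopt(SO_SNDBUF)");
        return -1;
    }
    /* Linux doubles the requested value and caps it at net.core.wmem_max,
     * so use getsockopt(SO_SNDBUF) afterwards if the exact size matters. */
    return 0;
}
```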
Actually even "high end" switches have this problem. I tested on a Juniper EX-2200 (office-class switch). The trend is that switching ASICs have small buffers which are already divided between all the ports for the QoS queues. |
If you are running iperf-3.1.3 or later on a recent Linux kernel, it can do fair-queue-based flow control at the socket level, which should smooth out the bursts at the device driver level.
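As a rough sketch of what this socket-level fair-queue pacing involves on Linux: the `SO_MAX_PACING_RATE` socket option (available since kernel 3.13) tells the `fq` qdisc to space out a socket's packets at a given rate. The helper below is only an illustration under those assumptions, not iperf3's actual implementation, and it assumes `fq` is installed on the outgoing interface (e.g. `tc qdisc add dev eth0 root fq`).

```c
/*
 * Rough sketch of socket-level pacing on Linux.  With the "fq" qdisc on
 * the outgoing interface, SO_MAX_PACING_RATE makes the kernel space this
 * socket's packets out at the given rate (bytes per second), so
 * application-level bursts no longer hit the wire back-to-back.
 * set_pacing_rate() is an illustrative helper name.
 */
#include <stdio.h>
#include <sys/socket.h>

#ifndef SO_MAX_PACING_RATE
#define SO_MAX_PACING_RATE 47   /* asm-generic value, Linux >= 3.13 */
#endif

int set_pacing_rate(int fd, unsigned int bytes_per_sec)
{
    if (setsockopt(fd, SOL_SOCKET, SO_MAX_PACING_RATE,
                   &bytes_per_sec, sizeof(bytes_per_sec)) < 0) {
        perror("setsockopt(SO_MAX_PACING_RATE)");
        return -1;
    }
    return 0;
}
```

For a 30 Mbit/s test, the call would be something like `set_pacing_rate(fd, 30 * 1000 * 1000 / 8)`.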
We believe we have a fix for this in #460...currently on master but not (yet) in a release. Closing. |
I found this issue while debugging some strange packet loss happening over UDP with a bandwidth setting (`-b`) much lower than the link/path capacity. I was working under the assumption that using the `-b` flag would result in nice constant-bitrate traffic, but this turned out to be wrong.

Some plots to show why. These are packet captures from the sender, as seen through the Wireshark IO graph module, with a granularity of 0.01 s.

The first plot shows the behaviour of iperf2 (`-u -b 30M`). The sending rate is quite stable over the whole test. The receiver reports a 30 Mbit/s achieved throughput, with ~0 packet loss.

Next, we have iperf3 (`-u -b 30M -l 1470`; the packet size is set in order to avoid IP fragmentation). We see a very bursty sending rate. Note the Y-axis maximum, which is one order of magnitude bigger than in the previous plot. In this case, I observe a 20-25% packet loss, with a corresponding reduction in achieved throughput, as a router in the middle starts dropping packets.

Finally, I started fiddling with the send buffer (`-w`). This last plot shows iperf3 with `-u -b 30M -l 1470 -w 14700` (the send buffer holds 10 packets). Maybe it's hard to see from the plot, but the send rate smooths out quite a bit, coming closer to a 50/50 duty cycle. The peaks are also lower (~105 vs. ~360). The most interesting observation: with these settings the packet loss disappears.

I don't know if this burstiness is desired behaviour, but I have some objections to it: