
Too many gettimeofday calls #210

Closed
bmah888 opened this issue Sep 23, 2014 · 5 comments

bmah888 commented Sep 23, 2014

From a comment by gallatin@gmail.com to the iperf-users@ list:

I was looking at the iperf3 source code, and it's ... unfortunate ... that iperf3 still seems to be nearly as much of a gettimeofday() benchmark as it is a network benchmark.

For example, if I want to send UDP as fast as possible, I'll do something like:
iperf -c 172.18.126.63 -u -b 9999999M

If I use strace -c on this, I see:

% time     seconds  usecs/call     calls    errors syscall
------ ----------- ----------- --------- --------- ----------------
 42.92    0.238950           1    287535           gettimeofday
 41.96    0.233611           3     71567           write
 14.87    0.082772           1     72900           select

The pattern seems to be:

gettimeofday
gettimeofday
select
gettimeofday
gettimeofday
write
gettimeofday
gettimeofday
select
<repeats>

This isn't a problem on *most* OSes (and it is getting better), but it can lead to surprisingly bad results on 10GbE or 40GbE networks on some OSes, especially ones that chose an expensive timecounter (e.g., one that does a PIO read on every gettimeofday()). This is the big reason why I still use netperf.

Maybe iperf needs a "go fast" option that dispenses with all timing except to mark the time at the start & end of the test.

bmah888 commented Sep 24, 2014

Every time iperf3 goes to send a packet, it takes a timestamp to make the rate-control decision. If it's OK to send, it goes into the UDP packet-sending code, which... takes another timestamp, a few microseconds later, to put into the packet for the jitter measurement. Sigh. :-)

aaronmbr (Contributor) commented:

I wonder if it'd make sense to push the "bandwidth" logic into the select loop instead of deep down in the iperf_send case. If the select timeouts are set so that it breaks out at the right times, and the send side runs at a given rate, I think it should Just Work. That could save some of the gettimeofday calls, especially if the timer code tracked the number of times select has exited to figure out the time, instead of calling gettimeofday at each exit. I don't know if it's worth it, but moving the bandwidth logic might also make it easier to track what's actually happening.


bmah888 commented Nov 6, 2014

I tried a quick hack on this and wasn't able to reproduce the strace results. I need to play with this more.


nomel commented Mar 12, 2015

Using gettimeofday to measure time intervals is a bug, since it's not monotonic. Nice explanation here.


bmah888 commented Aug 24, 2020

Closing. The original issue is still valid, but it's not likely that we're going to do much if anything about it at this point. It can be re-opened if necessary.

@bmah888 bmah888 closed this as completed Aug 24, 2020