iperf3 3.5 TCP option -n not working #768

Open
paaguti opened this Issue Jul 6, 2018 · 7 comments

paaguti commented Jul 6, 2018

Context

  • Version of iperf3:
    iperf3 3.5

  • Hardware:
    amd64 laptop with Intel i5 CPU

  • Operating system (and distribution, if any):
    Running on QEMU-based VMs
    Host OS: Ubuntu 16.04.4 LTS
    Guest OS: Alpine Linux 3.7+testing

  • Other relevant information (for example, non-default compilers,
    libraries, cross-compiling, etc.):


Bug Report

  • Expected Behavior
    iperf3 -c <server> -n 1.8M should produce a data transfer of 1.8M bytes

  • Actual Behavior
    Dumping from the PCAP file and filtering the FIN/ACK packet:

       time       sport      dport          seq              ack      flags
     0.19117      56562       5201               38          2904690  FA

Expecting an ack number around 1887436 (1.8 × 2^20 bytes), not 2904690

  • Steps to Reproduce
    VM1 as server: iperf3 -s, capturing packets with tshark
    VM2 as client: iperf3 -c <IP_VM1> -n 1.8M
    Analyse the TCP SEQ and ACK numbers

  • Possible Solution
    N/A yet

paaguti commented Jul 7, 2018

Confirming for 3.6

Environment

QEMU based VMs with Ubuntu 16.04.4 LTS and development tools.
iperf3.6 compiled from source

Test procedure

The server is run with iperf3 -s -1 and the client with iperf3 -c <ip_server> -n <size> --reverse -J --logfile receiver.json
Examining the capture files, much more traffic is sent than expected (~3.6M). Digging into the resulting statistics file, the receiver stats show it received about 100K more bytes than the specified 1.8M.

This can be critical in some applications, where we use iperf3 to simulate more complex applications...

receiver.json.gz


thomas-fossati commented Jul 15, 2018

I have observed the same anomalous behaviour as @paaguti running iperf 3.5 (cJSON 1.5.2) on Darwin 17.6.0.

I have attached a plot showing the distribution of the observed transfer sizes (in green) around the expected value (in reddish), in my case 1 MByte. Note in particular the odd spike around 1.25M.

[plot: iperf-bytes-anomaly]


bmah888 (Member) commented Jul 27, 2018

1.8MB is a pretty small transfer compared to what iperf3 was designed for. At that size you'll run into effects where the default write length (the -l flag) becomes significant, because iperf3 only does socket-layer writes of that particular length; it's 128K by default for TCP transfers.

Still, I see some non-intuitive behavior even on my local macOS laptop sending to itself, and it feels like something is not quite right here. I'm particularly puzzled by the fact that the number of bytes sent differs between runs; I'd expect that if we ran for a certain length of time (which is the default mode of operation), but not with -n specified.
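
The rounding effect of the 128K write length can be checked with quick arithmetic (a sketch; it assumes -n 1.8M parses as 1.8 × 2^20 bytes and that the sender stops only at whole-block boundaries):

```python
# Sketch (not iperf3 code): if the sender only checks the -n byte
# limit between whole -l sized writes, the total sent rounds up to
# a multiple of the block size.
import math

BLOCK = 128 * 1024               # default -l for TCP: 128K
target = int(1.8 * 1024 * 1024)  # -n 1.8M -> 1887436 bytes (1.8 * 2^20, truncated)

blocks = math.ceil(target / BLOCK)
total = blocks * BLOCK
print(blocks, total)             # prints: 15 1966080
```

The ~79K difference between 1966080 and 1887436 is consistent in magnitude with the roughly-100K overshoot reported above.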


bmah888 self-assigned this Jul 27, 2018

bmah888 added the bug label Jul 27, 2018

bmah888 (Member) commented Jul 27, 2018

This line (around line 2205 in src/iperf_api.c) appears to be the culprit:

    testp->multisend = 10;	/* arbitrary */

It basically allows the iperf3 sender to blow through the byte limit by up to 10 writes (each of whatever length the -l parameter specifies). I'm not sure of the best way to fix it; changing 10 to 1 seems to solve the nondeterminism problem but might have other effects we don't want.
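
A deliberately simplified model of that mechanism (a sketch, not the actual iperf3 send loop; the structure is an assumption): if the byte limit is only re-checked between bursts of multisend writes, the overshoot is bounded by multisend × the -l block size.

```python
# Simplified model (not the real iperf3 code): with multisend = N,
# the sender may issue up to N writes of -l bytes before re-checking
# the -n byte limit, so it can overshoot by up to N * block bytes.
def bytes_sent(target, block=128 * 1024, multisend=10):
    sent = 0
    while sent < target:             # limit checked only between bursts
        for _ in range(multisend):   # burst of `multisend` full writes
            sent += block
    return sent

print(bytes_sent(1887436))               # multisend=10 -> 2621440 (20 blocks)
print(bytes_sent(1887436, multisend=1))  # multisend=1  -> 1966080 (15 blocks)
```

In the real code a burst can presumably also be cut short by timers and socket readiness, which would explain why the overshoot varies from run to run; this sketch only shows the deterministic upper bound.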


paaguti commented Jul 28, 2018

paaguti commented Jul 31, 2018

On a Ubuntu 18.04 system:

Linux pavilion 4.15.0-29-generic #31-Ubuntu SMP Tue Jul 17 15:39:52 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux

I modified the command line argument handling:

		case 'n':
			test->settings->bytes = unit_atoi(optarg);
/*paag: if bytes are set, make it deterministic*/
			test->multisend = 1;
			client_flag = 1;
			break;

Now things seem to behave more intuitively. Looking at the stats as
{ bytes_transferred: count }:

paag@pavilion:~/Devel/iperf$ ./kk.py client.json
{1966080: 100}
paag@pavilion:~/Devel/iperf$ ./kk.py server.json 
{1966080: 98, 1835008: 1, 1769419: 1}

These were obtained by extracting:

        bytes = elem["intervals"][0]["sum"]["bytes"]

from the JSON statistics at the server and the client for a run with

iperf3 --json -c 127.0.0.1 -n 1.8M

Repeated 100 times in two separate runs (one collecting server stats and one collecting client stats). So the transfer size is still roughly 100K over the 1.8M, but at least the behaviour is now consistent.
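
The tally script itself isn't shown; a minimal stand-in (the JSON-array layout of the input file is an assumption, only the extracted field comes from the comment) might look like:

```python
# Minimal stand-in for the kk.py tally script mentioned above.
# Assumption: the input file holds a JSON array with one
# iperf3 --json result object per run.
import json
from collections import Counter

def tally(path):
    """Count how often each total byte count appears across the runs."""
    with open(path) as f:
        runs = json.load(f)
    # the same field the comment extracts for each run
    return dict(Counter(elem["intervals"][0]["sum"]["bytes"] for elem in runs))
```

Called as tally("client.json"), it returns a {bytes: count} dict like the {1966080: 100} shown above.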


bmah888 (Member) commented Aug 3, 2018

@paaguti: I'm not disputing that there's a bug here, just that I wasn't sure what was the best way to solve it. The patch you provided, or something similar, might be the best way forward. It might also work for -k, which I imagine might have a similar problem.

You're right that TCP does not guarantee the size of packets on the wire. It's a byte-stream-oriented protocol, so it can consolidate small sends into larger packets, up to the MSS. Similarly, it can take huge sends from the socket layer and break them down into MSS-sized segments that go into MTU-sized IP packets.
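
As a toy illustration of that segmentation (illustration only, not iperf3 or kernel code; 1448 bytes is an assumed typical MSS for Ethernet with TCP timestamps enabled):

```python
# Illustration only: TCP is a byte stream, so a single 128K send()
# from the application is carried as many MSS-sized segments on the
# wire (and, conversely, several small sends can be coalesced into
# one segment).
def segment(nbytes, mss=1448):
    full, rest = divmod(nbytes, mss)
    return [mss] * full + ([rest] if rest else [])

segs = segment(128 * 1024)       # one default-sized iperf3 TCP write
print(len(segs), segs[-1])       # prints: 91 752
```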

