rtt is not the same as ipg (and by ipg I think we mean inter-packet-delay) #142

Open
mcallaghan-sandvine opened this Issue Aug 22, 2018 · 1 comment

mcallaghan-sandvine commented Aug 22, 2018

related to #137 ... (continuation from that story)

as per https://trex-tgn.cisco.com/trex/doc/trex_manual.html#_per_template_section

       ipg : 10000         <3>
       rtt : 10000         <4>

, the manual defines these as:

<3> (ipg) -- If the global section of the YAML file includes cap_ipg : false, this line sets the inter-packet gap in microseconds.
<4> (rtt) -- Should be set to the same value as ipg (microseconds).


This is false.

  1. ipg = inter-packet gap -- this actually refers to the IDLE time on the wire between Ethernet frames (aka interframe spacing, interframe gap) -- https://en.wikipedia.org/wiki/Interpacket_gap

  2. By "ipg", I think we actually mean "inter-packet delay" -- how long to delay packet-to-packet transmission when RTT is not at play (i.e. for non-control packets). This is likely not an industry-accepted term, since it's specific to TRex's implementation.

  3. rtt = round-trip time -- this is different from both ipg and inter-packet delay -- "the length of time it takes for a signal to be sent plus the length of time it takes for an acknowledgment of that signal to be received" -- https://en.wikipedia.org/wiki/Round-trip_delay_time
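To make the distinction concrete, here is a tiny Python sketch of the scheduling rule TRex appears to apply, based on the captures in this issue. This is a hypothetical model for illustration only (the function name and structure are mine, not TRex code): control packets are spaced by rtt, data packets by ipg.

```python
# Hypothetical model of the observed TRex replay scheduling:
# TCP control packets (handshake / teardown / bare ACKs) appear to be
# spaced by rtt, while consecutive data packets are spaced by ipg.
# Illustration of the observed behaviour, NOT TRex source code.

def next_timestamp_us(prev_ts_us, is_control, ipg_us, rtt_us):
    """Return the send time (microseconds) of the next packet."""
    delay = rtt_us if is_control else ipg_us
    return prev_ts_us + delay

# CASE B values: ipg = 10ms, rtt = 100ms
t_control = next_timestamp_us(0, True, 10_000, 100_000)   # -> 100000 (100ms later)
t_data = next_timestamp_us(0, False, 10_000, 100_000)     # -> 10000 (10ms later)
```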


I have tested and confirmed this theory:

====
CASE A) (status quo)

if we send a TCP flow when RTT == IPG == 100ms

ipg : 100000
rtt : 100000

-> here it goes:

16:44:39.662082 (SNIP) length 62: 5.0.0.1.15000 > 4.0.0.1.41668: [|tcp]
16:44:39.761095 (SNIP) length 62: 4.0.0.1.41668 > 5.0.0.1.15000: [|tcp]
16:44:39.863706 (SNIP) length 854: 4.0.0.1.41668 > 5.0.0.1.15000: Flags [P.], ack 1006318459, win 57344, length 792
16:44:39.963630 (SNIP) length 413: 5.0.0.1.15000 > 4.0.0.1.41668: Flags [P.], ack 792, win 57344, length 351
16:44:40.061033 (SNIP) length 923: 5.0.0.1.15000 > 4.0.0.1.41668: Flags [P.], ack 792, win 57344, length 861
16:44:40.161048 (SNIP) length 1506: 5.0.0.1.15000 > 4.0.0.1.41668: Flags [.], ack 792, win 57344, length 1444
16:44:40.261059 (SNIP) length 78: 5.0.0.1.15000 > 4.0.0.1.41668: Flags [.], ack 792, win 57344, length 16
16:44:40.361073 (SNIP) length 1506: 5.0.0.1.15000 > 4.0.0.1.41668: Flags [.], ack 792, win 57344, length 1444
16:44:40.460085 (SNIP) length 62: 4.0.0.1.41668 > 5.0.0.1.15000: [|tcp]
16:44:40.560104 (SNIP) length 62: 5.0.0.1.15000 > 4.0.0.1.41668: [|tcp]
16:44:40.660013 (SNIP) length 62: 5.0.0.1.15000 > 4.0.0.1.41668: [|tcp]
16:44:40.760028 (SNIP) length 62: 4.0.0.1.41668 > 5.0.0.1.15000: [|tcp]

, we can see that the timestamps correctly increment by ~100ms for each packet (though since ipg == rtt here, this case alone can't tell us which setting drives which packets)

====
CASE B) rtt > ipg (a MORE realistic situation)

ipg : 10000
rtt : 100000

results in ->

16:49:13.627160 (SNIP) length 62: 4.0.0.1.41668 > 5.0.0.1.15000: [|tcp]
16:49:13.728070 (SNIP) length 62: 5.0.0.1.15000 > 4.0.0.1.41668: [|tcp]
         ^
16:49:13.827081 (SNIP) length 62: 4.0.0.1.41668 > 5.0.0.1.15000: [|tcp]
         ^
16:49:13.840082 (SNIP) length 854: 4.0.0.1.41668 > 5.0.0.1.15000: Flags [P.], ack 1006318459, win 57344, length 792
16:49:13.939894 (SNIP) length 413: 5.0.0.1.15000 > 4.0.0.1.41668: Flags [P.], ack 792, win 57344, length 351
16:49:13.948096 (SNIP) length 923: 5.0.0.1.15000 > 4.0.0.1.41668: Flags [P.], ack 792, win 57344, length 861
          ^
16:49:13.958097 (SNIP) length 1506: 5.0.0.1.15000 > 4.0.0.1.41668: Flags [.], ack 792, win 57344, length 1444
          ^
16:49:13.968099 (SNIP) length 78: 5.0.0.1.15000 > 4.0.0.1.41668: Flags [.], ack 792, win 57344, length 16
16:49:13.978100 (SNIP) length 1506: 5.0.0.1.15000 > 4.0.0.1.41668: Flags [.], ack 792, win 57344, length 1444
16:49:14.078117 (SNIP) length 62: 4.0.0.1.41668 > 5.0.0.1.15000: [|tcp]
16:49:14.178129 (SNIP) length 62: 5.0.0.1.15000 > 4.0.0.1.41668: [|tcp]
16:49:14.188128 (SNIP) length 62: 5.0.0.1.15000 > 4.0.0.1.41668: [|tcp]
16:49:14.288043 (SNIP) length 62: 4.0.0.1.41668 > 5.0.0.1.15000: [|tcp]

we can see here that for all TCP control packets the RTT is honoured, with ~100ms delay,
while all DATA packets go out in ~10ms increments!

====
CASE C) rtt < ipg (awkward, but possible)
(*this is a scenario where the RTT of the network is LESS (faster) than the server's or client's delay between sequential data packets ... fairly unlikely in the real world, but possible if the application artificially delays packets, or if the OS or network stack is throttling, etc.)

ipg : 100000
rtt : 10000

and:

16:51:52.940052 (SNIP) length 62: 4.0.0.1.41668 > 5.0.0.1.15000: [|tcp]
          ^
16:51:52.949049 (SNIP) length 62: 5.0.0.1.15000 > 4.0.0.1.41668: [|tcp]
          ^
16:51:52.959047 (SNIP) length 62: 4.0.0.1.41668 > 5.0.0.1.15000: [|tcp]
          ^
16:51:53.064264 (SNIP) length 854: 4.0.0.1.41668 > 5.0.0.1.15000: Flags [P.], ack 1, win 57344, length 792
16:51:53.072161 (SNIP) length 413: 5.0.0.1.15000 > 4.0.0.1.41668: Flags [P.], ack 792, win 57344, length 351
16:51:53.170076 (SNIP) length 923: 5.0.0.1.15000 > 4.0.0.1.41668: Flags [P.], ack 792, win 57344, length 861
         ^
16:51:53.269991 (SNIP) length 1506: 5.0.0.1.15000 > 4.0.0.1.41668: Flags [.], ack 792, win 57344, length 1444
         ^
16:51:53.370003 (SNIP) length 78: 5.0.0.1.15000 > 4.0.0.1.41668: Flags [.], ack 792, win 57344, length 16
         ^
16:51:53.469016 (SNIP) length 1506: 5.0.0.1.15000 > 4.0.0.1.41668: Flags [.], ack 792, win 57344, length 1444
16:51:53.479017 (SNIP) length 62: 4.0.0.1.41668 > 5.0.0.1.15000: [|tcp]
16:51:53.489017 (SNIP) length 62: 5.0.0.1.15000 > 4.0.0.1.41668: [|tcp]
16:51:53.590032 (SNIP) length 62: 5.0.0.1.15000 > 4.0.0.1.41668: [|tcp]
16:51:53.600034 (SNIP) length 62: 4.0.0.1.41668 > 5.0.0.1.15000: [|tcp]

, as we can see, it's the reverse of CASE (B): TCP control packets now adhere to the 10ms (rtt) delay, while the TCP data stream pkts are spaced ~100ms (ipg) apart
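For reference, the gaps in the captures above can be checked with a few lines of Python that diff consecutive tcpdump timestamps (a sketch I used conceptually to "confirm the theory"; it assumes the HH:MM:SS.usec timestamp format shown in the captures and the helper name is mine):

```python
from datetime import datetime

def deltas_ms(timestamps):
    """Gaps in milliseconds between consecutive HH:MM:SS.ffffff timestamps."""
    times = [datetime.strptime(ts, "%H:%M:%S.%f") for ts in timestamps]
    return [(b - a).total_seconds() * 1000.0 for a, b in zip(times, times[1:])]

# First three (control) packets of CASE C: spaced by rtt = 10ms,
# even though ipg is 100ms.
stamps = ["16:51:52.940052", "16:51:52.949049", "16:51:52.959047"]
print([round(d, 1) for d in deltas_ms(stamps)])  # -> [9.0, 10.0]
```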

TODO: NEEDS UDP TESTING


SO; I'd like to propose:
A> remove all notions that "rtt == ipg"
B> properly explain RTT and what TRex does with it
C> change "ipg" to "inter-packet delay" and properly define it (the "ipg" key must remain for backwards compatibility, I guess...)

mcallaghan-sandvine (Contributor) commented Sep 4, 2018

This article (written by Cisco!) gives further context and terminology definitions that perhaps we can lean on and reference

https://www.cisco.com/c/en/us/about/security-center/network-performance-metrics.html

