How are latencies in Up- and Downlink measured? #5
First of all, a big thanks for this great software, which works really nicely!

In the graphs, the latency for Up and Down is shown. I'm wondering how this is measured, considering that client and server are usually different machines with unsynchronized clocks?

Comments
The clocks are synchronized at the beginning of the test. Adjusting for clock drift with a second synchronization at the end of the test is on my todo list, so you may currently observe some clock drift with longer test durations.
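For concreteness, here is a minimal sketch of what such a drift correction could look like once a second synchronization at the end of the test exists (hypothetical code, since the comment above says this is still a todo): measure the clock offset at the start and at the end, then linearly interpolate the offset applied to each sample in between.

```rust
// Hypothetical linear drift correction (not implemented in Crusader per the
// comment above): given offsets measured at the start and end of the test,
// interpolate the offset to apply at any sample time t.
fn interpolated_offset(
    t: f64,            // client time of the sample
    t_start: f64,      // client time of the first sync
    offset_start: f64, // offset measured at t_start
    t_end: f64,        // client time of the second sync
    offset_end: f64,   // offset measured at t_end
) -> f64 {
    let frac = (t - t_start) / (t_end - t_start);
    offset_start + frac * (offset_end - offset_start)
}

fn main() {
    // Example: the offset drifted from +1.000 s to +1.002 s over a 60 s test;
    // a sample taken 30 s in gets the midpoint correction.
    let off = interpolated_offset(30.0, 0.0, 1.000, 60.0, 1.002);
    println!("offset at t = 30 s: {off:.4} s"); // 1.0010
}
```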
Okay, so the assumption is a symmetric path in terms of delay? Could you point me to the code where this is done?
Yes, it assumes the delays are symmetric when the link is idle. The code is here. This function returns an offset that converts server time into client time.
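To make the symmetric-delay assumption concrete, here is a minimal sketch of the usual offset estimate (not Crusader's actual code; the function name and timestamps are illustrative): the client records send and receive times for a probe, the server replies with its own clock reading, and under symmetric one-way delays the server's timestamp corresponds to the midpoint of the round trip on the client's clock.

```rust
// Sketch of a symmetric-delay clock-offset estimate (hypothetical helper).
// t0: client time when the probe was sent
// t1: server time embedded in the reply
// t2: client time when the reply arrived
// With symmetric one-way delays, t1 was taken at the midpoint (t0 + t2) / 2
// on the client's clock, so the offset converting server time to client time is:
fn server_to_client_offset(t0: f64, t1: f64, t2: f64) -> f64 {
    (t0 + t2) / 2.0 - t1
}

fn main() {
    // Example: client sends at 100.000 s, server stamps 250.010 s,
    // client receives at 100.040 s (a 40 ms round trip).
    let offset = server_to_client_offset(100.000, 250.010, 100.040);
    // client_time = server_time + offset
    println!("offset = {offset:.3} s"); // -149.990
}
```

If the path is not actually symmetric (e.g. different queuing in each direction), the asymmetry shows up directly as an error in this offset, which is why the measurement is done while the link is idle.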
On Linux it's only a couple of calls to sample TCP_INFO. This one runs out of band, so it's not useful here: https://www.measurementlab.net/tests/tcp-info/. That method could be used to hook some other tool that uses a common Rust geturl or equivalent lib: https://linuxgazette.net/136/pfeiffer.html. On Windows, these calls can be used instead: https://learn.microsoft.com/en-us/windows/win32/winsock/sio-tcp-info. Just `ULONG RttUs;` sampled every 10-50 ms would be a way of measuring in-band. Right now Crusader just measures the effectiveness of FQ (fair queuing), not the actual behavior of TCP flows.
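As a rough sketch of the in-band sampling idea on Linux (assuming the `libc` crate's bindings; the target host, 25 ms interval, and loop count are placeholders, not a proposal for Crusader's actual design):

```rust
// Sketch: sampling the kernel's smoothed RTT in-band from a connected TCP
// socket on Linux via getsockopt(TCP_INFO). Field names follow <linux/tcp.h>.
use std::net::TcpStream;
use std::os::unix::io::AsRawFd;
use std::{mem, thread, time::Duration};

fn sample_rtt_us(stream: &TcpStream) -> std::io::Result<u32> {
    let mut info: libc::tcp_info = unsafe { mem::zeroed() };
    let mut len = mem::size_of::<libc::tcp_info>() as libc::socklen_t;
    let ret = unsafe {
        libc::getsockopt(
            stream.as_raw_fd(),
            libc::IPPROTO_TCP,
            libc::TCP_INFO,
            &mut info as *mut _ as *mut libc::c_void,
            &mut len,
        )
    };
    if ret != 0 {
        return Err(std::io::Error::last_os_error());
    }
    Ok(info.tcpi_rtt) // smoothed RTT in microseconds
}

fn main() -> std::io::Result<()> {
    // Placeholder host; in practice this would be the flow under test.
    let stream = TcpStream::connect("example.com:80")?;
    // Poll every 25 ms, in the 10-50 ms range suggested above.
    for _ in 0..10 {
        println!("srtt = {} us", sample_rtt_us(&stream)?);
        thread::sleep(Duration::from_millis(25));
    }
    Ok(())
}
```

On Windows, the same polling loop would presumably read `RttUs` from the SIO_TCP_INFO ioctl linked above instead of calling getsockopt.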