
Correct average latency calculation in best server determination. #199

Closed


nelsonjchen

Reverts part of commit c1b9a0d. I guess some exploratory or debugging edits
made it into the commit.

Three latency samples are run, summed, and divided by six; the divisor should be three.

As a result, the reported latencies were halved. The "best" server was still
chosen correctly, since every server's result was scaled equally, but the ping
time displayed was half of the real average.

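A minimal sketch of the arithmetic described above (the sample values and variable names are illustrative, not taken from the speedtest-cli source):

```python
# Three hypothetical latency samples, in milliseconds.
samples = [42.0, 44.0, 46.0]

buggy_avg = sum(samples) / 6  # 22.0 ms: half the true average
fixed_avg = sum(samples) / 3  # 44.0 ms: the mean of three samples

# Dividing every server's total by the same constant rescales all
# results equally, so the ordering (and the "best" server) is unchanged.
assert buggy_avg == fixed_avg / 2
```
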
@nelsonjchen

If the official Speedtest widget or app also divides by six, then I guess it doesn't consider "ping" to be the RTT, and is dividing further to get the one-way trip time.
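
For concreteness, a minimal sketch of that interpretation (an assumption about the official client, not confirmed behavior): if each sample is a round-trip measurement, dividing the three-sample sum by six is the same as halving the average RTT to get a one-way estimate.

```python
# Hypothetical round-trip samples, in milliseconds.
rtt_samples = [42.0, 44.0, 46.0]

avg_rtt = sum(rtt_samples) / 3   # 44.0 ms round trip
avg_one_way = avg_rtt / 2        # 22.0 ms one-way estimate

# Summing and dividing by six yields the same one-way figure.
assert avg_one_way == sum(rtt_samples) / 6
```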

@sivel commented Nov 15, 2015

This was purposeful.

sivel closed this Nov 15, 2015
@nelsonjchen

Thanks for clarifying. I wasn't sure whether that was supposed to be an RTT or not.

nelsonjchen deleted the latency_test_incorrect_avg branch on November 15, 2015, 22:48
nelsonjchen added a commit to nelsonjchen/speedtest-rs that referenced this pull request Nov 15, 2015
Repository owner locked and limited conversation to collaborators Nov 30, 2018