
Contradiction between test results and Browserscope #127

Closed
LLyaudet opened this issue May 25, 2013 · 8 comments

Comments

@LLyaudet

commented May 25, 2013

Hi,

Thanks for creating jsperf :)

It is somewhat surprising to create a revision, run the tests once, and see differences between the values in the last column and the values in Browserscope.

While the answer is here:
http://jsperf.com/faq#browserscope
it should definitely be mentioned (at least a link to http://jsperf.com/faq#browserscope with a warning) between the test runner section and the Browserscope section, because:

  • it looks like a bug, or gives the feeling that performance is not correctly evaluated, and most people will not check the FAQ to be convinced otherwise;
  • the rule of submitting the lower limit of the confidence interval to Browserscope is debatable (I attach a screen capture where it changes the order of the results).

It could be interesting to have an advanced FAQ that gives possible explanations for why some tests have a larger margin of error than others.

Best regards,
Laurent Lyaudet

Labels: bug, jsperf

@jdalton closed this Jul 15, 2013

@LLyaudet (Author)

commented Jul 21, 2013

Why did you close this issue without giving any reason?
If you have a valid reason to close it, you should be able to write it down.

@jdalton (Collaborator)

commented Jul 21, 2013

My bad, it's covered in the FAQ: http://jsperf.com/faq#browserscope

@LLyaudet (Author)

commented Jul 22, 2013

You didn't read my first report, did you?
I already mentioned the FAQ, but asked for a better warning to the user on the test page itself.
If users are already looking at the FAQ, they will surely find what they are looking for (no need for a green highlight).
What about my other suggestion of an advanced FAQ that gives an idea of the reasons behind that choice of the lower limit of the confidence interval?

@jdalton (Collaborator)

commented Jul 22, 2013

You didn't read my first report, did you?

Ah sorry, I'm trying to close old stale issues we have no interest in addressing.

We chose the lower limit because it's better to err on the side of the lowest possible value, since the true value can be anywhere in the interval. This way we don't artificially inflate Browserscope results with outliers.
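The choice described above can be sketched as follows. This is a minimal illustration, not jsPerf's actual code; it assumes Benchmark.js-style stats, where `hz` is the mean ops/sec and `rme` is the relative margin of error as a percentage of the mean.

```javascript
// Sketch (assumption, not jsPerf's source): report the lower bound of
// the confidence interval instead of the mean ops/sec.
// `hz`  - mean operations per second
// `rme` - relative margin of error, in percent of the mean
function lowerBound(hz, rme) {
  return hz - hz * (rme / 100);
}

// e.g. 1000 ops/sec with a 5% margin of error => 950 ops/sec submitted
console.log(lowerBound(1000, 5)); // 950
```

With this rule, a test whose mean is inflated by a few fast outlier runs still reports a conservative number, at the cost of sometimes reordering results whose intervals overlap.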

@LLyaudet (Author)

commented Jul 22, 2013

I don't think 2 months is old, but at least you give a frank reason.
Since outliers are the only reason you give, can you explain it further AND describe how you compute the margin of error?

@jdalton (Collaborator)

commented Jul 22, 2013

Browserscope has no concept of margin of error; it only sees a reported number, so we handle that for it. Margin of error and other stats are calculated in this snippet of code.
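For readers following along, here is a simplified sketch of how a margin of error is typically computed from a timing sample, in the style of Benchmark.js's stats (this is an illustration under those assumptions, not the library's actual source; the `tTable` values are standard two-sided 95% critical values).

```javascript
// Two-sided 95% critical values of Student's t-distribution, indexed
// by degrees of freedom; `infinity` is the normal-approximation value.
var tTable = { 1: 12.706, 2: 4.303, 3: 3.182, 4: 2.776, 5: 2.571,
               infinity: 1.96 };

// Compute mean, margin of error (moe), and relative margin of error
// (rme, in percent) from an array of per-run timings.
function stats(sample) {
  var n = sample.length;
  var mean = sample.reduce(function (a, b) { return a + b; }, 0) / n;
  var variance = sample.reduce(function (a, x) {
    return a + Math.pow(x - mean, 2);
  }, 0) / (n - 1);
  var sem = Math.sqrt(variance) / Math.sqrt(n); // standard error of the mean
  var df = n - 1;                               // degrees of freedom
  var critical = tTable[Math.round(df) || 1] || tTable.infinity;
  var moe = sem * critical;                     // margin of error
  var rme = (moe / mean) * 100 || 0;            // relative margin of error (%)
  return { mean: mean, moe: moe, rme: rme };
}

console.log(stats([1, 2, 3]));
```

Small samples get a much larger critical value (12.706 for 1 degree of freedom vs. 1.96 in the limit), which is one reason short or noisy test runs show a bigger margin of error.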

@LLyaudet (Author)

commented Jul 23, 2013

OK, you're not answering the first part of my question. The snippet of code looks like bullshit.
critical = tTable[Math.round(df) || 1] || tTable.infinity;
Is that a logical ||?

@jdalton (Collaborator)

commented Jul 23, 2013

Is that a logical ||?

Yap ;)
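The line being questioned uses JavaScript's logical OR twice as a fallback idiom, which can be demonstrated in isolation (the `tTable` here is a trimmed hypothetical stand-in, not the full table):

```javascript
// Hypothetical trimmed t-table for demonstration.
var tTable = { 1: 12.706, 2: 4.303, infinity: 1.96 };

function critical(df) {
  // `Math.round(df) || 1` guards against df rounding to 0, which is
  // falsy and has no table entry; `|| tTable.infinity` falls back to
  // the normal-approximation value when df exceeds the table.
  return tTable[Math.round(df) || 1] || tTable.infinity;
}

console.log(critical(0.4)); // rounds to 0, falls back to df = 1 -> 12.706
console.log(critical(2));   // 4.303
console.log(critical(50));  // not in table -> 1.96
```

Since every entry in a t-table is a positive number, `||` is a safe way to express "use the table entry if present, otherwise the fallback" here; it would be buggy only if a legitimate value could be falsy.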
