
Unreliable test results #1

Open

mathiasbynens opened this issue Dec 31, 2010 · 3 comments

@mathiasbynens

If you mean to run this benchmark across different browsers, you should know there might be some problems with the time measurement technique you’re using. Read http://calendar.perfplanet.com/2010/bulletproof-javascript-benchmarks/ for a detailed explanation.

Why not use Benchmark.js for your test case? Or just take the easy road and use http://jsperf.com/ :)
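For reference, a minimal Benchmark.js suite looks roughly like this (a sketch of the documented Suite API; the two test bodies are just placeholders):

```js
// Minimal Benchmark.js suite sketch; assumes benchmark.js is loaded.
var suite = new Benchmark.Suite;

suite
  .add('Array#push', function() {
    var a = [];
    a.push(1);
  })
  .add('index assignment', function() {
    var a = [];
    a[0] = 1;
  })
  // Log each test's result (ops/sec ± margin of error) as it finishes.
  .on('cycle', function(event) {
    console.log(String(event.target));
  })
  .on('complete', function() {
    console.log('Fastest is ' + this.filter('fastest').map('name'));
  })
  // async keeps the browser responsive between samples.
  .run({ 'async': true });
```

Benchmark.js calibrates the iteration count per sample itself and reports a margin of error, which is how it works around the timer-resolution problems described in the article above.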

@shazow (Owner) commented Dec 31, 2010

I haven't tried Benchmark.js, thanks for the tip!

Re jsperf: http://news.ycombinator.com/item?id=2054456
Re significant results: http://news.ycombinator.com/item?id=2054802

tl;dr: You're right; anecdotally, I found the trend to be consistent across browsers (though I tested primarily in Chrome) and across multiple runs.

Would you be up for improving the benchmark to run multiple iterations and average the results?
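Roughly what I have in mind, as a plain-JavaScript sketch (runAveraged and the body passed to it are hypothetical, not code from this repo):

```js
// Run a benchmark body n times and report the mean elapsed time in ms.
// Note: Date.now() has coarse resolution (often 10-15ms), which is the very
// problem the linked article describes -- averaging smooths run-to-run
// variance but doesn't fix timer granularity.
function runAveraged(body, n) {
  var total = 0;
  for (var i = 0; i < n; i++) {
    var start = Date.now();
    body();
    total += Date.now() - start;
  }
  return total / n;
}

// Hypothetical usage:
// console.log(runAveraged(function() { /* create/write/read here */ }, 50));
```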

@shazow (Owner) commented Jan 3, 2011

Perhaps I'm missing something in the documentation, but how would you suggest running this test using Benchmark.js?

I want to benchmark, separately, the time it takes to create a data structure, write to it a bunch of times, and read from it a bunch of times, and also give each data structure a total score based on those three benchmarks.

The best thing I can think of, based on the Benchmark.js API, is to create multiple permutations of the test combinations (sketched below):

  • Create
  • Create + write
  • Create + write + read
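Roughly, as a sketch against the Benchmark.js Suite API (DataStructure, write, and read are hypothetical stand-ins for whatever structure is under test):

```js
var suite = new Benchmark.Suite;

suite
  .add('create', function() {
    var ds = new DataStructure();
  })
  .add('create + write', function() {
    var ds = new DataStructure();
    for (var i = 0; i < 1000; i++) ds.write(i, i);
  })
  .add('create + write + read', function() {
    var ds = new DataStructure();
    for (var i = 0; i < 1000; i++) ds.write(i, i);
    for (var j = 0; j < 1000; j++) ds.read(j);
  })
  .on('cycle', function(event) {
    console.log(String(event.target));
  })
  .run({ 'async': true });
```

Per-phase cost would then have to be estimated by subtraction (write ≈ (create + write) − create, and so on), which stacks the error margins of two measurements.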

These operations can't be tested separately on the same instance, because on lazy data structures the first write might perform significantly differently from the n writes after it (which is part of what I want to benchmark, since in my real-life scenario the data structure only gets written to once).

It would be nice if there were a way to form test groups that get executed together in the same preparation context but separately from other groups.
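Something like this hand-rolled grouping is what I mean: each iteration gets a fresh instance, and each phase is timed on that same instance (plain-JavaScript sketch, same hypothetical DataStructure as above):

```js
// Time create/write/read as a group on one fresh instance per iteration,
// then average each phase over n iterations. Date.now()'s coarse resolution
// means each phase needs enough work to be measurable.
function benchmarkGrouped(n) {
  var totals = { create: 0, write: 0, read: 0 };
  for (var i = 0; i < n; i++) {
    var t0 = Date.now();
    var ds = new DataStructure();                    // phase 1: create
    var t1 = Date.now();
    for (var w = 0; w < 1000; w++) ds.write(w, w);   // phase 2: write (first write included)
    var t2 = Date.now();
    for (var r = 0; r < 1000; r++) ds.read(r);       // phase 3: read
    var t3 = Date.now();
    totals.create += t1 - t0;
    totals.write  += t2 - t1;
    totals.read   += t3 - t2;
  }
  return {
    create: totals.create / n,
    write:  totals.write / n,
    read:   totals.read / n,
    total:  (totals.create + totals.write + totals.read) / n
  };
}
```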

@shazow (Owner) commented Jan 3, 2011

Oops, that wasn't supposed to be closed; my bad.
