Benchmarking #2824

Closed · wants to merge 4 commits

Conversation


Added simple charting support to display benchmark results.

davidberneda added some commits Dec 20, 2012

Update test/benchmark/benchmarking_float32array_accesspatterns.html
Added chart script and a <canvas> element to display a chart with benchmark results.
Update test/benchmark/core/Float32ArrayAccessPatterns.js
Added chart output, horizontal bars, one for each benchmark suite.
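
For context, a minimal sketch of the kind of chart output these commits describe, assuming plain <canvas> drawing with one horizontal bar per suite result (the element id, data shape and scaling here are illustrative, not taken from the actual commits):

```js
// Illustrative only: draw one horizontal bar per benchmark result on a <canvas>.
// Assumes an element like <canvas id="chart" width="600" height="200"></canvas>.
function drawBenchmarkChart(results) {
  // results: array of { name: string, hz: number } collected from benchmark.js
  var canvas = document.getElementById('chart');
  var ctx = canvas.getContext('2d');
  var maxHz = Math.max.apply(null, results.map(function (r) { return r.hz; }));
  var barHeight = 20, gap = 8, labelWidth = 220;

  ctx.clearRect(0, 0, canvas.width, canvas.height);
  ctx.font = '12px sans-serif';

  results.forEach(function (r, i) {
    var y = gap + i * (barHeight + gap);
    var barWidth = (canvas.width - labelWidth - 10) * (r.hz / maxHz);
    ctx.fillStyle = '#4a90d9';
    ctx.fillRect(labelWidth, y, barWidth, barHeight);
    ctx.fillStyle = '#000000';
    ctx.fillText(r.name + ': ' + Math.round(r.hz) + ' ops/sec', 4, y + barHeight - 6);
  });
}
```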
Contributor

bhouston commented Dec 20, 2012

Thanks David. Very nice chart. :-)

Unfortunately the chart results don't match the benchmark results in the JavaScript console. The fastest in the JavaScript console is Float32CopyArray, but the chart seems to suggest it is Float32ArrayFloat32ArrayCopyTest. I think it is just that the wrong parameter is being read into the chart.

Also, the JavaScript console reports results in ops/sec, generally in the thousands, but your chart is displaying values in the range of 0-10 or so, which again makes me think it is just charting the wrong variable.

davidberneda commented Dec 20, 2012

(I just did a quick and simple chart to evaluate, and if everybody likes it and agrees, it can be expanded a lot.)

I think you're totally right.
I wrongly chose the "elapsed" measure (in seconds) instead of ops/sec. They don't seem to correlate.
I couldn't find the benchmark.js docs to choose the right quantity to display.
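
For reference, a minimal sketch of reading ops/sec (hz) from benchmark.js rather than times.elapsed, assuming the usual Benchmark.Suite setup (the test names and bodies here are illustrative, not the PR's):

```js
// Illustrative only: collect ops/sec (hz) per test instead of times.elapsed (seconds).
var results = [];

new Benchmark.Suite('Float32Array access patterns')
  .add('exampleTestA', function () { Math.sqrt(2); })
  .add('exampleTestB', function () { Math.sqrt(3); })
  .on('cycle', function (event) {
    var bench = event.target;
    // bench.hz is operations per second (what the console summary reports);
    // bench.times.elapsed is wall-clock seconds and is not comparable across tests.
    results.push({ name: bench.name, hz: bench.hz });
    console.log(String(bench));
  })
  .on('complete', function () {
    // results[] now holds the values the chart should plot.
  })
  .run();
```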

Contributor

bhouston commented Dec 20, 2012

The docs are here btw: http://benchmarkjs.com/docs

Contributor

gero3 commented Dec 21, 2012

I think there is already a serious problem with the benchmark you posted because it doesn't start with the same numbers for each benchmark.

Contributor

bhouston commented Dec 21, 2012

@gero3 I cleaned up the benchmarks so that the inputs to each test are always consistent; there is no use of random() at all now. In this commit: bhouston/three.js@e964e04
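
As an illustration of that kind of cleanup (not the actual commit), inputs can be generated deterministically so every test starts from identical numbers, for example:

```js
// Illustrative only: fill input arrays from a fixed formula instead of Math.random(),
// so every benchmark run and every test variant starts from identical values.
function makeDeterministicInput(length) {
  var data = new Float32Array(length);
  for (var i = 0; i < length; i++) {
    // Any fixed, value-varying formula works; the point is reproducibility.
    data[i] = Math.sin(i * 0.1) * 1000;
  }
  return data;
}

var input = makeDeterministicInput(100000);
```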

Contributor

alteredq commented Dec 21, 2012

Just a sidenote: using random numbers when testing performance is not necessarily a bad thing.

If you do enough repetitions / samples, differences between runs should average out.

One possible advantage of using random numbers is that you should get a more "real" performance profile. If you use non-random inputs, there is a chance you'll measure the performance profile for just one particular use case, and you may also accidentally hit some sweet spot of the optimizer (or conversely some bad spot).

An archetypal example of this is sorting functions, which should be tested with several different types of inputs (ordered, inverse ordered, random).

Plus, with modern JS engines it's even more complicated because of runtime optimizations that depend on data values (e.g. using integers on every run will give different performance than using floats every time, and yet another performance profile when mixing them intermittently).
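
To make the sorting example concrete, a sketch (not from the thread) of benchmarking the same routine against ordered, reverse-ordered and randomized inputs, with the random case generated once up front so every repetition still sees identical data:

```js
// Illustrative only: benchmark the same routine against several input distributions.
function makeInputs(n) {
  var ordered = [], reversed = [];
  for (var i = 0; i < n; i++) {
    ordered.push(i);
    reversed.push(n - 1 - i);
  }
  // Random order, but shuffled once outside the timed code so each
  // repetition sorts exactly the same data.
  var shuffled = ordered.slice();
  for (var j = shuffled.length - 1; j > 0; j--) {
    var k = Math.floor(Math.random() * (j + 1));
    var tmp = shuffled[j]; shuffled[j] = shuffled[k]; shuffled[k] = tmp;
  }
  return { ordered: ordered, reversed: reversed, shuffled: shuffled };
}

var inputs = makeInputs(10000);
var numeric = function (a, b) { return a - b; };

new Benchmark.Suite('sort input distributions')
  .add('ordered', function () { inputs.ordered.slice().sort(numeric); })
  .add('reversed', function () { inputs.reversed.slice().sort(numeric); })
  .add('shuffled', function () { inputs.shuffled.slice().sort(numeric); })
  .on('cycle', function (event) { console.log(String(event.target)); })
  .run();
```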

Contributor

bhouston commented Jan 3, 2013

@davidberneda This PR is not being merged in because the charts generated by the code in this PR are incorrect.

Contributor

bhouston commented Jan 4, 2013

@davidberneda, I've merged your changes into the latest version of the benchmarks. Could you merge in my benchmark branch below and then push it to yours, so that we can still merge in your charts via this PR instead of me taking it over with a new one?

https://github.com/bhouston/three.js/tree/benchmarking

Contributor

bhouston commented Feb 8, 2013

I think the charts do not add enough value for the dependencies that they introduce. Something simpler is probably preferred. Recommend closing this PR without merging.

mrdoob closed this Feb 9, 2013
