Benchmark unfairly excludes compilation time. #37
As I've looked at Chrome timelines and thought about it, I have some concerns about the benchmark. I think it's somewhat biased in favor of optimize-js because optimize-js moves some CPU time from execution to compilation, but the benchmark only measures execution.
Basically, unoptimized code gets a quick parse during the compilation phase, and then a slow/complete parse during the execution phase (along with the actual execution of the code, obviously). After optimize-js has been run, the compilation phase does the slow/complete parse, and the execution phase just runs the code. But since the benchmark measures the time between executing the first line and the last line of the script, it captures only the execution phase, so the added time in the compilation phase gets lost. I confirmed this by looking at Chrome timelines: after optimize-js runs, compilation time goes up considerably, but the benchmark reports only the execute time.
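To make the quick-parse vs. full-parse distinction concrete, here is a minimal sketch of the heuristic optimize-js relies on (the exact transform details are the tool's, not reproduced from this thread):

```javascript
// Lazy path: many engines only pre-parse this function body quickly at
// compile time, then fully re-parse it when the IIFE actually executes.
var result1 = function () { return 1; }();

// Eager path: wrapping the function in parentheses signals "about to be
// called", so the engine does the one full parse up front, during the
// compilation phase -- which is exactly the cost the benchmark was missing.
var result2 = (function () { return 2; })();
```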
I think the fairest benchmark is compilation + first execution, as this is what most pages care about for first load. What I don't know is how to measure that. Here are some ideas, all problematic:
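As a rough illustration of what "compilation + first execution" means as a single interval, here is a hedged sketch using the User Timing API; `timeCompileAndRun` and its use of indirect `eval` are illustrative, not the benchmark's actual harness:

```javascript
// Sketch: measure parse/compile AND first execution as one interval,
// assuming we control exactly when the engine first sees the source.
// `scriptSource` is a placeholder for the benchmarked payload.
function timeCompileAndRun(scriptSource) {
  performance.mark('load-start');   // before the engine sees the code
  // Indirect eval forces a fresh top-level compile of the source string,
  // so the interval below spans both compilation and execution.
  (0, eval)(scriptSource);
  performance.mark('load-end');     // after the last line has run
  performance.measure('compile+exec', 'load-start', 'load-end');
  const [m] = performance.getEntriesByName('compile+exec');
  return m.duration;                // milliseconds
}
```

A first-line-to-last-line timestamp inside the script itself, by contrast, would start the clock only after compilation had already finished.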
Does this make sense? Do you have other ideas how to measure this (or other thoughts)?
So you've highlighted an important issue here; there's a measurement that the benchmark is not taking into account and this does have the potential to unfairly inflate optimize-js's numbers.
As you say, this would get muddied by network overhead, which would make the tests inconsistent and unreliable. You could use a cached version of the script, but the problem is that some engines may pre-JIT a cached script (I'm told V8 does this).
Due to all these issues, I'd say your first suggestion is probably the best one, but we should do the following:
I think reporting both numbers is important to ensure we don't mislead people on the benefits of optimize-js, so we should definitely do this.
/cc @toddreifsteck who helped walk me through some of the tricky bits of this :)
Oooooh, I love the idea of a blob/dataURL; it hadn't occurred to me, and I agree on all counts about the imperfections of the three ideas I proposed. I didn't even know about using blobs as URLs; that's really neat!
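For readers unfamiliar with the trick, here is a minimal sketch of the blob-URL approach being discussed; `makeScriptBlobUrl` and the mark names are illustrative, and the script-element portion is browser-only:

```javascript
// Serve the benchmarked script from a same-process blob: URL, so network
// variance drops out and the timing window can bracket compile + first run.
function makeScriptBlobUrl(source) {
  const blob = new Blob([source], { type: 'text/javascript' });
  return URL.createObjectURL(blob); // a "blob:..." URL usable as <script src>
}

if (typeof document !== 'undefined') {
  const url = makeScriptBlobUrl('/* benchmarked payload here */');
  performance.mark('script-insert');   // compilation starts after this point
  const s = document.createElement('script');
  s.src = url;
  s.onload = () => {
    performance.mark('script-done');   // compile + first execution finished
    performance.measure('total', 'script-insert', 'script-done');
  };
  document.head.appendChild(s);
}
```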
And (perhaps obviously) I am +1 that this is important to do to make sure the numbers are fair. Thanks!
OK, I believe I have found a better system: creating a Blob URL for the script and loading it dynamically, I set a mark right before inserting the script, so the measured interval covers both compilation and first execution.
I'll update the benchmark and post new numbers and a new version of optimize-js as soon as I can. Testing so far indicates my numbers were indeed overly optimistic, but optimize-js still comes out on top for the most part. Thank you for raising this issue!