We'll keep track of commonly asked questions here. This list will grow over time.
First, designing a benchmark is hard, and although we've worked hard to make RoboHornet as good as possible, we're sure there are areas that can be improved. Second, and most importantly, RoboHornet has community at its very heart, and we wanted to involve the broader community as early as possible so it can help shape the benchmark's development from the beginning.
At this early alpha stage, the benchmark has been tested only on the most recent stable releases of Chrome, Firefox, Internet Explorer, Opera, and Safari. If you're using a different browser, the benchmark may not complete. If you are using one of those browsers, please ensure that pop-up blocking is disabled for robohornet.org. RoboHornet uses pop-ups, and in order to achieve better accuracy it opens them in a way that triggers most pop-up blockers.
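The pop-up check itself is simple to sketch. The helper below is a hypothetical illustration, not RoboHornet's actual code: it takes a `window.open`-style function and reports whether the pop-up appears to have been blocked, since most blockers cause `window.open` to return nothing or an already-closed window.

```javascript
// Hypothetical sketch, not RoboHornet's actual detection code.
// Pass in the environment's window.open-like function; most pop-up
// blockers make it return null/undefined or a window that is already
// closed, which is what we check for here.
function popupBlocked(openFn) {
  var win = openFn('about:blank', '_blank', 'width=100,height=100');
  return !win || win.closed === true;
}
```

In a browser you would call `popupBlocked(window.open.bind(window))` and, if it returns `true`, prompt the user to allow pop-ups for robohornet.org before running the suite.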
Working on mobile is an explicit goal of RoboHornet, as the performance of mobile browsers continues to grow in importance. At this early alpha stage, however, RoboHornet is not guaranteed to be compatible with mobile browsers.
RoboHornet uses the Benchmark.js framework to run and measure its tests. Each of RoboHornet's tests is designed to take, in theory, the same amount of time on every run. Benchmark.js runs a given test as many times as necessary until it is statistically confident that the measured time is close to the "true" time required for that test. This means that tests that naturally exhibit some degree of variance can take a large number of runs to settle on the final "true" time.
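This kind of adaptive sampling can be sketched roughly as follows. This is an illustrative simplification, not Benchmark.js's actual implementation: it keeps timing a test function until the sample mean's relative margin of error drops below a target, so noisier tests naturally accumulate more runs.

```javascript
// Illustrative sketch of adaptive sampling (NOT Benchmark.js internals):
// keep running the test until the mean's ~95% relative margin of error
// falls below targetRME, or we hit maxSamples.
function measure(testFn, maxSamples, targetRME) {
  maxSamples = maxSamples || 100;
  targetRME = targetRME || 0.05;
  var samples = [];
  while (samples.length < maxSamples) {
    var start = Date.now();
    testFn();
    samples.push(Date.now() - start);
    if (samples.length >= 5) {
      var mean = avg(samples);
      if (mean > 0) {
        // Standard error of the mean, scaled to a ~95% confidence
        // interval and expressed relative to the mean.
        var sem = Math.sqrt(variance(samples) / samples.length);
        var rme = (1.96 * sem) / mean;
        if (rme < targetRME) break;
      }
    }
  }
  return { mean: avg(samples), runs: samples.length };
}

function avg(xs) {
  return xs.reduce(function (a, b) { return a + b; }, 0) / xs.length;
}

function variance(xs) {
  var m = avg(xs);
  return xs.reduce(function (a, b) { return a + (b - m) * (b - m); }, 0) /
      (xs.length - 1);
}
```

A test with very stable timings satisfies the confidence check after only a handful of samples, while a high-variance test keeps sampling up to the cap, which is why some RoboHornet tests take noticeably longer to finish than others.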
The term 'micro-benchmark' is generally applied to benchmarks that are small, specially made, and often not representative of real-world performance problems. Over time, as browsers improve on them, these benchmarks can become too small to show meaningful performance differences. Although RoboHornet's benchmarks are small and specially made, they differ from micro-benchmarks because they are directly motivated by real-world performance pain points and are designed to evolve as the browser landscape evolves.
RoboHornet's guidelines specify that every test in the suite should start from an observed, real-world performance problem and then have a benchmark specifically created to "capture" that pain point succinctly and accurately. Over time, each test will be updated as the browser landscape evolves to ensure it continues to capture the pain accurately. For example, a test might need to be updated to defeat micro-optimizations in new browser versions that make the test faster without improving the underlying performance problem. This is why we designed RoboHornet to be a living, dynamic benchmark.
Great! That's one of the reasons RoboHornet is currently an alpha release. You can report the problem as a Technical Advisor, or simply e-mail the discussion list (firstname.lastname@example.org).
There are a few things you can do to ensure it runs as accurately as possible:
That string identifies which version of the benchmark you're running. Although we endeavor to re-normalize the scores for each release of RoboHornet, perfect normalization is impossible. Ultimately, the only truly accurate way to interpret the results is to compare different browsers running the same version of the benchmark on the same hardware. The version number serves as a subtle reminder not to compare scores across different versions of the benchmark.
Last edited by jkomoros.