We'll keep track of commonly asked questions here. This list will grow over time.
Why is RoboHornet currently an "Alpha" release?
First, designing a benchmark is hard; although we've worked to make RoboHornet as good as possible, we're sure there are areas that can be improved. Second, and most importantly, RoboHornet has community at its very heart, and we wanted to involve the broader community as early as possible so it can help shape the benchmark's development from the beginning.
Why didn't the test complete?
At this early alpha stage, the benchmark has been tested only on the most recent stable releases of Chrome, Firefox, Internet Explorer, Opera, and Safari. If you're using a different browser, it may not complete. If you are using one of those browsers, please ensure that pop-up blocking is disabled for robohornet.org. To achieve better accuracy, RoboHornet opens pop-up windows in a way that triggers most pop-up blockers.
Why doesn't this work on my mobile browser?
Supporting mobile browsers is an explicit goal of RoboHornet, as mobile browser performance continues to grow in importance. At this early alpha stage, however, RoboHornet is not guaranteed to be compatible with mobile browsers.
Why do some tests take a long (or variable) amount of time to run?
RoboHornet uses the Benchmark.js framework to run and measure its tests. Each test is designed so that, in theory, it takes the same amount of time on every run. Benchmark.js runs a given test as many times as necessary until it is confident that the measured time is close to the "true" time required for that test. This means that tests which naturally exhibit some variance can take a large number of runs to settle on a final "true" time.
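The adaptive-sampling idea described above can be sketched in plain JavaScript. This is a simplified illustration, not Benchmark.js's actual implementation (which uses more sophisticated statistics); the function name `measure` and the thresholds are purely illustrative:

```javascript
// Hypothetical sketch of adaptive sampling: keep timing a test until the
// relative margin of error of the mean is small enough, or we hit a cap.
function measure(fn, maxSamples = 100, targetRme = 0.05) {
  const samples = [];
  while (samples.length < maxSamples) {
    const start = Date.now();
    fn();
    samples.push(Date.now() - start); // elapsed time for this run, in ms

    if (samples.length >= 5) { // need a few samples before statistics mean anything
      const mean = samples.reduce((a, b) => a + b, 0) / samples.length;
      const variance =
        samples.reduce((a, b) => a + (b - mean) ** 2, 0) / (samples.length - 1);
      const sem = Math.sqrt(variance / samples.length); // standard error of the mean
      const rme = mean > 0 ? (1.96 * sem) / mean : 0;   // ~95% relative margin of error
      if (rme < targetRme) break; // confident enough in the estimate: stop early
    }
  }
  return samples.reduce((a, b) => a + b, 0) / samples.length;
}
```

A test with low run-to-run variance exits this loop after only a few samples, while a noisy test keeps sampling toward the cap, which is why some tests take much longer than others to finish.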
Why do I get very different results on different hardware or operating systems?
Benchmark results are heavily influenced by the hardware and operating system they run on, so this is expected. Scores are only meaningfully comparable between different browsers running the same version of the benchmark on the same machine.
How is RoboHornet different from 'micro-benchmarks'?
The term 'micro-benchmark' is generally applied to benchmarks that are small, specially made, and often not representative of real-world performance problems. Over time, as browsers improve on them, these benchmarks can become too small to show meaningful performance differences. Although RoboHornet's benchmarks are small and specially made, they differ from micro-benchmarks because they are directly motivated by real-world performance pain points and are designed to evolve as the browser landscape evolves.
RoboHornet's guidelines specify that every test in the suite should start from an observed, real-world performance problem and then have a benchmark specifically created to "capture" that pain point succinctly and accurately. Over time, the benchmark will be updated as the browser landscape evolves to ensure it continues to capture the pain accurately. For example, a test might need to be updated to defeat micro-optimizations in new browser versions that make the test faster without addressing the general performance problem. This is why we designed RoboHornet to be a living, dynamic benchmark.
I found a flaw in the design of one of the tests (or the harness).
Great! That's one of the reasons RoboHornet is currently an alpha release. Consider reporting the problem as a Technical Advisor, or simply e-mail the discussion list (email@example.com).
How can I ensure I get the most accurate results?
There are a few things you can do to ensure the benchmark runs as accurately as possible:
- Ensure your computer is up to date and any pending OS updates are installed.
- Ensure the browser you're testing is up to date.
- Restart your computer before running the test.
- Close all other programs.
- If your computer is a laptop, ensure it is plugged in.
- Turn off screen savers, display sleep, and display dim.
- Start the browser fresh before you begin testing and wait a minute or so before beginning the test (some browsers may perform background startup tasks).
- Only have a single tab open, pointing to the benchmark.
What's that RH-A1 in front of the final score?
That string identifies which version of the benchmark you're running. Although we endeavor to re-normalize the scores for each release of RoboHornet, perfect normalization is impossible. We want to remind people that ultimately the only truly accurate way to interpret a score is to compare different browsers running the same version of the benchmark on the same hardware. The version string serves as a subtle reminder not to compare scores across different versions of the benchmark.