There's a new combinator:

    bcompare :: [Benchmark] -> Benchmark

The first 'bench' in the list is the reference benchmark; all other benchmarks are compared against that reference. The comparisons can be written to a CSV file with the -r (or --compare) command line flag. The CSV file uses the following format:

    Reference,Name,% faster than the reference

where the % is currently printed without precision (%.0f).
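A minimal usage sketch, assuming the list-based signature above and that bcompare is exported alongside Criterion.Main; the benchmark names and workloads here are made up for illustration:

    import Criterion.Main

    -- The first 'bench' in the list is the reference; every other
    -- benchmark in the list is compared against it.
    main :: IO ()
    main = defaultMain
      [ bcompare
          [ bench "sum/reference" (whnf sum [1..1000 :: Int])
          , bench "sum/foldr"     (whnf (foldr (+) 0) [1..1000 :: Int])
          ]
      ]

Invoking the compiled binary with -r comparisons.csv (or --compare comparisons.csv) should then write one Reference,Name,% row per non-reference benchmark.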
This stops the Config being passed around as an explicit parameter, which makes the code shorter and cleaner. I've used the mtl library, but all the ReaderT stuff is wrapped up in the new Criterion.Monad module, so it should be possible to swap the implementation (e.g. for transformers) without any trouble.

One of the main complexities of making this change was fixing hPrintf to work with ReaderT Config IO rather than plain IO. This seems to work, and isn't too horrific. It actually cleans up the code that uses the Config to decide whether to print something; that code is now nicer.
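A rough sketch of the shape this gives, assuming mtl's ReaderT; the Config field and the note helper below are illustrative stand-ins, not the module's actual contents:

    {-# LANGUAGE GeneralizedNewtypeDeriving #-}
    module Criterion.Monad (Criterion, getConfig, withConfig) where

    import Control.Monad (when)
    import Control.Monad.Reader
      (MonadIO, MonadReader, ReaderT, ask, liftIO, runReaderT)

    -- Stand-in for the real Config type.
    data Config = Config { cfgVerbose :: Bool }

    -- Wrapping ReaderT in a newtype keeps the mtl dependency confined
    -- to this module, so the underlying implementation could be swapped
    -- (e.g. for transformers) without touching any callers.
    newtype Criterion a = Criterion (ReaderT Config IO a)
      deriving (Functor, Applicative, Monad, MonadIO, MonadReader Config)

    getConfig :: Criterion Config
    getConfig = ask

    withConfig :: Config -> Criterion a -> IO a
    withConfig cfg (Criterion act) = runReaderT act cfg

    -- Printing helpers can now consult the Config themselves instead of
    -- taking it as a parameter; the change adapts hPrintf along
    -- similar lines.
    note :: String -> Criterion ()
    note msg = do
      cfg <- getConfig
      when (cfgVerbose cfg) (liftIO (putStr msg))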
…ls for the mean and stddev for each test
…ot the graphs with a shared axis
… the same X axis scale

The shared X axis is auto-scaled so that it encompasses the data from all the graphs.
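The auto-scaling amounts to taking the extrema over every data series; a minimal sketch under that assumption (the function name is hypothetical, not the plotting code itself):

    -- One X range wide enough to encompass the data from all graphs,
    -- so every plot can share the same axis scale.
    sharedXRange :: [[Double]] -> Maybe (Double, Double)
    sharedXRange series = case concat series of
      [] -> Nothing
      xs -> Just (minimum xs, maximum xs)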