A benchmarking timer
The number of "moving parts" in a modern software stack means that program execution time is non-deterministic. As a result, performance evaluation should take a statistically rigorous approach, using the results of multiple iterations to reduce the impact of outliers.
This is a utility for adding statistical rigour to your program performance evaluations. It repeatedly executes a program for a set amount of time, then reports the mean along with lower and upper confidence values.
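The "mean, lower and upper confidence values" can be derived from the collected timing samples in a few lines. Here is a minimal sketch (not srtime's actual implementation; the function name is hypothetical), assuming approximately normally distributed timings, with z = 1.96 for a ~95% interval:

```python
import math
import statistics

def confidence_interval(samples, z=1.96):
    """Return (lower, mean, upper) bounds for the sample mean.

    z = 1.96 gives a ~95% confidence interval under a
    normality assumption; samples are timings in milliseconds.
    """
    mean = statistics.mean(samples)
    # Standard error of the mean: sample stdev / sqrt(n).
    sem = statistics.stdev(samples) / math.sqrt(len(samples))
    return (mean - z * sem, mean, mean + z * sem)

# Example: 37 simulated timings centred around 103.7 ms.
lower, mean, upper = confidence_interval([103.6, 103.7, 103.8] * 12 + [103.7])
```

As more iterations are collected, the standard error shrinks and the interval tightens, which is why running for longer gives a more trustworthy estimate.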
Here's how you get started:
    $ srtime ./my_benchmark
    95% confidence values from 37 iterations: 103.665 103.699 103.733
Features:

- Millisecond-precision timing of programs.
- User-defined amount of time to collect results for (e.g. 60 seconds), or a minimum number of iterations to perform (e.g. 100).
- Results can be displayed graphically using the
- User-defined confidence intervals, output precision, and output format.
- Can act as a filter for timing critical sections of a program based on its output.
- Supports flushing the host system caches before every invocation of the target program.
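The collection loop implied by the features above — re-run the target until a time budget expires or a minimum iteration count is reached, recording one wall-clock sample per run — can be sketched as follows. This is an illustration only, not srtime's actual implementation, and the function and parameter names are hypothetical:

```python
import subprocess
import sys
import time

def collect_samples(cmd, budget_seconds=60.0, min_iterations=2):
    """Re-run `cmd`, recording one wall-clock timing (in ms) per run,
    until the time budget is spent and at least `min_iterations`
    samples have been collected."""
    samples = []
    deadline = time.monotonic() + budget_seconds
    while time.monotonic() < deadline or len(samples) < min_iterations:
        start = time.monotonic()
        subprocess.run(cmd, check=True, capture_output=True)
        samples.append((time.monotonic() - start) * 1000.0)  # milliseconds
    return samples

# Example: time a trivial Python invocation for 0.2 seconds.
samples = collect_samples([sys.executable, "-c", "pass"], budget_seconds=0.2)
```

Using a monotonic clock avoids corruption of the samples by system clock adjustments during a long collection run.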
For a list of all of the program features, see
To install from source:

    sudo python setup.py install
- Source Code: http://github.com/ChrisCummins/srtime
- Issue Tracker: https://github.com/ChrisCummins/srtime/issues
If you are having issues, please get in touch: firstname.lastname@example.org.