
# Principles

  1. Relative over absolute. It's more useful to compare providers against each other consistently than to create a universal benchmark.
  2. Specific over generic. It's better to benchmark for your own use-case than attempt to capture all scenarios in a single test set.
  3. Usability over completeness. Benchmarking frequently and quickly is more important than spending time creating the perfect rules and data sets.
  4. Portability. The toolkit should be easy to use on a variety of platforms.
  5. Extensibility. The toolkit should be modular and support integration with additional components, especially for other languages.
  6. Automation. It should be possible to perform benchmarking regularly and programmatically with minimum intervention (see the sketch after this list).
  7. Transparency. Decisions we make (for example, normalisation rules) might unavoidably favour some providers over others. To mitigate this, such decisions will be explicitly and clearly documented.
  8. Annotation over documentation. Where possible, documentation should live with the code it describes rather than in separate documents.
  9. Pragmatism. We want to deliver maximum benefit with minimal effort, incrementally.
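
To make principles 1, 6, and 7 concrete, here is a minimal sketch of what automated, relative benchmarking might look like. It is illustrative only, not the toolkit's actual implementation: the file names (`reference.txt`, `hypotheses/*.txt`), the normalisation rule, and the plain word-level WER metric are all assumptions, chosen to show providers being compared against each other programmatically, with the normalisation decision documented in the code itself.

```python
from pathlib import Path

def normalise(text: str) -> str:
    """Normalisation rule, documented in code (principles 7 and 8):
    lower-case and strip punctuation so no provider is penalised
    for formatting choices."""
    return "".join(c for c in text.lower() if c.isalnum() or c.isspace())

def word_error_rate(reference: str, hypothesis: str) -> float:
    """Plain word-level WER: Levenshtein edit distance divided by
    the number of reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    # Row-by-row dynamic programming over the edit-distance table.
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, start=1):
        curr = [i] + [0] * len(hyp)
        for j, h in enumerate(hyp, start=1):
            curr[j] = min(prev[j] + 1,             # deletion
                          curr[j - 1] + 1,         # insertion
                          prev[j - 1] + (r != h))  # substitution
        prev = curr
    return prev[-1] / max(len(ref), 1)

reference = normalise(Path("reference.txt").read_text())
# One hypothesis file per provider, e.g. hypotheses/provider-a.txt
for path in sorted(Path("hypotheses").glob("*.txt")):
    wer = word_error_rate(reference, normalise(path.read_text()))
    print(f"{path.stem}: {wer:.2%}")
```

Run against one hypothesis file per provider, this prints a relative ranking rather than a universal score, which is all the first principle asks for, and it can be scheduled to run unattended whenever new transcripts arrive.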