Best Practices for Benchmarking and Performance Analysis in the Cloud

  • Robert Barnes, AWS

(slides will be available)

  • Has a background in aerospace measurement (his first job?)

  • Lots of ways to measure; you have to think about calibration, accuracy, relevance, and correlation of results with other measurement tools

  • Example of measuring the stage

  • Best benchmark: your app

  • Benchmarking in the cloud is different because of layers of abstraction -> more variability

  • Use a good AMI -- some vary a lot, but highly tested ones don't

  • Comparing on-premises vs. cloud (same benchmarks)

  • Choosing a benchmark (geekbench, ...)

  • Know what you're actually measuring with your tool

  • How do you know when you're done?
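
One way to make "done" concrete is a stopping rule: keep repeating the benchmark until the run-to-run spread stabilizes. A minimal sketch of that idea (not from the talk), assuming a hypothetical `run_benchmark()` function that returns a single numeric score:

```python
import statistics

def run_until_stable(run_benchmark, min_runs=5, max_runs=30, cov_target=0.02):
    """Repeat a benchmark until the coefficient of variation (stdev / mean)
    of the collected scores drops below cov_target, or max_runs is reached."""
    scores = []
    for _ in range(max_runs):
        scores.append(run_benchmark())
        if len(scores) >= min_runs:
            cov = statistics.stdev(scores) / statistics.mean(scores)
            if cov <= cov_target:
                break
    return scores
```

The thresholds above are placeholders, not recommendations from the talk; the point is simply to stop once additional runs no longer change the answer.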

Tests

  • 10 instances...
  • geekbench (blackbox)
  • Testing at scale means you might have to do some thinking about data storage and parsing
  • The filesystem, system calls, etc. can greatly influence CPU (and other) benchmark results
  • SPEC CPU2006 -- published results are available at http://spec.org
  • Think about costs when running tests -- if you don't need it, don't measure it, and remove any iterations you don't really need (e.g. because of running on multiple instances)
  • SPEC has a low COV (coefficient of variation) because of an engineering focus on repeatability (see the sketch after this list)
  • Fast-completing benchmarks are like measuring the stage with the width of a tape measure
  • How much accuracy you need depends on the cost involved -- the bigger the cost, the more accuracy you need
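
This is the sketch referenced from the COV item above: a hedged example of aggregating results at scale. It assumes a hypothetical layout of one plain-text result file per instance with one numeric score per line; real tools such as Geekbench or SPEC emit their own output formats, so the parsing step would differ:

```python
import glob
import statistics

def summarize(results_glob="results/*.txt"):
    """Collect per-instance score files and report mean, standard deviation,
    and coefficient of variation (COV) across all runs."""
    scores = []
    for path in glob.glob(results_glob):
        with open(path) as f:
            scores.extend(float(line) for line in f if line.strip())
    mean = statistics.mean(scores)
    stdev = statistics.stdev(scores) if len(scores) > 1 else 0.0
    return {"runs": len(scores), "mean": mean, "stdev": stdev,
            "cov": stdev / mean if mean else float("nan")}

if __name__ == "__main__":
    print(summarize())
```

A low COV across instances suggests the setup is repeatable; a high one points back at the layers-of-abstraction variability noted above.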




