Future: performance testing


Server-side performance testing is painful; ideally, the user should only need to define what they want to test (i.e. how to generate the load).

Areas discussed

The following parts need to be implemented:

  • distribution
    • JClouds - ssh to the machine and run a script
    • binary distribution (no configuration)
  • performance client (generating load)
    • run as client
    • in-container
  • measurement
    • generic API for collecting metrics
    • number of requests per unit of time
    • JConsole metrics (CPU usage, heap size, ...) - see the JMX sketch after this list
    • generated bandwidth
  • diagnostic tool
    • how to detect that a component performs badly?
  • review tool
    • how to plot a graph?
    • how to offer results for review?
    • how to compare results with previous runs?
    • continuous integration
  • controller
    • warm-up phase
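
For the measurement bullets above, one low-cost starting point is the standard JMX beans the JDK already exposes (the same data JConsole reads). The following is only a sketch; the MetricsSnapshot type and its fields are hypothetical and not part of any existing extension.

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.OperatingSystemMXBean;

/** Hypothetical value object capturing a few JConsole-style metrics. */
public class MetricsSnapshot {

    private final long heapUsedBytes;
    private final double systemLoadAverage;
    private final long timestampMillis;

    private MetricsSnapshot(long heapUsedBytes, double systemLoadAverage, long timestampMillis) {
        this.heapUsedBytes = heapUsedBytes;
        this.systemLoadAverage = systemLoadAverage;
        this.timestampMillis = timestampMillis;
    }

    /** Capture current heap usage and system load via the standard JMX beans. */
    public static MetricsSnapshot capture() {
        MemoryMXBean memory = ManagementFactory.getMemoryMXBean();
        OperatingSystemMXBean os = ManagementFactory.getOperatingSystemMXBean();
        return new MetricsSnapshot(
                memory.getHeapMemoryUsage().getUsed(),
                os.getSystemLoadAverage(), // -1.0 if unavailable on this platform
                System.currentTimeMillis());
    }

    @Override
    public String toString() {
        return timestampMillis + ": heapUsed=" + heapUsedBytes
                + " bytes, loadAvg=" + systemLoadAverage;
    }
}
```

A generic collecting API could then be little more than a scheduled task that takes such snapshots during a test run and hands them to the review tool.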

Proposal

Ideally, the user only needs to define "what to test" - the PerformanceClient. Pre-defined scenarios then describe "how to test" - the Controller. A sketch of this split follows the list below.

Raw API: https://gist.github.com/1892743

  • data from a previous request's response used as input to the next request
  • controllers written using an eventing model similar to the Arquillian SPI
  • slow startup (gradual load increase)
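
The sketch below only illustrates the split between "what to test" and "how to test"; the interface names and shapes are hypothetical and may not match the raw API gist linked above. The RampUpController shows the slow-startup idea in its simplest form.

```java
/** "What to test": supplied by the user. */
interface PerformanceClient {

    /** Issue one logical request against the system under test. */
    void executeRequest() throws Exception;
}

/** "How to test": a pre-defined scenario driving the client. */
interface Controller {

    void run(PerformanceClient client) throws InterruptedException;
}

/** Example scenario: slow startup, gradually increasing the request rate. */
class RampUpController implements Controller {

    private final int maxRequestsPerSecond;
    private final int rampUpSeconds;

    RampUpController(int maxRequestsPerSecond, int rampUpSeconds) {
        this.maxRequestsPerSecond = maxRequestsPerSecond;
        this.rampUpSeconds = rampUpSeconds;
    }

    @Override
    public void run(PerformanceClient client) throws InterruptedException {
        for (int second = 1; second <= rampUpSeconds; second++) {
            int rate = maxRequestsPerSecond * second / rampUpSeconds;
            for (int i = 0; i < rate; i++) {
                try {
                    client.executeRequest();
                } catch (Exception e) {
                    // a real controller would record failures instead of swallowing them
                }
            }
            Thread.sleep(1000); // crude pacing; a real implementation would schedule precisely
        }
    }
}
```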

Request generation

A test suite (request URLs and parameters) needs to be defined first.

Selenium functional tests could be used as input for generating the performance suite descriptor.
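
As an illustration, a suite descriptor entry could be as simple as a method, a URL, and its parameters, recorded while the functional tests run. The RequestDescriptor and PerformanceSuite types below are hypothetical placeholders, not a proposed API.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

/** One recorded request from a functional test run. */
class RequestDescriptor {
    final String method;                  // e.g. GET or POST
    final String url;                     // request URL observed during the test
    final Map<String, String> parameters; // form/query parameters

    RequestDescriptor(String method, String url, Map<String, String> parameters) {
        this.method = method;
        this.url = url;
        this.parameters = parameters;
    }
}

/** The generated performance suite descriptor: an ordered list of requests. */
class PerformanceSuite {
    private final List<RequestDescriptor> requests = new ArrayList<RequestDescriptor>();

    /** Called by whatever records the functional test traffic (e.g. an HTTP proxy). */
    void record(RequestDescriptor request) {
        requests.add(request);
    }

    List<RequestDescriptor> requests() {
        return requests;
    }
}
```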

In-container tests

In-container tests should be evolutionary (a sketch follows the list below):

  • requests are generated inside the container, hitting API methods directly
  • components are measured in isolation, without the influence of other layers (like the view)
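
A minimal sketch of such a test, assuming a plain Arquillian JUnit in-container test; the GreetingService bean, the manual timing, and the threshold are illustrative only and not an existing measurement API.

```java
import static org.junit.Assert.assertTrue;

import javax.inject.Inject;

import org.jboss.arquillian.container.test.api.Deployment;
import org.jboss.arquillian.junit.Arquillian;
import org.jboss.shrinkwrap.api.ShrinkWrap;
import org.jboss.shrinkwrap.api.asset.EmptyAsset;
import org.jboss.shrinkwrap.api.spec.JavaArchive;
import org.junit.Test;
import org.junit.runner.RunWith;

/** Hypothetical CDI bean under test. */
class GreetingService {
    String greet(String name) {
        return "Hello, " + name;
    }
}

@RunWith(Arquillian.class)
public class GreetingServicePerformanceTest {

    @Deployment
    public static JavaArchive deployment() {
        return ShrinkWrap.create(JavaArchive.class)
                .addClass(GreetingService.class)
                .addAsManifestResource(EmptyAsset.INSTANCE, "beans.xml");
    }

    @Inject
    private GreetingService service; // the component in isolation, no view layer involved

    @Test
    public void shouldAnswerQuicklyUnderRepeatedCalls() {
        long start = System.nanoTime();
        for (int i = 0; i < 10000; i++) {
            service.greet("world"); // hitting the API method directly, inside the container
        }
        long elapsedMillis = (System.nanoTime() - start) / 1000000;
        // a real extension would take the threshold from the controller/measurement SPI
        assertTrue("too slow: " + elapsedMillis + " ms", elapsedMillis < 2000);
    }
}
```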

White boards

Whiteboard photos (not reproduced here): "Areas discussed", "Schema".
