Commit 8a80e5b ("Formatting.")
iafonov committed May 29, 2012, 1 parent ae7f75f
Showing 1 changed file with 1 addition and 1 deletion.
@@ -17,7 +17,7 @@ In the application I'm working on now we have ~150 scenarios and we are using se

Recently I've started thinking about possible ways of reducing the suite's running time, and one of the ideas that crossed my mind was using statistical analysis to re-arrange the test suite: try to predict the tests that are going to fail and allow them to [fail faster](http://en.wikipedia.org/wiki/Fail-fast). So far we have collected data from ~500 failed builds, and after processing it I've got a list of the top 5% most frequently failing tests, which we run in a separate pass before running the whole test suite. Running 7-8 scenarios takes significantly less time than running the whole suite.

- If you're good at statistics and math you can build interesting _conspiracy_ theories and even try to use machine learning techniques to analyze the data. I have used the most obvious and straightforward method of ranking failing tests: the more recent the failure, the higher the score it gives a test, so 10 consecutive test failures five months ago count less than 2 failures in recent runs. Assuming that builds are enumerated, each test is given `build\_number/builds_count` points per failure, and after summing and sorting the tests by this ranking I get a list of the tests that are most likely to fail.
+ If you're good at statistics and math you can build interesting _conspiracy_ theories and even try to use machine learning techniques to analyze the data. I have used the most obvious and straightforward method of ranking failing tests: the more recent the failure, the higher the score it gives a test, so 10 consecutive test failures five months ago count less than 2 failures in recent runs. Assuming that builds are enumerated, each test is given `build_number/builds_count` points per failure, and after summing and sorting the tests by this ranking I get a list of the tests that are most likely to fail.
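
To make the ranking concrete, here is a minimal sketch of the scheme described in the changed line above. It is not part of the commit, and the post does not include code; the failure-log shape, the `rank_failing_tests` name, and the 5% cutoff are illustrative assumptions.

```python
# A minimal sketch, not from the commit: rank tests by recency-weighted
# failure counts. Assumes a hypothetical failure log mapping each build
# number (1..builds_count) to the set of scenario names that failed in it.
from collections import defaultdict


def rank_failing_tests(failures_by_build, builds_count, top_fraction=0.05):
    """Give each test build_number / builds_count points per failure, so
    recent failures score higher, then return the top fraction of tests."""
    scores = defaultdict(float)
    for build_number, failed_tests in failures_by_build.items():
        weight = build_number / builds_count  # newer build -> weight closer to 1
        for test in failed_tests:
            scores[test] += weight

    ranked = sorted(scores, key=scores.get, reverse=True)
    top_n = max(1, int(len(ranked) * top_fraction))
    return ranked[:top_n]


# Two old failures of scenario_a weigh less than one recent failure of scenario_b.
failures = {10: {"scenario_a"}, 11: {"scenario_a"}, 498: {"scenario_b"}}
print(rank_failing_tests(failures, builds_count=500))  # ['scenario_b']
```

The scenarios this returns are the ones you would run in the separate, fast-failing pass before the full suite.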

Even if you're not going to re-arrange tests, having such statistics is a very useful thing in itself. A test that fails frequently can usually mean one of two important things:

