
Failed samples and aggregate report #3528

Open
asfimport opened this issue Feb 6, 2015 · 1 comment

@asfimport (Collaborator)

@vlsi (Bug 57545):
I believe JMeter's support for tracking failed samples can be
significantly improved.
Every manual suggests using the "Aggregate Report" ([1]) or its
relatives, yet it silently returns wrong data without any warning.

For instance (sample, response time, status):
Sample1, 9 sec, OK
Sample1, 0 sec, ERR
Sample1, 0 sec, ERR

JMeter would show the "average response time" as 3 seconds, and the
percentiles are distorted the same way: the reported median is 0 seconds,
whereas a meaningful median, computed over the successful samples only,
would be 9 seconds.
Yet it is common for failed requests to run much faster than successful ones.
It does not matter much how fast you can crash; what matters is how fast
the successful responses are.

I see two problems here:

  1. The default configuration computes averages and quantiles over both
    OK and ERR responses, so users get wrong values in the report.
  2. There is no easy way to track OK and ERR separately. One can add two
    copies of Aggregate Report (one for successes, another for failures),
    but is that ever suggested in the documentation? Is that a good user
    experience? One would have to switch back and forth between the two.

The easiest solution seems to be changing the statistics key from "sampleLabel" to "sampleLabel,success", so that successful and failed samples are accumulated in separate buckets.

OS: All
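
A minimal sketch of that keying in plain Java. The Sample and SplitStats types here are hypothetical stand-ins for illustration only, not JMeter's actual classes:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Illustrative stand-in only -- NOT a JMeter class.
record Sample(String label, long elapsedMs, boolean success) {}

class SplitStats {
    // Key is "label,success" instead of "label" alone, so OK and ERR
    // samples never share an average/percentile bucket.
    private final Map<String, List<Long>> buckets = new HashMap<>();

    void add(Sample s) {
        String key = s.label() + "," + s.success(); // e.g. "Sample1,true"
        buckets.computeIfAbsent(key, k -> new ArrayList<>()).add(s.elapsedMs());
    }

    double average(String label, boolean success) {
        List<Long> times = buckets.getOrDefault(label + "," + success, List.of());
        return times.isEmpty() ? Double.NaN
                : times.stream().mapToLong(Long::longValue).average().getAsDouble();
    }

    public static void main(String[] args) {
        SplitStats stats = new SplitStats();
        stats.add(new Sample("Sample1", 9000, true));
        stats.add(new Sample("Sample1", 0, false));
        stats.add(new Sample("Sample1", 0, false));
        System.out.println(stats.average("Sample1", true));  // 9000.0, not a blended 3000.0
        System.out.println(stats.average("Sample1", false)); // 0.0
    }
}
```

Feeding in the three samples from the example yields an average of 9000 ms for (Sample1, OK) and 0 ms for (Sample1, ERR), instead of a blended 3000 ms.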

@asfimport (Collaborator, Author)

@pmouawad (migrated from Bugzilla):
Thanks for the report.
Wouldn't it be better to ignore samplers in error for response-time metrics?
Although this would also exclude the ones that timed out, which is a counterexample to the reported bug (timeouts are failures that run slower, not faster).

Separating OK/KO as you propose, how would you compute the error rate? Or did I misunderstand?

Thanks
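
For reference, splitting the buckets need not lose the error rate: it can be derived from the two buckets' counts alone. A sketch extending the hypothetical SplitStats class above:

```java
// Extends the illustrative SplitStats sketch above (not JMeter code).
// The per-label error rate needs only the sizes of the two buckets,
// so keying timing metrics by (label, success) does not lose it.
double errorRate(String label) {
    int ok    = buckets.getOrDefault(label + ",true",  List.of()).size();
    int err   = buckets.getOrDefault(label + ",false", List.of()).size();
    int total = ok + err;
    return total == 0 ? 0.0 : (double) err / total;
}
```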
