Strange behavior of the dashboard. #103
Comments
Have you tried the latest version of the dashboard?
@zzhao2010 Do you still have the same problem?
Sorry, I can't share the test that produced the screen above because it contains private info. I tried to make a reduced test case with the example from https://test.k6.io/, with options along the lines of the sketch below.
With the time range set to "Last 3 hours" it shows the same total request count and p95 response time, and when I choose a shorter time range the values change. Hope this helps.
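The actual options and the two dashboard states were shared as screenshots that are not preserved here. The following is only a minimal sketch of a comparable k6 script against https://test.k6.io/; the stage durations, VU counts, and threshold are illustrative assumptions, not the reporter's values.

```javascript
// Minimal sketch of a reduced test against https://test.k6.io/.
// Stage durations, VU counts, and the threshold are assumptions for
// illustration only -- the original options are not preserved here.
import http from 'k6/http';
import { check, sleep } from 'k6';

export const options = {
  stages: [
    { duration: '2m', target: 20 }, // ramp up
    { duration: '6m', target: 20 }, // steady state (total run longer than 5 minutes)
    { duration: '2m', target: 0 },  // ramp down
  ],
  thresholds: {
    http_req_duration: ['p(95)<800'], // example threshold only
  },
};

export default function () {
  const res = http.get('https://test.k6.io/');
  check(res, { 'status is 200': (r) => r.status === 200 });
  sleep(1);
}
```

To feed the dashboards, such a script would be run with the Prometheus remote-write output enabled, e.g. `k6 run -o experimental-prometheus-rw script.js` on recent k6 versions.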
🆗, let me check.
@jwcastillo Looks like the issue was fixed with the latest version.
Hi @zzhao2010, may I know how you solved your issue? I am also using v0.2.0 but still see the same problem.
@soolch are you sure you're using the latest version? Did you pull the latest commit from the
Hi @codebien, I have tried it once again with the latest k6 binary, following the k6 documentation, which has been updated to state that this is the official dashboard. But the issue still happens.
However, if I reduce my total test duration to 5m, the result shows correctly.
@jwcastillo, can you take a look at it, please?
Yes, I'll take this.
Hi @jwcastillo, may I know whether you are able to reproduce the same result on your side?
Hi @jwcastillo, could it be because of this stale option?
Hi @soolch,
Hi @codebien, I didn't. It's just that I read about this stale option, which also says 5 minutes. And when I try to look up the results in Prometheus, samples older than 5 minutes disappear, which makes the Grafana results incorrect.
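For context on the 5-minute behaviour: a Prometheus instant query only looks back over the staleness/lookback window (5 minutes by default, configurable with `--query.lookback-delta`), so a series whose newest sample is older than that returns nothing, while a range function over a wider window still sees it. A minimal sketch of the difference, assuming Prometheus at localhost:9090 and the k6 remote-write metric name `k6_http_reqs_total` (both assumptions, adjust for your setup):

```javascript
// Minimal sketch (Node 18+ with built-in fetch, run as an ES module).
// Assumes Prometheus at localhost:9090 and the metric name
// k6_http_reqs_total from the k6 remote-write output -- adjust as needed.
const PROM = 'http://localhost:9090/api/v1/query';

async function query(expr) {
  const res = await fetch(`${PROM}?query=${encodeURIComponent(expr)}`);
  const body = await res.json();
  return body.data.result;
}

// Instant query: only looks back ~5 minutes (--query.lookback-delta),
// so it returns nothing once the series' last sample is older than that.
console.log(await query('k6_http_reqs_total'));

// A range function over a window covering the whole test still sees the data.
console.log(await query('max_over_time(k6_http_reqs_total[3h])'));
```

If a panel relies on an instant query of the raw series, it would appear to lose data for tests longer than roughly 5 minutes, which is consistent with the observation above that a 5m test renders correctly, while range-based queries over the dashboard's time window avoid this. Whether that is what the affected panels actually do is not confirmed in this thread.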
Original issue description
Firstly, thanks for sharing these great dashboards for visualization. They look awesome.
On the other hand, I saw strange behavior while testing the dashboards with my test cases, and I have a question about data accuracy, as the data reported on the dashboards doesn't seem to align with the test results on the command line.
Let's take the first metric, "Request Made", on the "Test Result" dashboard as an example. Two values were reported, which is quite confusing, and neither of them reflected the actual number of requests generated over the test.
And if you look at the P95 Response Time metric on the dashboard, it was 3x faster than the p95 response time reported in the end-of-test summary on the command line.
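One possible source of such a gap, offered here only as a hypothesis and not confirmed against the dashboard's actual queries: the command-line summary computes p95 over every recorded sample of the whole test, whereas a panel that aggregates per-interval p95 values (for example by averaging them over the selected time range) will generally produce a different number, in either direction. A small self-contained sketch of the effect with synthetic data:

```javascript
// Illustration only: aggregating per-window p95 values is not the same as
// computing p95 over all samples. The numbers below are synthetic, not from the issue.
function p95(values) {
  const sorted = [...values].sort((a, b) => a - b);
  return sorted[Math.ceil(0.95 * sorted.length) - 1];
}

// Three flush windows of response times (ms); one window has a slow burst.
const windows = [
  Array.from({ length: 100 }, () => 100),                      // steady 100 ms
  Array.from({ length: 100 }, () => 120),                      // steady 120 ms
  Array.from({ length: 100 }, (_, i) => (i < 90 ? 110 : 900)), // 10% at 900 ms
];

const perWindowP95 = windows.map(p95);                                 // [100, 120, 900]
const avgOfP95s = perWindowP95.reduce((a, b) => a + b, 0) / perWindowP95.length; // ~373 ms
const overallP95 = p95(windows.flat());                                // 120 ms

console.log({ avgOfP95s, overallP95 }); // the two disagree by roughly 3x
```

This only shows that the two ways of summarizing percentiles can diverge substantially; it does not establish what the dashboard panel in question actually computes.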