The metric `k6_http_req_failed` is always 1, rather than a counter, so we can't see a failure count via `increase` or `rate`.
There's a good chance I've misunderstood this metric: is it supposed to be a gauge indicating a scenario failure? Should we add our own error-rate metric?
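If an explicit error-rate series is wanted regardless, one option is a custom `Rate` metric in the test script. A minimal sketch (the URL and the metric name `errors` are illustrative; this runs under k6, not plain Node):

```javascript
import http from 'k6/http';
import { Rate } from 'k6/metrics';

// Custom Rate metric: k6 records the fraction of add() calls
// that received a truthy value, exported as its own series.
const errorRate = new Rate('errors');

export default function () {
  const res = http.get('https://test.k6.io');
  // Count any non-2xx/3xx response as an error.
  errorRate.add(res.status >= 400);
}
```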
Build command:

```shell
go install go.k6.io/xk6/cmd/xk6@latest
xk6 build \
  --with github.com/grafana/xk6-output-prometheus-remote@latest
```
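For reference, running the built binary against a remote-write endpoint could look like this. A sketch only: the endpoint URL is illustrative, and the environment variable name `K6_PROMETHEUS_REMOTE_URL` is an assumption based on the 0.0.x releases of this extension.

```shell
K6_PROMETHEUS_REMOTE_URL=http://localhost:9090/api/v1/write \
  ./k6 run -o output-prometheus-remote script.js
```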
Hi @Limess,
yes, a k6 `Rate` metric like `k6_http_req_failed` is mapped to a gauge, and in this case each series is expected to always be 1 because the `expected_response`, `error_code`, and `status` tags generate individual time series. You should aggregate them to get a single representation.
For example, for the average you could use:

```
sum(sum_over_time(k6_http_req_failed_rate[1m])) by (name)
/
sum(count_over_time(k6_http_req_failed_rate[1m])) by (name)
```
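To make the ratio concrete, here is a small sketch of what that query computes: each tag combination is its own series of 0/1 samples, and the overall failure rate is the total of all samples divided by the total sample count. The series names and values below are invented for illustration.

```javascript
// Per-series 0/1 "rate" samples over a window, keyed by tag set
// (made-up data: 4 successful samples, 2 failed ones).
const seriesSamples = {
  'expected_response=true,status=200': [0, 0, 0, 0],
  'expected_response=false,status=500': [1, 1],
};

// sum(sum_over_time(...)) / sum(count_over_time(...)) collapses the
// per-tag series back into one overall failure fraction.
const all = Object.values(seriesSamples).flat();
const failureRate = all.reduce((sum, v) => sum + v, 0) / all.length;

console.log(failureRate); // 2 failures / 6 samples ≈ 0.333
```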
Alternatively, if you don't care about these tags, you can disable them using the k6 `systemTags` option.
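Dropping the extra tags could look like this in the script options. A sketch, not a recommendation: the list below is a plausible subset of k6's default system tags, so any tag omitted from it (here `expected_response` and `error_code`) stops generating separate series.

```javascript
export const options = {
  // Only these system tags are attached to metrics;
  // expected_response and error_code are left out.
  systemTags: ['proto', 'status', 'method', 'url', 'name'],
};
```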
v0.0.7 is out with important updates. Please upgrade.
Version: xk6-output-prometheus-remote 0.0.6