
How to access built-in metrics after the test completion #351

Closed
1 of 2 tasks
ppcano opened this issue Oct 24, 2017 · 3 comments · Fixed by #1768

Comments

ppcano (Contributor) commented Oct 24, 2017

Related issue at #321

Is there any way to access the built-in metrics in code? Not on a per-request basis, but the averages at the end of the test. I'm looking for a way to output statistics to TeamCity.

Based on the above Slack comment and other similar questions, I think there is a common need to easily access the aggregated metric results after test completion.

bennor commented Oct 27, 2017

Basically all I was looking for was the ability to send build statistics to TeamCity at the very end of the test run. I'm open to other options (e.g. an option to get aggregated data as JSON output), but I like the default output from the tests, so what would be ideal for me would be something like this:

import { metrics } from "k6"; // Or somewhere else logical -- this is just to access the built-in metrics

// A function that (if defined) runs _after_ the script has finished 
// i.e. when all iterations are done and data from all nodes in the cluster have been correlated
export function teardown() {
  const http_req_duration = metrics.http_req_duration.results(); // Just a method to extract the statistics
  console.log(`##teamcity[buildStatisticValue key='AverageDuration' value='${http_req_duration.avg}']`);
}

In the meantime I've worked around it by manually calculating the aggregated data myself (extracting the duration after each request), but this will obviously not work in a clustered scenario or with multiple iterations.
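
A minimal sketch of that kind of workaround (the target URL and the TeamCity statistic key are placeholders, not from any real setup):

import http from "k6/http";

// Per-VU accumulators. Every VU runs in its own JS runtime, so these totals
// only cover the requests made by this single VU, which is why the approach
// breaks down with many iterations or a clustered run.
let totalDuration = 0;
let requestCount = 0;

export default function () {
  const res = http.get("https://example.com/"); // placeholder URL
  totalDuration += res.timings.duration;
  requestCount++;
  // Emit a TeamCity service message with the running average so far.
  console.log(`##teamcity[buildStatisticValue key='AverageDuration' value='${totalDuration / requestCount}']`);
}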

ppcano (Contributor, Author) commented Oct 30, 2017

@bennor

  1. Below are some examples of how to calculate the aggregated results with jq:

k6 run -o json=myscript-output.json myscript.js

# https://unix.stackexchange.com/a/249799
# average
jq '. | select(.type=="Point" and .metric == "http_req_duration" and .data.tags.status >= "200") | .data.value' myscript-output.json | jq -s 'add/length'

# min
jq '. | select(.type=="Point" and .metric == "http_req_duration" and .data.tags.status >= "200") | .data.value' myscript-output.json | jq -s min

# max
jq '. | select(.type=="Point" and .metric == "http_req_duration" and .data.tags.status >= "200") | .data.value' myscript-output.json | jq -s max

# median
jq '. | select(.type=="Point" and .metric == "http_req_duration" and .data.tags.status >= "200") | .data.value' myscript-output.json | sort -n | awk '{a[NR]=$0} END {if (NR%2==1) print a[int(NR/2)+1]; else print (a[NR/2]+a[NR/2+1])/2}'

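For reference, each line in myscript-output.json that these filters match is a JSON object shaped roughly like the following (the field values here are illustrative):

{"type":"Point","metric":"http_req_duration","data":{"time":"2017-10-30T10:00:00.000+01:00","value":123.45,"tags":{"method":"GET","status":"200","url":"https://example.com/"}}}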

> In the meantime I've worked around it by manually calculating the aggregated data myself (extracting the duration after each request), but this will obviously not work in a clustered scenario or with multiple iterations.

  2. I don't understand why this cannot work in a clustered scenario or with multiple iterations; you only have to wait for the k6 run to complete before calculating the values.
  3. If you want the aggregated result to be accessible in code, you could request it as part of the setup and teardown feature.

liclac (Contributor) commented Oct 31, 2017

We could give #194 the ability to do this; the problem is that it'd have implications for clustered execution. Every k6 instance in a test would need to keep its samples in memory and push them to a "main" instance that could then call teardown(), which is a very different topology from "all instances are separate and push to the same datastore, then a random instance calls teardown()". Most importantly, it'd mean that either no instance can be lost during the test (if samples are transmitted at the end of the test) or the main instance can't be (if samples are pushed continuously and the main instance is responsible for committing to the datastore and executing teardown()).

Using jq for stream processing this way is a bad idea because a) the commands get really long, and b) the json collector was meant for debugging, not production use - it has a ridiculous amount of overhead for the sake of readability, and is really, really slow.

One way to solve this would be (as I mentioned in #321) to log in a binary format and offer commands to interact with this format. This is something we can make work in a sane and performant fashion, although a distributed test would end up with one file per node.
