
CSV/Json output is broken when running 5 parallel scripts #163

Closed
andreiopr opened this issue Nov 4, 2022 · 3 comments

Labels
bug Something isn't working

Comments

andreiopr commented Nov 4, 2022

Brief summary

CSV/Json output is broken when running a script on 5 pods

Some output rows are written into the middle or at the end of an existing row.
E.g. http_req_waiting,1666866258,79.782394,,true,,GET,https://appsevus,1666866258,0.000000,,,,,,,,,instance_id=2&job_name=k6-sample-2
Notice that a vus row was inserted in the middle of the URL.
I think the issue occurs because all pods are trying to write to the same file at the same time.

A solution would be to provide a way to generate a separate output for every pod.

k6-operator version or image

latest (8.0.0rc3)

K6 YAML

apiVersion: k6.io/v1alpha1
kind: K6
metadata:
  name: k6-sample
spec:
  parallelism: 5
  script:
    volumeClaim:
      name: k6-pvc
      file: test.js
  arguments: --out csv=/test/Test.csv
  separate: true

Other environment details (if applicable)

No response

Steps to reproduce the problem

  • Run a k6 script using the k6-operator with parallelism: 5 and the output set to CSV (--out csv=/test/Test.csv)
  • Open the CSV output
  • Search for vus until you find a row where vus has been inserted in the middle or at the end of the row (see the grep sketch below)
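
(For reference, a rough way to spot such rows, assuming the default CSV layout where metric_name is the first field of every line, is to grep for lines that mention vus but do not start with a vus or vus_max metric:)

  grep -n 'vus' /test/Test.csv | grep -vE '^[0-9]+:vus(_max)?,'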

Expected behaviour

All rows in the output are displayed correctly.

Actual behaviour

Output is broken; some rows are written over or into other rows.

@andreiopr andreiopr added the bug Something isn't working label Nov 4, 2022
yorugac (Collaborator) commented Nov 4, 2022

@andreiopr thanks for the issue, but as far as I can see, this is actually expected behaviour. You're trying to write to one CSV file in the PVC (mounted as the /test folder) instead of one file per runner, e.g. --out csv=Test.csv. Currently, in the case of CSV and JSON output, it's the responsibility of the k6-operator user to gather the files from the different runners and combine them in some way.
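
As a rough illustration (a sketch based on the manifest above, not an official recipe), only the arguments line would need to change so that each runner writes its own Test.csv inside its own pod instead of into the shared PVC mount:

apiVersion: k6.io/v1alpha1
kind: K6
metadata:
  name: k6-sample
spec:
  parallelism: 5
  script:
    volumeClaim:
      name: k6-pvc
      file: test.js
  # relative path: each runner writes its own local Test.csv instead of the shared /test mount
  arguments: --out csv=Test.csv
  separate: true

The resulting files then have to be gathered from the individual runners before they are cleaned up.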

I guess we could think about how best to parameterize the file name so that you could store all the CSVs in one folder... something like --out csv=/test/output/runner-$i.csv. One solution would be something similar to the env var issue in #162.

If you'd like to see some other behaviour from k6-operator here, please describe it as a feature request. But note that k6-operator does not process metrics itself - that must be done by outside tools. Actually, the quickest way to get the metrics from all pods into one place is to use other outputs, e.g. InfluxDB or Prometheus.
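
As a sketch of that last suggestion (the InfluxDB URL and database name below are placeholders, not something from this issue), the same manifest could point all runners at a single InfluxDB v1 instance so the metrics end up in one place:

apiVersion: k6.io/v1alpha1
kind: K6
metadata:
  name: k6-sample
spec:
  parallelism: 5
  script:
    volumeClaim:
      name: k6-pvc
      file: test.js
  # all runners stream metrics to one shared InfluxDB database; URL and db name are placeholders
  arguments: --out influxdb=http://influxdb.monitoring:8086/k6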

@yorugac yorugac closed this as completed Dec 14, 2022
@patenvy1234

Hey, --out csv=/test/runner.csv is working fine, but --out csv=/test/fins/runner is giving me a "no such file or directory" error even though I do have a directory named fins in my volume.

@patenvy1234

And what is this runner-$i here? How do I pass a unique i?
