
Iteration duration is calculated incorrectly when vus's are more than requests per second #1631

Closed
manojth opened this issue Sep 14, 2020 · 3 comments


manojth commented Sep 14, 2020

Environment

  • k6 version: k6 v0.27.1 (dev build, go1.14.5, darwin/amd64)
  • OS and version: macOS 10.14.6
  • Docker version and image, if applicable: NA

Expected Behavior

The iteration duration should be calculated based only on the VUs that were actually active during the test.

Actual Behavior

When there are more VUs than the rps limit, the iteration_duration values (average, min, med, etc.) are much higher than expected.
Assumption: it looks like idle VUs are being counted when calculating iteration_duration, so the reported duration is higher than the actual time spent per iteration.

Steps to Reproduce the Problem

  1. For any REST endpoint, run a k6 performance test with the following load configuration. Note that the number of VUs is much higher than the rps limit.
export let options = {
    vus: 10,
    rps: 1, /* global requests per second limit */
    duration: '30s',
};
  2. The results show that iteration_duration is very high.
    checks.....................: 100.00% ✓ 117  ✗ 0   
    data_received..............: 92 kB   2.4 kB/s
    data_sent..................: 56 kB   1.4 kB/s
    http_req_blocked...........: avg=35.89ms  min=0s    med=1µs   max=340.6ms  p(90)=116.78ms p(95)=123.12ms
    http_req_connecting........: avg=154.87µs min=0s    med=0s    max=912µs    p(90)=600.2µs  p(95)=636.3µs 
    http_req_duration..........: avg=1.1s     min=1.07s med=1.09s max=1.23s    p(90)=1.12s    p(95)=1.14s   
    http_req_receiving.........: avg=93.66µs  min=59µs  med=86µs  max=220µs    p(90)=130.2µs  p(95)=145.7µs 
    http_req_sending...........: avg=269.71µs min=117µs med=243µs max=523µs    p(90)=386.8µs  p(95)=414.4µs 
    http_req_tls_handshaking...: avg=34.84ms  min=0s    med=0s    max=308.42ms p(90)=115.79ms p(95)=122.17ms
    http_req_waiting...........: avg=1.1s     min=1.07s med=1.09s max=1.23s    p(90)=1.12s    p(95)=1.14s   
    http_reqs..................: 39      0.996525/s
    iteration_duration.........: avg=8.87s    min=1.46s med=9.96s max=10.23s   p(90)=10.02s   p(95)=10.03s  
    iterations.................: 39      0.996525/s
    vus........................: 1       min=1  max=10
    vus_max....................: 10      min=10 max=10
  3. Repeat the test with a lower number of VUs.
export let options = {
    vus: 2,
    rps: 1, /* global requests per second limit */
    duration: '30s',
};

Results

    checks.....................: 100.00% ✓ 93  ✗ 0  
    data_received..............: 27 kB   863 B/s
    data_sent..................: 24 kB   782 B/s
    http_req_blocked...........: avg=13.43ms  min=0s    med=0s    max=296.76ms p(90)=1µs   p(95)=59.92ms
    http_req_connecting........: avg=24µs     min=0s    med=0s    max=454µs    p(90)=0s    p(95)=145µs  
    http_req_duration..........: avg=1.13s    min=1.07s med=1.09s max=1.57s    p(90)=1.19s p(95)=1.33s  
    http_req_receiving.........: avg=85.38µs  min=54µs  med=84µs  max=220µs    p(90)=103µs p(95)=127µs  
    http_req_sending...........: avg=197.19µs min=115µs med=151µs max=595µs    p(90)=307µs p(95)=436.5µs
    http_req_tls_handshaking...: avg=12.79ms  min=0s    med=0s    max=277.48ms p(90)=0s    p(95)=59.59ms
    http_req_waiting...........: avg=1.13s    min=1.07s med=1.09s max=1.57s    p(90)=1.19s p(95)=1.33s  
    http_reqs..................: 31      0.992745/s
    iteration_duration.........: avg=1.97s    min=1.44s med=1.99s max=2.48s    p(90)=2.11s p(95)=2.35s  
    iterations.................: 31      0.992745/s
    vus........................: 1       min=1 max=2
    vus_max....................: 2       min=2 max=2
@manojth manojth added the bug label Sep 14, 2020
@mstoykov
Contributor

This is expected.
Unfortunately, rps is not a great option, and it matches badly with how k6 works: k6 doesn't directly make requests, it executes a script, which in turn makes requests.
So in reality your VUs are always active and going through the iteration; it's just that when they make a request they will "block" and wait for some time so we don't go over the configured rps. This has a lot of problems: it doesn't match what k6 actually does, it uses more resources than required, and in my testing it keeps the actual rps around 5-10% below the configured one (which could probably be optimized somewhat). Also, unfortunately, the time a VU blocks "on" the rps option isn't actually counted towards http_req_blocked :(.

I would argue that the arrival-rate executors we now have are a much better match and are probably what you need. With them you can say that you want X iterations to be started every Y timeUnits, so if you have just 1 HTTP request inside the iteration, you get what rps tries (and, IMO, fails) to do. Also, if you want different rps for different requests that don't need to happen one after the other, you can use multiple executors together with the exec option.
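A minimal sketch of what that could look like with the constant-arrival-rate executor (the endpoint URL and scenario name here are placeholders, not from the original report):

```javascript
import http from 'k6/http';

export let options = {
  scenarios: {
    one_rps: {
      executor: 'constant-arrival-rate',
      rate: 1,            // start `rate` new iterations...
      timeUnit: '1s',     // ...every `timeUnit`, i.e. 1 iteration/s
      duration: '30s',
      preAllocatedVUs: 2, // VUs allocated up front
      maxVUs: 10,         // scale up to this many VUs if iterations fall behind
    },
  },
};

export default function () {
  // With a single request per iteration, 1 iteration/s is effectively 1 rps,
  // and iteration_duration reflects only the time the iteration actually ran.
  http.get('https://test.k6.io/');
}
```

Because iterations are started at a fixed rate rather than VUs waiting on an rps throttle, idle time no longer inflates iteration_duration.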

I am not going to close this, as given the above I would argue rps should now be deprecated and print a warning that the arrival-rate executors should be used instead. I would also argue we should not support it in the new HTTP API, whenever we actually get to it :(.

@manojth
Author

manojth commented Sep 17, 2020

Thanks for your detailed analysis, @mstoykov. Really appreciate it. Going forward, I will use the arrival-rate executors.
Also, printing a warning for rps would be great!

@na--
Member

na-- commented Jan 13, 2021

Closing this in favor of grafana/k6-docs#187

@na-- na-- closed this as completed Jan 13, 2021