This repository has been archived by the owner on Feb 1, 2024. It is now read-only.

Update tile load test script #1770

Merged

jwalgran merged 4 commits into develop from feture/jcw/tile-loat-test-updates on Mar 25, 2022
Conversation

@jwalgran (Contributor) commented on Mar 24, 2022

Update the tile request load testing scripts to

  1. Allow running multiple sets of requests in parallel
  2. Break the cache for each set of parallel requests to make the load as aggressive as possible

Connects #1776

Demo

> VU_MULTIPLIER=10 \
> CACHE_KEY_SUFFIX=$(echo $(date) | sha1sum | cut -c1-8) \
> docker-compose -f docker-compose.yml run --rm \
>   k6 run /scripts/zoom_rio_de_janerio_with_contributor_filter.js

          /\      |‾‾| /‾‾/   /‾‾/
     /\  /  \     |  |/  /   /  /
    /  \/    \    |     (   /   ‾‾\
   /          \   |  |\  \ |  (‾)  |
  / __________ \  |__| \__\ \_____/ .io

  execution: local
     script: /scripts/zoom_rio_de_janerio_with_contributor_filter.js
     output: -

  scenarios: (100.00%) 1 scenario, 100 max VUs, 10m30s max duration (incl. graceful stop):
           * default: 100 iterations shared among 100 VUs (maxDuration: 10m0s, gracefulStop: 30s)


running (00m16.3s), 000/100 VUs, 100 complete and 0 interrupted iterations
default ✓ [======================================] 100 VUs  00m16.3s/10m0s  100/100 shared iters

     ✓ is status 200

     checks.........................: 100.00% ✓ 950       ✗ 0
     data_received..................: 3.5 MB  213 kB/s
     data_sent......................: 325 kB  20 kB/s
   ✓ failed requests................: 0.00%   ✓ 0         ✗ 950
     http_req_blocked...............: avg=29.13ms  min=88ns     med=264ns    max=1.02s    p(90)=164.73ms p(95)=196.82ms
     http_req_connecting............: avg=10.99ms  min=0s       med=0s       max=161.78ms p(90)=82.33ms  p(95)=98.8ms
     http_req_duration..............: avg=2.24s    min=196.15ms med=2.38s    max=3.93s    p(90)=3.35s    p(95)=3.59s
       { expected_response:true }...: avg=2.24s    min=196.15ms med=2.38s    max=3.93s    p(90)=3.35s    p(95)=3.59s
     http_req_failed................: 0.00%   ✓ 0         ✗ 950
     http_req_receiving.............: avg=101.85µs min=20.42µs  med=83.64µs  max=2.18ms   p(90)=164.75µs p(95)=208.28µs
     http_req_sending...............: avg=141.91µs min=23.51µs  med=102.45µs max=2.56ms   p(90)=262.91µs p(95)=360.88µs
     http_req_tls_handshaking.......: avg=10.55ms  min=0s       med=0s       max=197.98ms p(90)=72.39ms  p(95)=97.47ms
     http_req_waiting...............: avg=2.24s    min=195.6ms  med=2.38s    max=3.93s    p(90)=3.35s    p(95)=3.59s
     http_reqs......................: 950     58.341884/s
     iteration_duration.............: avg=10.59s   min=4.67s    med=9.99s    max=16.24s   p(90)=15.25s   p(95)=16.1s
     iterations.....................: 100     6.141251/s
     vus............................: 9       min=9       max=100
     vus_max........................: 100     min=100     max=100

Testing Instructions

  • cd into load-tests
  • Run the example from the README and verify that it completes:

    VU_MULTIPLIER=10 \
    CACHE_KEY_SUFFIX=$(echo $(date) | sha1sum | cut -c1-8) \
    docker-compose -f docker-compose.yml run --rm \
      k6 run /scripts/zoom_rio_de_janerio_with_contributor_filter.js
  • Repeat the test run with VU_MULTIPLIER=20. You should see higher latency and some failed responses.
  • View the staging CloudWatch dashboard to verify the increases in request count, latency, and DB CPU.

Checklist

  • fixup! commits have been squashed
  • CI passes after rebase
  • CHANGELOG.md updated with summary of features or fixes, following Keep a Changelog guidelines

@rajadain (Contributor)

Taking a look

@rajadain (Contributor) left a comment

+1 tested. Tried with 10, 20, and 30 virtual users. Got 0%, 2%, and 20% failed requests, respectively. This is a great setup; hopefully it can inform future scaling decisions.

load-tests/zoom_rio_de_janerio_with_contributor_filter.js (Outdated)
  const url = entry.request.url.replace(
    pattern,
-   `${cacheKey}${__ENV.CACHE_KEY_SUFFIX}/$2/$3/$4.pbf?$5`
+   `${cacheKey}-vu-${__VU}-${__ENV.CACHE_KEY_SUFFIX}/$2/$3/$4.pbf?$5`
  );
@rajadain (Contributor):

It took me a while to figure out where __VU was defined: https://k6.io/docs/using-k6/execution-context-variables/#_vu-and-_iter-discouraged. It is seemingly discouraged by the docs, but works as expected here.

@jwalgran (Author):

I noticed the discouragement, too, but continued using __VU since it was already used successfully elsewhere in the script.
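
A minimal sketch of how this per-VU cache busting could look in a standalone k6 script; the URL, regex pattern, and base cache key below are hypothetical stand-ins for illustration, not the actual values used in zoom_rio_de_janerio_with_contributor_filter.js:

import http from 'k6/http';

// Hypothetical pattern and base key, for illustration only.
const pattern = /tile\/([^/]+)\/(\d+)\/(\d+)\/(\d+)\.pbf\?(.*)$/;
const cacheKey = 'loadtest';

export default function () {
  // Hypothetical recorded tile request URL.
  const recordedUrl =
    'https://example.com/tile/abc123/12/1234/2048.pbf?contributors=1';

  // __VU is the ID of the current virtual user and CACHE_KEY_SUFFIX is set
  // once per run, so each (run, VU) pair requests a URL the tile cache has
  // never seen before.
  const url = recordedUrl.replace(
    pattern,
    `tile/${cacheKey}-vu-${__VU}-${__ENV.CACHE_KEY_SUFFIX}/$2/$3/$4.pbf?$5`
  );

  http.get(url);
}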

@rajadain assigned jwalgran and unassigned rajadain on Mar 25, 2022
@rajadain (Contributor)

It is also interesting to see the staging load metrics:

[screenshot: staging CloudWatch load metrics]

For higher VUs, the ramp up was much steeper.

The tile load test uses k6 virtual users to run different batches of requests
with an increasing delay. To simulate multiple users in parallel we add a
`VU_MULTIPLIER` variable that sets the desired number of parallel executions.
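
A minimal sketch of how a VU_MULTIPLIER-driven configuration could look, assuming a base of 10 VUs and 10 iterations (consistent with the demo output above, where VU_MULTIPLIER=10 produced 100 iterations shared among 100 VUs); the scenario shape and placeholder iteration body are illustrative, not the script's actual code:

import { sleep } from 'k6';

// Scale the virtual users and iterations by the VU_MULTIPLIER env var.
const multiplier = parseInt(__ENV.VU_MULTIPLIER || '1', 10);

export const options = {
  scenarios: {
    default: {
      executor: 'shared-iterations',
      vus: 10 * multiplier,        // assumed base of 10 VUs
      iterations: 10 * multiplier, // assumed base of 10 iterations
      maxDuration: '10m',
      gracefulStop: '30s',
    },
  },
};

export default function () {
  // Placeholder iteration body; the real script issues batches of tile requests.
  sleep(1);
}
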
@jwalgran force-pushed the feture/jcw/tile-loat-test-updates branch from 66bd1be to 95fd45c on March 25, 2022 19:44
Adding the VU index into the cache key attempts to simulate a tile fetching
situation that is as resource intensive as possible, where each batch of tile
requests from each parallel user always misses the cache.
The path to the script needs a leading "/"
@jwalgran force-pushed the feture/jcw/tile-loat-test-updates branch from 95fd45c to 4b8df50 on March 25, 2022 19:49
@jwalgran (Author)

Thanks for the review.

@jwalgran merged commit b35c5b8 into develop on Mar 25, 2022
@jwalgran deleted the feture/jcw/tile-loat-test-updates branch on March 25, 2022 19:59