Kyverno Version
1.10.2
Description
Fixes and Changes
Here are the optimizations made (I will create separate issues for each, with details):
Details
Kyverno 1.10.3 shows high latencies when handling a large number of requests. The expected behavior is that Kyverno adds a maximum of a few seconds of overhead, and scales well under load.
I performed tests on a large machine, with 1-6 replicas, but am also able to reproduce and troubleshoot the issues on my local system with a single replica.
The setup was as follows:
- Configure all pod security policies in Enforce mode with the following match clause and removal of the precondition (a sketch of such a policy follows this list).
- Scale down all controllers except the Kyverno admission controller.
- Use the flags:
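The exact policies and match clause are not reproduced here. As a rough sketch (the policy name, rule name, validation pattern, and Pod-only match scope below are assumptions), an enforced policy looked along these lines:

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: disallow-privileged-containers  # hypothetical; any of the pod security policies applies
spec:
  validationFailureAction: Enforce      # all policies switched to Enforce mode for the test
  rules:
    - name: check-privileged
      match:
        any:
          - resources:
              kinds:
                - Pod                   # assumed scope; the original match clause is not shown
      # The stock policies ship with a precondition on the request operation;
      # it was removed here so that every matching request is evaluated.
      validate:
        message: "Privileged containers are not allowed."
        pattern:
          spec:
            containers:
              - =(securityContext):
                  =(privileged): "false"
```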
I ran a modified version of the load test at https://github.com/kyverno/load-testing/tree/main/k6. The test creates pods with the label app: k6-test and expects a 400 response, which corresponds to a blocked request. A sketch of this kind of test is shown below.
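For illustration, a minimal sketch of such a k6 script, not the actual test from the repository; the API server address, token handling, namespace, and pod spec are assumptions:

```javascript
import http from 'k6/http';
import { check } from 'k6';

// Assumed API server address and auth token; the real test is in kyverno/load-testing.
const API = __ENV.K6_API_SERVER || 'https://127.0.0.1:6443';
const PARAMS = {
  headers: {
    'Content-Type': 'application/json',
    Authorization: `Bearer ${__ENV.K6_TOKEN}`,
  },
};

export default function () {
  // A pod that the enforced policies should reject.
  const pod = {
    apiVersion: 'v1',
    kind: 'Pod',
    metadata: { generateName: 'k6-test-', labels: { app: 'k6-test' } },
    spec: { containers: [{ name: 'nginx', image: 'nginx' }] },
  };
  const res = http.post(
    `${API}/api/v1/namespaces/default/pods`,
    JSON.stringify(pod),
    PARAMS,
  );
  // A 400 means the admission webhook denied the request, i.e. the policy blocked it.
  check(res, { 'verify response code of POST is 400': (r) => r.status === 400 });
}
```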
Here are the numbers:

Run 1: avg=2.77s; 99.90% success

```
✗ verify response code of POST is 400
   ↳  99% — ✓ 999 / ✗ 1

█ teardown

checks.........................: 99.90% ✓ 999 ✗ 1
data_received..................: 2.3 MB 80 kB/s
data_sent......................: 380 kB 13 kB/s
http_req_blocked...............: avg=21.1ms min=1.29µs med=2.75µs max=378.49ms p(90)=9.21ms p(95)=200.16ms
http_req_connecting............: avg=8.91ms min=0s med=0s max=295.54ms p(90)=392.17µs p(95)=85.6ms
http_req_duration..............: avg=2.77s min=106.31ms med=2.47s max=10s p(90)=4.79s p(95)=5.48s
  { expected_response:true }...: avg=2.76s min=106.31ms med=2.47s max=9.37s p(90)=4.78s p(95)=5.47s
...
```

Run 2: avg=6.85s; 72.59% success

```
✗ verify response code of POST is 400
   ↳  72% — ✓ 1452 / ✗ 548

█ teardown

checks.........................: 72.59% ✓ 1452 ✗ 548
data_received..................: 3.9 MB 68 kB/s
data_sent......................: 822 kB 14 kB/s
http_req_blocked...............: avg=39.57ms min=1.2µs med=2.79µs max=710.22ms p(90)=201.75ms p(95)=317.29ms
http_req_connecting............: avg=16.12ms min=0s med=0s max=521.14ms p(90)=81.01ms p(95)=122.15ms
http_req_duration..............: avg=6.85s min=209.3ms med=7.09s max=12.82s p(90)=10s p(95)=10s
  { expected_response:true }...: avg=5.62s min=209.3ms med=5.58s max=12.53s p(90)=9s p(95)=9.55s
```

I made a number of optimizations (detailed in the issue list above). Here are the results with the optimized image:
Run 1: avg=247.29ms; 100% pass
Run 2: avg=671.02ms; 100% pass

Slack discussion
No response
Troubleshooting
I have searched other issues in this repository and mine is not recorded.