diff --git a/tests/results/dp-perf/2.2.0/2.2.0-oss.md b/tests/results/dp-perf/2.2.0/2.2.0-oss.md new file mode 100644 index 0000000000..cb1b27236e --- /dev/null +++ b/tests/results/dp-perf/2.2.0/2.2.0-oss.md @@ -0,0 +1,93 @@ +# Results + +## Test environment + +NGINX Plus: false + +NGINX Gateway Fabric: + +- Commit: 9fbef714ea22a35c4f1a8c97bd5b4e406ae0c1e9 +- Date: 2025-10-21T10:57:37Z +- Dirty: false + +GKE Cluster: + +- Node count: 12 +- k8s version: v1.33.5-gke.1080000 +- vCPUs per node: 16 +- RAM per node: 65851524Ki +- Max pods per node: 110 +- Zone: us-west1-b +- Instance Type: n2d-standard-16 + +## Summary: + +- 4 out of 5 tests showed slight latency increases, consistent with the trend noted in the 2.1.0 summary +- The latency differences are minimal overall, with most changes under 1%. +- The POST method routing increase of ~2.2% is the most significant change, though still relatively small in absolute terms (~21µs). +- All tests maintained 100% success rates with similar throughput (~1000 req/s), indicating that the slight latency variations are likely within normal performance variance. + +## Test1: Running latte path based routing + +```text +Requests [total, rate, throughput] 30000, 1000.04, 1000.01 +Duration [total, attack, wait] 30s, 29.999s, 925.889µs +Latencies [min, mean, 50, 90, 95, 99, max] 681.943µs, 926.463µs, 901.993µs, 1.011ms, 1.053ms, 1.244ms, 30.638ms +Bytes In [total, mean] 4770000, 159.00 +Bytes Out [total, mean] 0, 0.00 +Success [ratio] 100.00% +Status Codes [code:count] 200:30000 +Error Set: +``` + +## Test2: Running coffee header based routing + +```text +Requests [total, rate, throughput] 30000, 1000.01, 999.98 +Duration [total, attack, wait] 30.001s, 30s, 905.82µs +Latencies [min, mean, 50, 90, 95, 99, max] 733.55µs, 951.898µs, 926.202µs, 1.037ms, 1.082ms, 1.248ms, 24.506ms +Bytes In [total, mean] 4800000, 160.00 +Bytes Out [total, mean] 0, 0.00 +Success [ratio] 100.00% +Status Codes [code:count] 200:30000 +Error Set: +``` + +## Test3: Running coffee query based routing + +```text +Requests [total, rate, throughput] 30000, 1000.04, 1000.01 +Duration [total, attack, wait] 30s, 29.999s, 885.866µs +Latencies [min, mean, 50, 90, 95, 99, max] 742.259µs, 965.539µs, 933.535µs, 1.04ms, 1.087ms, 1.345ms, 26.261ms +Bytes In [total, mean] 5040000, 168.00 +Bytes Out [total, mean] 0, 0.00 +Success [ratio] 100.00% +Status Codes [code:count] 200:30000 +Error Set: +``` + +## Test4: Running tea GET method based routing + +```text +Requests [total, rate, throughput] 30000, 1000.01, 999.98 +Duration [total, attack, wait] 30.001s, 30s, 879.736µs +Latencies [min, mean, 50, 90, 95, 99, max] 732.423µs, 938.723µs, 917.416µs, 1.022ms, 1.066ms, 1.241ms, 21.039ms +Bytes In [total, mean] 4710000, 157.00 +Bytes Out [total, mean] 0, 0.00 +Success [ratio] 100.00% +Status Codes [code:count] 200:30000 +Error Set: +``` + +## Test5: Running tea POST method based routing + +```text +Requests [total, rate, throughput] 30000, 1000.04, 1000.01 +Duration [total, attack, wait] 30s, 29.999s, 880.839µs +Latencies [min, mean, 50, 90, 95, 99, max] 725.559µs, 962.748µs, 938.978µs, 1.053ms, 1.098ms, 1.261ms, 23.289ms +Bytes In [total, mean] 4710000, 157.00 +Bytes Out [total, mean] 0, 0.00 +Success [ratio] 100.00% +Status Codes [code:count] 200:30000 +Error Set: +``` diff --git a/tests/results/dp-perf/2.2.0/2.2.0-plus.md b/tests/results/dp-perf/2.2.0/2.2.0-plus.md new file mode 100644 index 0000000000..bec6f79830 --- /dev/null +++ b/tests/results/dp-perf/2.2.0/2.2.0-plus.md @@ -0,0 +1,96 @@ +# Results + 
+## Test environment + +NGINX Plus: true + +NGINX Gateway Fabric: + +- Commit: 9fbef714ea22a35c4f1a8c97bd5b4e406ae0c1e9 +- Date: 2025-10-21T10:57:37Z +- Dirty: false + +GKE Cluster: + +- Node count: 12 +- k8s version: v1.33.5-gke.1080000 +- vCPUs per node: 16 +- RAM per node: 65851524Ki +- Max pods per node: 110 +- Zone: us-west1-b +- Instance Type: n2d-standard-16 + +## Summary: + +- Average latency increased across all tests +- Largest Increase: Header-based routing (+76.461µs, +8.60%) +- Smallest Increase: Path-based routing (+28.988µs, +3.26%) +- Average Overall Increase: ~51.1µs (+5.69% average across all tests) +- Most Impacted: Header and query-based routing (8.60% and 5.91% respectively) +- Method Routing: GET and POST both increased by ~5.3% +- All tests maintained 100% success rate, similar throughput and similar max latencies + +## Test1: Running latte path based routing + +```text +Requests [total, rate, throughput] 30000, 1000.09, 1000.06 +Duration [total, attack, wait] 29.998s, 29.997s, 893.093µs +Latencies [min, mean, 50, 90, 95, 99, max] 702.667µs, 917.554µs, 892.32µs, 1.016ms, 1.066ms, 1.254ms, 21.001ms +Bytes In [total, mean] 4740000, 158.00 +Bytes Out [total, mean] 0, 0.00 +Success [ratio] 100.00% +Status Codes [code:count] 200:30000 +Error Set: +``` + +## Test2: Running coffee header based routing + +```text +Requests [total, rate, throughput] 30000, 1000.04, 1000.01 +Duration [total, attack, wait] 30s, 29.999s, 883.984µs +Latencies [min, mean, 50, 90, 95, 99, max] 752.053µs, 964.976µs, 939.422µs, 1.067ms, 1.123ms, 1.313ms, 16.259ms +Bytes In [total, mean] 4770000, 159.00 +Bytes Out [total, mean] 0, 0.00 +Success [ratio] 100.00% +Status Codes [code:count] 200:30000 +Error Set: +``` + +## Test3: Running coffee query based routing + +```text +Requests [total, rate, throughput] 30000, 1000.04, 1000.01 +Duration [total, attack, wait] 30s, 29.999s, 916.972µs +Latencies [min, mean, 50, 90, 95, 99, max] 745.707µs, 955.274µs, 931.109µs, 1.052ms, 1.102ms, 1.287ms, 17.84ms +Bytes In [total, mean] 5010000, 167.00 +Bytes Out [total, mean] 0, 0.00 +Success [ratio] 100.00% +Status Codes [code:count] 200:30000 +Error Set: +``` + +## Test4: Running tea GET method based routing + +```text +Requests [total, rate, throughput] 30000, 1000.01, 999.98 +Duration [total, attack, wait] 30.001s, 30s, 938.936µs +Latencies [min, mean, 50, 90, 95, 99, max] 723.854µs, 955.401µs, 930.464µs, 1.057ms, 1.114ms, 1.306ms, 18.287ms +Bytes In [total, mean] 4680000, 156.00 +Bytes Out [total, mean] 0, 0.00 +Success [ratio] 100.00% +Status Codes [code:count] 200:30000 +Error Set: +``` + +## Test5: Running tea POST method based routing + +```text +Requests [total, rate, throughput] 30000, 1000.04, 1000.01 +Duration [total, attack, wait] 30s, 29.999s, 888.406µs +Latencies [min, mean, 50, 90, 95, 99, max] 736.512µs, 956.475µs, 925.958µs, 1.049ms, 1.105ms, 1.293ms, 21.232ms +Bytes In [total, mean] 4680000, 156.00 +Bytes Out [total, mean] 0, 0.00 +Success [ratio] 100.00% +Status Codes [code:count] 200:30000 +Error Set: +``` diff --git a/tests/results/longevity/2.2.0/2.2.0-oss.md b/tests/results/longevity/2.2.0/2.2.0-oss.md new file mode 100644 index 0000000000..a327c7c81f --- /dev/null +++ b/tests/results/longevity/2.2.0/2.2.0-oss.md @@ -0,0 +1,92 @@ +# Results + +## Test environment + +NGINX Plus: false + +NGINX Gateway Fabric: + +- Commit: e4eed2dad213387e6493e76100d285483ccbf261 +- Date: 2025-10-17T14:41:02Z +- Dirty: false + +GKE Cluster: + +- Node count: 3 +- k8s version: v1.33.5-gke.1080000 +- vCPUs per node: 
2 +- RAM per node: 4015668Ki +- Max pods per node: 110 +- Zone: europe-west2-a +- Instance Type: e2-medium + +## Summary: + +- Still a lot of non-2xx or 3xx responses, but vastly improved over the last test run. +- This indicates that while most of the Agent-to-control-plane connection issues have been resolved, some issues remain. +- All the observed 502s happened within a single window of time, which at least indicates the system was able to recover, although it is unclear what triggered the NGINX restart. +- The increase in memory usage for NGF seen in the previous test run appears to have been resolved. +- We observe a steady increase in NGINX memory usage over time, which could indicate a memory leak. +- CPU usage remained consistent with past results. +- Errors seem to be related to a cluster upgrade or some other external factor (excluding the resolved InferencePool status error). + +## Traffic + +HTTP: + +```text +Running 5760m test @ http://cafe.example.com/coffee + 2 threads and 100 connections + Thread Stats Avg Stdev Max +/- Stdev + Latency 202.19ms 150.51ms 2.00s 83.62% + Req/Sec 272.67 178.26 2.59k 63.98% + 183598293 requests in 5760.00m, 62.80GB read + Socket errors: connect 0, read 338604, write 82770, timeout 57938 + Non-2xx or 3xx responses: 33893 +Requests/sec: 531.24 +Transfer/sec: 190.54KB +``` + +HTTPS: + +```text +Running 5760m test @ https://cafe.example.com/tea + 2 threads and 100 connections + Thread Stats Avg Stdev Max +/- Stdev + Latency 189.21ms 108.25ms 2.00s 66.82% + Req/Sec 271.64 178.03 1.96k 63.33% + 182905321 requests in 5760.00m, 61.55GB read + Socket errors: connect 10168, read 332301, write 0, timeout 96 +Requests/sec: 529.24 +Transfer/sec: 186.76KB +``` + +## Key Metrics + +### Containers memory + +![oss-memory.png](oss-memory.png) + +### Containers CPU + +![oss-cpu.png](oss-cpu.png) + +## Error Logs + +### nginx-gateway + +- msg: Config apply failed, rolling back config; error: error getting file data for name:"/etc/nginx/conf.d/http.conf" hash:"Luqynx2dkxqzXH21wmiV0nj5bHyGiIq7/2gOoM6aKew=" permissions:"0644" size:5430: rpc error: code = NotFound desc = file not found -> happened twice in the 4 days, related to Agent reconciliation during token rotation + - {hashFound: jmeyy1p+6W1icH2x2YGYffH1XtooWxvizqUVd+WdzQ4=, hashWanted: Luqynx2dkxqzXH21wmiV0nj5bHyGiIq7/2gOoM6aKew=, level: debug, logger: nginxUpdater.fileService, msg: File found had wrong hash, ts: 2025-10-18T18:11:24Z} + - The error indicates the Agent requested a file that had since changed + +- msg: Failed to update lock optimistically: the server was unable to return a response in the time allotted, but may still be processing the request (put leases.coordination.k8s.io ngf-longevity-nginx-gateway-fabric-leader-election), falling back to slow path -> same leader election error as on plus, seems out of scope of our product + +- msg: no matches for kind "InferencePool" in version "inference.networking.k8s.io/v1" -> Thousands of these, but fixed in PR 4104 + +### nginx + +Traffic: nearly 34000 502s + +- These all happened in the same window of less than a minute (approx 2025-10-18T18:11:11 - 2025-10-18T18:11:50), and resolved once NGINX restarted +- It's unclear what triggered NGINX to restart, though it does appear a memory spike was observed around this time +- The outage correlates with the config apply error seen in the control plane logs diff --git a/tests/results/longevity/2.2.0/2.2.0-plus.md b/tests/results/longevity/2.2.0/2.2.0-plus.md new file mode 100644 index 0000000000..e2bcd6dda1 --- /dev/null +++ 
b/tests/results/longevity/2.2.0/2.2.0-plus.md @@ -0,0 +1,96 @@ +# Results + +## Test environment + +NGINX Plus: true + +NGINX Gateway Fabric: + +- Commit: e4eed2dad213387e6493e76100d285483ccbf261 +- Date: 2025-10-17T14:41:02Z +- Dirty: false + +GKE Cluster: + +- Node count: 3 +- k8s version: v1.33.5-gke.1080000 +- vCPUs per node: 2 +- RAM per node: 4015668Ki +- Max pods per node: 110 +- Zone: europe-west2-a +- Instance Type: e2-medium + +## Summary: + +- Total of 5 502s observed across the 4 days of the test run +- The increase in memory usage for NGF seen in the previous test run appears to have been resolved. +- We observe a steady increase in NGINX memory usage over time, which could indicate a memory leak. +- CPU usage remained consistent with past results. +- Errors seem to be related to a cluster upgrade or some other external factor (excluding the resolved InferencePool status error). + +## Key Metrics + +### Containers memory + +![plus-memory.png](plus-memory.png) + +### Containers CPU + +![plus-cpu.png](plus-cpu.png) + +## Traffic + +HTTP: + +```text +Running 5760m test @ http://cafe.example.com/coffee + 2 threads and 100 connections + Thread Stats Avg Stdev Max +/- Stdev + Latency 203.71ms 108.67ms 2.00s 66.92% + Req/Sec 257.95 167.36 1.44k 63.57% + 173901014 requests in 5760.00m, 59.64GB read + Socket errors: connect 0, read 219, write 55133, timeout 27 + Non-2xx or 3xx responses: 4 +Requests/sec: 503.19 +Transfer/sec: 180.96KB +``` + +HTTPS: + +```text +Running 5760m test @ https://cafe.example.com/tea + 2 threads and 100 connections + Thread Stats Avg Stdev Max +/- Stdev + Latency 203.89ms 108.72ms 1.89s 66.92% + Req/Sec 257.52 167.02 1.85k 63.64% + 173632748 requests in 5760.00m, 58.61GB read + Socket errors: connect 7206, read 113, write 0, timeout 0 + Non-2xx or 3xx responses: 1 +Requests/sec: 502.41 +Transfer/sec: 177.84KB +``` + +## Error Logs + +### nginx-gateway + +msg: Failed to update lock optimistically: the server was unable to return a response in the time allotted, but may still be processing the request (put leases.coordination.k8s.io ngf-longevity-nginx-gateway-fabric-leader-election), falling back to slow path -> same leader election error as on oss, seems out of scope of our product + +msg: Get "https://34.118.224.1:443/apis/gateway.networking.k8s.io/v1beta1/referencegrants?allowWatchBookmarks=true&resourceVersion=1760806842166968999&timeout=10s&timeoutSeconds=435&watch=true": context canceled -> possible cluster upgrade? + +msg: no matches for kind "InferencePool" in version "inference.networking.k8s.io/v1" -> Thousands of these, but fixed in PR 4104 + +### nginx + +Traffic: 5 502s + +```text +INFO 2025-10-19T00:12:04.220541710Z [resource.labels.containerName: nginx] 10.154.15.240 - - [19/Oct/2025:00:12:04 +0000] "GET /coffee HTTP/1.1" 502 150 "-" "-" +INFO 2025-10-19T18:38:18.651520548Z [resource.labels.containerName: nginx] 10.154.15.240 - - [19/Oct/2025:18:38:18 +0000] "GET /coffee HTTP/1.1" 502 150 "-" "-" +INFO 2025-10-20T21:49:05.008076073Z [resource.labels.containerName: nginx] 10.154.15.240 - - [20/Oct/2025:21:49:04 +0000] "GET /tea HTTP/1.1" 502 150 "-" "-" +INFO 2025-10-21T06:43:10.256327990Z [resource.labels.containerName: nginx] 10.154.15.240 - - [21/Oct/2025:06:43:10 +0000] "GET /coffee HTTP/1.1" 502 150 "-" "-" +INFO 2025-10-21T12:13:05.747098022Z [resource.labels.containerName: nginx] 10.154.15.240 - - [21/Oct/2025:12:13:05 +0000] "GET /coffee HTTP/1.1" 502 150 "-" "-" +``` + +No other errors identified in this test run. 
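+As a sanity check, the Requests/sec figure wrk reports above can be reproduced from the raw totals. Below is a minimal Go sketch (not part of the test tooling; the constants are copied from the HTTP run above):
+
+```go
+package main
+
+import "fmt"
+
+func main() {
+	// Totals copied from the HTTP wrk run above:
+	// 173901014 requests over a 5760-minute (4-day) run.
+	const totalRequests = 173901014.0
+	const durationSec = 5760 * 60
+
+	// wrk's Requests/sec is simply total requests divided by wall-clock duration.
+	fmt.Printf("Requests/sec: %.2f\n", totalRequests/durationSec)
+	// Prints 503.19, matching the report above.
+}
+```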
diff --git a/tests/results/longevity/2.2.0/oss-cpu.png b/tests/results/longevity/2.2.0/oss-cpu.png new file mode 100644 index 0000000000..afce8da49d Binary files /dev/null and b/tests/results/longevity/2.2.0/oss-cpu.png differ diff --git a/tests/results/longevity/2.2.0/oss-memory.png b/tests/results/longevity/2.2.0/oss-memory.png new file mode 100644 index 0000000000..8027cb2757 Binary files /dev/null and b/tests/results/longevity/2.2.0/oss-memory.png differ diff --git a/tests/results/longevity/2.2.0/plus-cpu.png b/tests/results/longevity/2.2.0/plus-cpu.png new file mode 100644 index 0000000000..1fb924290a Binary files /dev/null and b/tests/results/longevity/2.2.0/plus-cpu.png differ diff --git a/tests/results/longevity/2.2.0/plus-memory.png b/tests/results/longevity/2.2.0/plus-memory.png new file mode 100644 index 0000000000..5c39a21dd6 Binary files /dev/null and b/tests/results/longevity/2.2.0/plus-memory.png differ diff --git a/tests/results/ngf-upgrade/2.2.0/2.2.0-oss.md b/tests/results/ngf-upgrade/2.2.0/2.2.0-oss.md new file mode 100644 index 0000000000..a326a21c73 --- /dev/null +++ b/tests/results/ngf-upgrade/2.2.0/2.2.0-oss.md @@ -0,0 +1,62 @@ +# Results + +## Test environment + +NGINX Plus: false + +NGINX Gateway Fabric: + +- Commit: 9fbef714ea22a35c4f1a8c97bd5b4e406ae0c1e9 +- Date: 2025-10-21T10:57:37Z +- Dirty: false + +GKE Cluster: + +- Node count: 12 +- k8s version: v1.33.5-gke.1080000 +- vCPUs per node: 16 +- RAM per node: 65851524Ki +- Max pods per node: 110 +- Zone: us-west1-b +- Instance Type: n2d-standard-16 + +## Summary: + +- The 2.2.0 release shows massive improvements in upgrade behavior: +- 2.1.0 Issue: The summary noted significant downtime during upgrades, with a manual uninstall/reinstall workaround recommended +- 2.2.0 Fix: The new readiness probe (mentioned in 2.1.0 summary as a planned fix) appears to have successfully resolved the upgrade downtime issue +- Remaining Failures: The 11 connection refused errors in 2.2.0 (0.18% failure rate) likely represent the minimal unavoidable disruption during pod replacement +- 99.82% success rate during live upgrade is a production-acceptable result +- System maintains near-normal throughput during upgrades + +## Test: Send http /coffee traffic + +```text +Requests [total, rate, throughput] 6000, 100.01, 99.83 +Duration [total, attack, wait] 59.995s, 59.992s, 2.616ms +Latencies [min, mean, 50, 90, 95, 99, max] 568.436µs, 579.689ms, 1.075ms, 2.351s, 5.311s, 7.657s, 8.224s +Bytes In [total, mean] 958240, 159.71 +Bytes Out [total, mean] 0, 0.00 +Success [ratio] 99.82% +Status Codes [code:count] 0:11 200:5989 +Error Set: +Get "http://cafe.example.com/coffee": dial tcp 0.0.0.0:0->10.138.0.101:80: connect: connection refused +``` + +![http-oss.png](http-oss.png) + +## Test: Send https /tea traffic + +```text +Requests [total, rate, throughput] 6000, 100.01, 99.83 +Duration [total, attack, wait] 59.995s, 59.992s, 2.394ms +Latencies [min, mean, 50, 90, 95, 99, max] 568.59µs, 586.782ms, 1.112ms, 2.315s, 5.356s, 7.672s, 8.229s +Bytes In [total, mean] 924268, 154.04 +Bytes Out [total, mean] 0, 0.00 +Success [ratio] 99.82% +Status Codes [code:count] 0:11 200:5989 +Error Set: +Get "https://cafe.example.com/tea": dial tcp 0.0.0.0:0->10.138.0.101:443: connect: connection refused +``` + +![https-oss.png](https-oss.png) diff --git a/tests/results/ngf-upgrade/2.2.0/2.2.0-plus.md b/tests/results/ngf-upgrade/2.2.0/2.2.0-plus.md new file mode 100644 index 0000000000..d782450f60 --- /dev/null +++ b/tests/results/ngf-upgrade/2.2.0/2.2.0-plus.md 
@@ -0,0 +1,68 @@ +# Results + +## Test environment + +NGINX Plus: true + +NGINX Gateway Fabric: + +- Commit: 9fbef714ea22a35c4f1a8c97bd5b4e406ae0c1e9 +- Date: 2025-10-21T10:57:37Z +- Dirty: false + +GKE Cluster: + +- Node count: 12 +- k8s version: v1.33.5-gke.1080000 +- vCPUs per node: 16 +- RAM per node: 65851524Ki +- Max pods per node: 110 +- Zone: us-west1-b +- Instance Type: n2d-standard-16 + +## Summary: + +- The 2.2.0 release shows massive improvements in upgrade behavior: +- 2.1.0 Issue: The summary noted significant downtime during upgrades, with a manual uninstall/reinstall workaround recommended +- 2.2.0 Fix: The new readiness probe (mentioned in 2.1.0 summary as a planned fix) appears to have successfully resolved the upgrade downtime issue +- Remaining Failures: The 19 connection errors (connection refused or reset by peer) in 2.2.0 (0.32% failure rate) likely represent the minimal unavoidable disruption during pod replacement +- 99.68% success rate during live upgrade is a production-acceptable result +- System maintains near-normal throughput during upgrades + +## Test: Send http /coffee traffic + +```text +Requests [total, rate, throughput] 6000, 100.01, 99.69 +Duration [total, attack, wait] 59.994s, 59.992s, 2ms +Latencies [min, mean, 50, 90, 95, 99, max] 652.591µs, 460.812ms, 1.096ms, 1.49s, 4.421s, 6.756s, 7.315s +Bytes In [total, mean] 946988, 157.83 +Bytes Out [total, mean] 0, 0.00 +Success [ratio] 99.68% +Status Codes [code:count] 0:19 200:5981 +Error Set: +Get "http://cafe.example.com/coffee": read tcp 10.138.0.107:48757->10.138.0.108:80: read: connection reset by peer +Get "http://cafe.example.com/coffee": read tcp 10.138.0.107:36243->10.138.0.108:80: read: connection reset by peer +Get "http://cafe.example.com/coffee": read tcp 10.138.0.107:34647->10.138.0.108:80: read: connection reset by peer +Get "http://cafe.example.com/coffee": dial tcp 0.0.0.0:0->10.138.0.108:80: connect: connection refused +``` + +![http-plus.png](http-plus.png) + +## Test: Send https /tea traffic + +```text +Requests [total, rate, throughput] 6000, 100.01, 99.69 +Duration [total, attack, wait] 59.994s, 59.992s, 1.92ms +Latencies [min, mean, 50, 90, 95, 99, max] 635.09µs, 470.47ms, 1.133ms, 1.533s, 4.46s, 6.804s, 7.35s +Bytes In [total, mean] 911101, 151.85 +Bytes Out [total, mean] 0, 0.00 +Success [ratio] 99.68% +Status Codes [code:count] 0:19 200:5981 +Error Set: +Get "https://cafe.example.com/tea": read tcp 10.138.0.107:44145->10.138.0.108:443: read: connection reset by peer +Get "https://cafe.example.com/tea": read tcp 10.138.0.107:45323->10.138.0.108:443: read: connection reset by peer +Get "https://cafe.example.com/tea": read tcp 10.138.0.107:44743->10.138.0.108:443: read: connection reset by peer +Get "https://cafe.example.com/tea": dial tcp 0.0.0.0:0->10.138.0.108:443: connect: connection refused +``` + +![https-plus.png](https-plus.png) diff --git a/tests/results/ngf-upgrade/2.2.0/http-oss.png b/tests/results/ngf-upgrade/2.2.0/http-oss.png new file mode 100644 index 0000000000..18d679aca3 Binary files /dev/null and b/tests/results/ngf-upgrade/2.2.0/http-oss.png differ diff --git a/tests/results/ngf-upgrade/2.2.0/http-plus.png b/tests/results/ngf-upgrade/2.2.0/http-plus.png new file mode 100644 index 0000000000..9966eaf630 Binary files /dev/null and b/tests/results/ngf-upgrade/2.2.0/http-plus.png differ diff --git a/tests/results/ngf-upgrade/2.2.0/https-oss.png b/tests/results/ngf-upgrade/2.2.0/https-oss.png new file mode 100644 index 0000000000..18d679aca3 Binary files /dev/null and 
b/tests/results/ngf-upgrade/2.2.0/https-oss.png differ diff --git a/tests/results/ngf-upgrade/2.2.0/https-plus.png b/tests/results/ngf-upgrade/2.2.0/https-plus.png new file mode 100644 index 0000000000..9966eaf630 Binary files /dev/null and b/tests/results/ngf-upgrade/2.2.0/https-plus.png differ diff --git a/tests/results/reconfig/2.2.0/2.2.0-oss.md b/tests/results/reconfig/2.2.0/2.2.0-oss.md new file mode 100644 index 0000000000..b67a3979f2 --- /dev/null +++ b/tests/results/reconfig/2.2.0/2.2.0-oss.md @@ -0,0 +1,161 @@ +# Results + +## Test environment + +NGINX Plus: false + +NGINX Gateway Fabric: + +- Commit: 9fbef714ea22a35c4f1a8c97bd5b4e406ae0c1e9 +- Date: 2025-10-21T10:57:37Z +- Dirty: false + +GKE Cluster: + +- Node count: 12 +- k8s version: v1.33.5-gke.1080000 +- vCPUs per node: 16 +- RAM per node: 65851524Ki +- Max pods per node: 110 +- Zone: us-west1-b +- Instance Type: n2d-standard-16 + +## Summary: + +- 2.2.0 shows meaningful improvements in configuration reliability with ~46% fewer errors, though at the cost of slightly more event processing overhead for dynamic resource creation. + +### Key Findings: +- Test 1 Improvements (Pre-existing Resources): + - Faster time to ready for both 30 and 150 resources + - No configuration errors in either version + - Stable event batch processing +- Test 2 Mixed Results (Dynamic Resources): + - Slight time increase for 30 resources (+1s) + - Nearly identical time for 150 resources (-1s) + - More event batches in 2.2.0 (8-15% increase) + - Slower average processing (+63.6% for 30, +5.9% for 150) + - Significantly fewer errors (-33% for 30, -46% for 150) +- Configuration Error Improvements: + - 46% reduction in NGINX errors for 150 resources + - No duplicate upstream errors in 2.2.0 + - Cleaner error pattern (only EOF and pread issues) + - Jumbled configuration issue still present but reduced + +### Positive Changes: +- Better handling of pre-existing resources (faster startup) +- Significantly fewer configuration errors during dynamic resource creation +- Eliminated certain error types (invalid zone directive, duplicate upstream) + +### Concerns: +- Increased event batch count suggests more reconciliation loops +- Slower average processing time for dynamic resources +- Jumbled configuration issue seen in 2.1.0 still exists but is less severe + +## Test 1: Resources exist before startup - NumResources 30 + +### Time to Ready + +Time To Ready Description: From when NGF starts to when the NGINX configuration is fully configured +- TimeToReadyTotal: 14s + +### Event Batch Processing + +- Event Batch Total: 10 +- Event Batch Processing Average Time: 3ms +- Event Batch Processing distribution: + - 500.0ms: 10 + - 1000.0ms: 10 + - 5000.0ms: 10 + - 10000.0ms: 10 + - 30000.0ms: 10 + - +Infms: 10 + +### NGINX Error Logs + +## Test 1: Resources exist before startup - NumResources 150 + +### Time to Ready + +Time To Ready Description: From when NGF starts to when the NGINX configuration is fully configured +- TimeToReadyTotal: 19s + +### Event Batch Processing + +- Event Batch Total: 9 +- Event Batch Processing Average Time: 9ms +- Event Batch Processing distribution: + - 500.0ms: 9 + - 1000.0ms: 9 + - 5000.0ms: 9 + - 10000.0ms: 9 + - 30000.0ms: 9 + - +Infms: 9 + +### NGINX Error Logs + +## Test 2: Start NGF, deploy Gateway, wait until NGINX agent instance connects to NGF, create many resources attached to GW - NumResources 30 + +### Time to Ready + +Time To Ready Description: From when NGINX receives the first configuration created by NGF to when the 
NGINX configuration is fully configured +- TimeToReadyTotal: 25s + +### Event Batch Processing + +- Event Batch Total: 356 +- Event Batch Processing Average Time: 18ms +- Event Batch Processing distribution: + - 500.0ms: 355 + - 1000.0ms: 356 + - 5000.0ms: 356 + - 10000.0ms: 356 + - 30000.0ms: 356 + - +Infms: 356 + +### NGINX Error Logs +2025/10/21 15:57:24 [emerg] 8#8: unexpected end of file, expecting ";" or "}" in /etc/nginx/conf.d/http.conf:282 +2025/10/21 15:57:24 [emerg] 8#8: unexpected end of file, expecting ";" or "}" in /etc/nginx/conf.d/http.conf:484 +2025/10/21 15:57:29 [emerg] 8#8: unexpected end of file, expecting ";" or "}" in /etc/nginx/conf.d/http.conf:3349 +2025/10/21 15:57:30 [emerg] 8#8: unexpected end of file, expecting ";" or "}" in /etc/nginx/conf.d/http.conf:3488 + +## Test 2: Start NGF, deploy Gateway, wait until NGINX agent instance connects to NGF, create many resources attached to GW - NumResources 150 + +### Time to Ready + +Time To Ready Description: From when NGINX receives the first configuration created by NGF to when the NGINX configuration is fully configured +- TimeToReadyTotal: 127s + +### Event Batch Processing + +- Event Batch Total: 1575 +- Event Batch Processing Average Time: 18ms +- Event Batch Processing distribution: + - 500.0ms: 1572 + - 1000.0ms: 1575 + - 5000.0ms: 1575 + - 10000.0ms: 1575 + - 30000.0ms: 1575 + - +Infms: 1575 + +### NGINX Error Logs +2025/10/21 16:01:53 [emerg] 8#8: unexpected end of file, expecting ";" or "}" in /etc/nginx/conf.d/http.conf:2872 +2025/10/21 16:01:58 [emerg] 8#8: unexpected end of file, expecting ";" or "}" in /etc/nginx/conf.d/http.conf:5099 +2025/10/21 16:01:59 [emerg] 8#8: pread() returned only 0 bytes instead of 4095 in /etc/nginx/conf.d/http.conf:2071 +2025/10/21 16:02:01 [emerg] 8#8: unexpected end of file, expecting ";" or "}" in /etc/nginx/conf.d/http.conf:6884 +2025/10/21 16:02:02 [emerg] 8#8: unexpected end of file, expecting ";" or "}" in /etc/nginx/conf.d/http.conf:7122 +2025/10/21 16:02:03 [emerg] 8#8: unexpected end of file, expecting ";" or "}" in /etc/nginx/conf.d/http.conf:8346 +2025/10/21 16:02:04 [emerg] 8#8: unexpected end of file, expecting ";" or "}" in /etc/nginx/conf.d/http.conf:8507 +2025/10/21 16:02:04 [emerg] 8#8: pread() returned only 0 bytes instead of 4085 in /etc/nginx/conf.d/http.conf:5599 +2025/10/21 16:02:05 [emerg] 8#8: pread() returned only 0 bytes instead of 4095 in /etc/nginx/conf.d/http.conf:6258 +2025/10/21 16:02:07 [emerg] 8#8: unexpected end of file, expecting ";" or "}" in /etc/nginx/conf.d/http.conf:10627 +2025/10/21 16:02:09 [emerg] 8#8: pread() returned only 0 bytes instead of 4093 in /etc/nginx/conf.d/http.conf:5594 +2025/10/21 16:02:10 [emerg] 8#8: pread() returned only 0 bytes instead of 4087 in /etc/nginx/conf.d/http.conf:5758 +2025/10/21 16:02:11 [emerg] 8#8: pread() returned only 0 bytes instead of 4095 in /etc/nginx/conf.d/http.conf:4606 +2025/10/21 16:02:12 [emerg] 8#8: unexpected end of file, expecting "}" in /etc/nginx/conf.d/http.conf:2823 +2025/10/21 16:02:12 [emerg] 8#8: unexpected end of file, expecting ";" or "}" in /etc/nginx/conf.d/http.conf:7890 +2025/10/21 16:02:14 [emerg] 8#8: unexpected end of file, expecting ";" or "}" in /etc/nginx/conf.d/http.conf:15203 +2025/10/21 16:02:15 [emerg] 8#8: unexpected end of file, expecting ";" or "}" in /etc/nginx/conf.d/http.conf:15528 +2025/10/21 16:02:15 [emerg] 8#8: unexpected end of file, expecting ";" or "}" in /etc/nginx/conf.d/http.conf:15646 +2025/10/21 16:02:16 [emerg] 8#8: pread() returned only 0 bytes 
instead of 4092 in /etc/nginx/conf.d/http.conf:7977 +2025/10/21 16:02:18 [emerg] 8#8: pread() returned only 0 bytes instead of 4095 in /etc/nginx/conf.d/http.conf:16349 +2025/10/21 16:02:18 [emerg] 8#8: unexpected end of file, expecting ";" or "}" in /etc/nginx/conf.d/http.conf:17767 diff --git a/tests/results/reconfig/2.2.0/2.2.0-plus.md b/tests/results/reconfig/2.2.0/2.2.0-plus.md new file mode 100644 index 0000000000..19552d514c --- /dev/null +++ b/tests/results/reconfig/2.2.0/2.2.0-plus.md @@ -0,0 +1,109 @@ +# Results + +## Test environment + +NGINX Plus: true + +NGINX Gateway Fabric: + +- Commit: 9fbef714ea22a35c4f1a8c97bd5b4e406ae0c1e9 +- Date: 2025-10-21T10:57:37Z +- Dirty: false + +GKE Cluster: + +- Node count: 12 +- k8s version: v1.33.5-gke.1080000 +- vCPUs per node: 16 +- RAM per node: 65851524Ki +- Max pods per node: 110 +- Zone: us-west1-b +- Instance Type: n2d-standard-16 + +## Summary: + +- 2.2.0 shows across-the-board improvements in reconfiguration performance compared to 2.1.0 with no configuration errors + +## Test 1: Resources exist before startup - NumResources 30 + +### Time to Ready + +Time To Ready Description: From when NGF starts to when the NGINX configuration is fully configured +- TimeToReadyTotal: 16s + +### Event Batch Processing + +- Event Batch Total: 8 +- Event Batch Processing Average Time: 18ms +- Event Batch Processing distribution: + - 500.0ms: 8 + - 1000.0ms: 8 + - 5000.0ms: 8 + - 10000.0ms: 8 + - 30000.0ms: 8 + - +Infms: 8 + +### NGINX Error Logs + +## Test 1: Resources exist before startup - NumResources 150 + +### Time to Ready + +Time To Ready Description: From when NGF starts to when the NGINX configuration is fully configured +- TimeToReadyTotal: 24s + +### Event Batch Processing + +- Event Batch Total: 8 +- Event Batch Processing Average Time: 36ms +- Event Batch Processing distribution: + - 500.0ms: 8 + - 1000.0ms: 8 + - 5000.0ms: 8 + - 10000.0ms: 8 + - 30000.0ms: 8 + - +Infms: 8 + +### NGINX Error Logs + +## Test 2: Start NGF, deploy Gateway, wait until NGINX agent instance connects to NGF, create many resources attached to GW - NumResources 30 + +### Time to Ready + +Time To Ready Description: From when NGINX receives the first configuration created by NGF to when the NGINX configuration is fully configured +- TimeToReadyTotal: 23s + +### Event Batch Processing + +- Event Batch Total: 254 +- Event Batch Processing Average Time: 34ms +- Event Batch Processing distribution: + - 500.0ms: 244 + - 1000.0ms: 252 + - 5000.0ms: 254 + - 10000.0ms: 254 + - 30000.0ms: 254 + - +Infms: 254 + +### NGINX Error Logs + +## Test 2: Start NGF, deploy Gateway, wait until NGINX agent instance connects to NGF, create many resources attached to GW - NumResources 150 + +### Time to Ready + +Time To Ready Description: From when NGINX receives the first configuration created by NGF to when the NGINX configuration is fully configured +- TimeToReadyTotal: 123s + +### Event Batch Processing + +- Event Batch Total: 1281 +- Event Batch Processing Average Time: 26ms +- Event Batch Processing distribution: + - 500.0ms: 1252 + - 1000.0ms: 1268 + - 5000.0ms: 1281 + - 10000.0ms: 1281 + - 30000.0ms: 1281 + - +Infms: 1281 + +### NGINX Error Logs diff --git a/tests/results/scale/2.2.0/2.2.0-oss.md b/tests/results/scale/2.2.0/2.2.0-oss.md new file mode 100644 index 0000000000..243b234aac --- /dev/null +++ b/tests/results/scale/2.2.0/2.2.0-oss.md @@ -0,0 +1,159 @@ +# Results + +## Test environment + +NGINX Plus: false + +NGINX Gateway Fabric: + +- Commit: 
9fbef714ea22a35c4f1a8c97bd5b4e406ae0c1e9 +- Date: 2025-10-21T10:57:37Z +- Dirty: false + +GKE Cluster: + +- Node count: 12 +- k8s version: v1.33.5-gke.1080000 +- vCPUs per node: 16 +- RAM per node: 65851524Ki +- Max pods per node: 110 +- Zone: us-west1-b +- Instance Type: n2d-standard-16 + +## Summary: + +2.2.0 shows a trade-off pattern: + +- Reliability improved significantly: 80-90% fewer NGF errors in listener tests +- Performance degraded: Processing times increased 60-160% across most tests +- Latency impact: 2-12% increases in HTTP match latency +- New error types: NGINX errors appeared where there were none before +- In 2.1.0 we saw that "tests which previously errored saw number of errors increase"; this has been reversed in 2.2.0, with dramatic error reductions. +- However, this appears to come at a performance cost, particularly in event batch processing time. + +## Test TestScale_Listeners + +### Event Batch Processing + +- Total: 258 +- Average Time: 13ms +- Event Batch Processing distribution: + - 500.0ms: 257 + - 1000.0ms: 258 + - 5000.0ms: 258 + - 10000.0ms: 258 + - 30000.0ms: 258 + - +Infms: 258 + +### Errors + +- NGF errors: 2 +- NGF container restarts: 0 +- NGINX errors: 5 +- NGINX container restarts: 0 + +### Graphs and Logs + +See [output directory](./TestScale_Listeners) for more details. +The logs are attached only if there are errors. + +## Test TestScale_HTTPSListeners + +### Event Batch Processing + +- Total: 283 +- Average Time: 12ms +- Event Batch Processing distribution: + - 500.0ms: 281 + - 1000.0ms: 283 + - 5000.0ms: 283 + - 10000.0ms: 283 + - 30000.0ms: 283 + - +Infms: 283 + +### Errors + +- NGF errors: 4 +- NGF container restarts: 0 +- NGINX errors: 2 +- NGINX container restarts: 0 + +### Graphs and Logs + +See [output directory](./TestScale_HTTPSListeners) for more details. +The logs are attached only if there are errors. + +## Test TestScale_HTTPRoutes + +### Event Batch Processing + +- Total: 1009 +- Average Time: 147ms +- Event Batch Processing distribution: + - 500.0ms: 956 + - 1000.0ms: 1009 + - 5000.0ms: 1009 + - 10000.0ms: 1009 + - 30000.0ms: 1009 + - +Infms: 1009 + +### Errors + +- NGF errors: 2 +- NGF container restarts: 0 +- NGINX errors: 0 +- NGINX container restarts: 0 + +### Graphs and Logs + +See [output directory](./TestScale_HTTPRoutes) for more details. +The logs are attached only if there are errors. + +## Test TestScale_UpstreamServers + +### Event Batch Processing + +- Total: 106 +- Average Time: 179ms +- Event Batch Processing distribution: + - 500.0ms: 92 + - 1000.0ms: 106 + - 5000.0ms: 106 + - 10000.0ms: 106 + - 30000.0ms: 106 + - +Infms: 106 + +### Errors + +- NGF errors: 2 +- NGF container restarts: 0 +- NGINX errors: 0 +- NGINX container restarts: 0 + +### Graphs and Logs + +See [output directory](./TestScale_UpstreamServers) for more details. +The logs are attached only if there are errors. 
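+The Event Batch Processing distributions above are cumulative: each bucket counts every batch that completed within that bound. A minimal Go sketch (not part of the test tooling; counts copied from the TestScale_HTTPRoutes section above) showing how per-interval counts fall out by differencing adjacent buckets:
+
+```go
+package main
+
+import "fmt"
+
+func main() {
+	// Cumulative buckets from TestScale_HTTPRoutes above: batches within each bound.
+	intervals := []string{"<=500ms", "500ms-1s", "1s-5s", "5s-10s", "10s-30s"}
+	cumulative := []int{956, 1009, 1009, 1009, 1009}
+
+	// Differencing adjacent cumulative counts gives per-interval totals,
+	// e.g. 1009 - 956 = 53 batches took between 500ms and 1s.
+	prev := 0
+	for i, c := range cumulative {
+		fmt.Printf("%-9s %d batches\n", intervals[i], c-prev)
+		prev = c
+	}
+}
+```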
+ +## Test TestScale_HTTPMatches + +```text +Requests [total, rate, throughput] 29999, 1000.01, 999.97 +Duration [total, attack, wait] 30s, 29.999s, 1.019ms +Latencies [min, mean, 50, 90, 95, 99, max] 707.321µs, 1.025ms, 976.47µs, 1.14ms, 1.206ms, 1.426ms, 26.344ms +Bytes In [total, mean] 4799840, 160.00 +Bytes Out [total, mean] 0, 0.00 +Success [ratio] 100.00% +Status Codes [code:count] 200:29999 +Error Set: +``` +```text +Requests [total, rate, throughput] 30000, 1000.04, 1000.00 +Duration [total, attack, wait] 30s, 29.999s, 1.024ms +Latencies [min, mean, 50, 90, 95, 99, max] 812.293µs, 1.079ms, 1.049ms, 1.214ms, 1.295ms, 1.474ms, 13.759ms +Bytes In [total, mean] 4800000, 160.00 +Bytes Out [total, mean] 0, 0.00 +Success [ratio] 100.00% +Status Codes [code:count] 200:30000 +Error Set: +``` diff --git a/tests/results/scale/2.2.0/2.2.0-plus.md b/tests/results/scale/2.2.0/2.2.0-plus.md new file mode 100644 index 0000000000..fd914c1e90 --- /dev/null +++ b/tests/results/scale/2.2.0/2.2.0-plus.md @@ -0,0 +1,158 @@ +# Results + +## Test environment + +NGINX Plus: true + +NGINX Gateway Fabric: + +- Commit: 9fbef714ea22a35c4f1a8c97bd5b4e406ae0c1e9 +- Date: 2025-10-21T10:57:37Z +- Dirty: false + +GKE Cluster: + +- Node count: 12 +- k8s version: v1.33.5-gke.1080000 +- vCPUs per node: 16 +- RAM per node: 65851524Ki +- Max pods per node: 110 +- Zone: us-west1-b +- Instance Type: n2d-standard-16 + +## Summary: + +- Compared with 2.1.0, 2.2.0 achieves near-perfect reliability in scale tests (eliminating 348 errors) but at the cost of notably higher request latency, particularly in the HTTP matching tests where latency increased by approximately 300µs on average. +- 333 total NGINX errors eliminated across listener tests +- 15 NGF errors eliminated (21 → 6 total) +- Listener processing 5-6x faster with far fewer errors +- Clean test results for most tests (zero errors) +- 30-44% increase in HTTP match latency - much more pronounced than OSS (which saw 2-8% increases) +- Processing time increases for HTTPRoutes and UpstreamServers tests + +## Test TestScale_Listeners + +### Event Batch Processing + +- Total: 206 +- Average Time: 23ms +- Event Batch Processing distribution: + - 500.0ms: 201 + - 1000.0ms: 206 + - 5000.0ms: 206 + - 10000.0ms: 206 + - 30000.0ms: 206 + - +Infms: 206 + +### Errors + +- NGF errors: 2 +- NGF container restarts: 0 +- NGINX errors: 0 +- NGINX container restarts: 0 + +### Graphs and Logs + +See [output directory](./TestScale_Listeners) for more details. +The logs are attached only if there are errors. + +## Test TestScale_HTTPSListeners + +### Event Batch Processing + +- Total: 266 +- Average Time: 16ms +- Event Batch Processing distribution: + - 500.0ms: 261 + - 1000.0ms: 265 + - 5000.0ms: 266 + - 10000.0ms: 266 + - 30000.0ms: 266 + - +Infms: 266 + +### Errors + +- NGF errors: 2 +- NGF container restarts: 0 +- NGINX errors: 0 +- NGINX container restarts: 0 + +### Graphs and Logs + +See [output directory](./TestScale_HTTPSListeners) for more details. +The logs are attached only if there are errors. + +## Test TestScale_HTTPRoutes + +### Event Batch Processing + +- Total: 1010 +- Average Time: 196ms +- Event Batch Processing distribution: + - 500.0ms: 964 + - 1000.0ms: 1007 + - 5000.0ms: 1010 + - 10000.0ms: 1010 + - 30000.0ms: 1010 + - +Infms: 1010 + +### Errors + +- NGF errors: 0 +- NGF container restarts: 0 +- NGINX errors: 0 +- NGINX container restarts: 0 + +### Graphs and Logs + +See [output directory](./TestScale_HTTPRoutes) for more details. 
+The logs are attached only if there are errors. + +## Test TestScale_UpstreamServers + +### Event Batch Processing + +- Total: 55 +- Average Time: 485ms +- Event Batch Processing distribution: + - 500.0ms: 24 + - 1000.0ms: 53 + - 5000.0ms: 55 + - 10000.0ms: 55 + - 30000.0ms: 55 + - +Infms: 55 + +### Errors + +- NGF errors: 0 +- NGF container restarts: 0 +- NGINX errors: 0 +- NGINX container restarts: 0 + +### Graphs and Logs + +See [output directory](./TestScale_UpstreamServers) for more details. +The logs are attached only if there are errors. + +## Test TestScale_HTTPMatches + +```text +Requests [total, rate, throughput] 30000, 1000.04, 1000.00 +Duration [total, attack, wait] 30s, 29.999s, 1.061ms +Latencies [min, mean, 50, 90, 95, 99, max] 746.934µs, 1.024ms, 991.917µs, 1.145ms, 1.211ms, 1.389ms, 33.726ms +Bytes In [total, mean] 4830000, 161.00 +Bytes Out [total, mean] 0, 0.00 +Success [ratio] 100.00% +Status Codes [code:count] 200:30000 +Error Set: +``` +```text +Requests [total, rate, throughput] 30000, 1000.03, 999.99 +Duration [total, attack, wait] 30s, 29.999s, 1.022ms +Latencies [min, mean, 50, 90, 95, 99, max] 847.583µs, 1.117ms, 1.081ms, 1.237ms, 1.313ms, 1.55ms, 22.064ms +Bytes In [total, mean] 4830000, 161.00 +Bytes Out [total, mean] 0, 0.00 +Success [ratio] 100.00% +Status Codes [code:count] 200:30000 +Error Set: +``` diff --git a/tests/results/scale/2.2.0/TestScale_HTTPRoutes/cpu-oss.png b/tests/results/scale/2.2.0/TestScale_HTTPRoutes/cpu-oss.png new file mode 100644 index 0000000000..81911f1ddc Binary files /dev/null and b/tests/results/scale/2.2.0/TestScale_HTTPRoutes/cpu-oss.png differ diff --git a/tests/results/scale/2.2.0/TestScale_HTTPRoutes/cpu-plus.png b/tests/results/scale/2.2.0/TestScale_HTTPRoutes/cpu-plus.png new file mode 100644 index 0000000000..9024422528 Binary files /dev/null and b/tests/results/scale/2.2.0/TestScale_HTTPRoutes/cpu-plus.png differ diff --git a/tests/results/scale/2.2.0/TestScale_HTTPRoutes/memory-oss.png b/tests/results/scale/2.2.0/TestScale_HTTPRoutes/memory-oss.png new file mode 100644 index 0000000000..cd369097b6 Binary files /dev/null and b/tests/results/scale/2.2.0/TestScale_HTTPRoutes/memory-oss.png differ diff --git a/tests/results/scale/2.2.0/TestScale_HTTPRoutes/memory-plus.png b/tests/results/scale/2.2.0/TestScale_HTTPRoutes/memory-plus.png new file mode 100644 index 0000000000..5039326343 Binary files /dev/null and b/tests/results/scale/2.2.0/TestScale_HTTPRoutes/memory-plus.png differ diff --git a/tests/results/scale/2.2.0/TestScale_HTTPRoutes/ngf-oss.log b/tests/results/scale/2.2.0/TestScale_HTTPRoutes/ngf-oss.log new file mode 100644 index 0000000000..c576e0c5bc --- /dev/null +++ b/tests/results/scale/2.2.0/TestScale_HTTPRoutes/ngf-oss.log @@ -0,0 +1,2 @@ +{"level":"debug","ts":"2025-10-21T14:35:42Z","logger":"statusUpdater","msg":"Encountered error updating status","error":"Operation cannot be fulfilled on gateways.gateway.networking.k8s.io \"gateway\": the object has been modified; please apply your changes to the latest version and try again","namespace":"scale","name":"gateway","kind":"Gateway"} +{"level":"debug","ts":"2025-10-21T14:35:42Z","logger":"statusUpdater","msg":"Encountered error updating status","error":"Operation cannot be fulfilled on gateways.gateway.networking.k8s.io \"gateway\": the object has been modified; please apply your changes to the latest version and try again","namespace":"scale","name":"gateway","kind":"Gateway"} diff --git a/tests/results/scale/2.2.0/TestScale_HTTPRoutes/ttr-oss.png 
b/tests/results/scale/2.2.0/TestScale_HTTPRoutes/ttr-oss.png new file mode 100644 index 0000000000..6b0e3fcf88 Binary files /dev/null and b/tests/results/scale/2.2.0/TestScale_HTTPRoutes/ttr-oss.png differ diff --git a/tests/results/scale/2.2.0/TestScale_HTTPRoutes/ttr-plus.png b/tests/results/scale/2.2.0/TestScale_HTTPRoutes/ttr-plus.png new file mode 100644 index 0000000000..5bfb06a97a Binary files /dev/null and b/tests/results/scale/2.2.0/TestScale_HTTPRoutes/ttr-plus.png differ diff --git a/tests/results/scale/2.2.0/TestScale_HTTPSListeners/cpu-oss.png b/tests/results/scale/2.2.0/TestScale_HTTPSListeners/cpu-oss.png new file mode 100644 index 0000000000..622884195d Binary files /dev/null and b/tests/results/scale/2.2.0/TestScale_HTTPSListeners/cpu-oss.png differ diff --git a/tests/results/scale/2.2.0/TestScale_HTTPSListeners/cpu-plus.png b/tests/results/scale/2.2.0/TestScale_HTTPSListeners/cpu-plus.png new file mode 100644 index 0000000000..4efada1e53 Binary files /dev/null and b/tests/results/scale/2.2.0/TestScale_HTTPSListeners/cpu-plus.png differ diff --git a/tests/results/scale/2.2.0/TestScale_HTTPSListeners/memory-oss.png b/tests/results/scale/2.2.0/TestScale_HTTPSListeners/memory-oss.png new file mode 100644 index 0000000000..2d255b10bc Binary files /dev/null and b/tests/results/scale/2.2.0/TestScale_HTTPSListeners/memory-oss.png differ diff --git a/tests/results/scale/2.2.0/TestScale_HTTPSListeners/memory-plus.png b/tests/results/scale/2.2.0/TestScale_HTTPSListeners/memory-plus.png new file mode 100644 index 0000000000..97e36755aa Binary files /dev/null and b/tests/results/scale/2.2.0/TestScale_HTTPSListeners/memory-plus.png differ diff --git a/tests/results/scale/2.2.0/TestScale_HTTPSListeners/ngf-oss.log b/tests/results/scale/2.2.0/TestScale_HTTPSListeners/ngf-oss.log new file mode 100644 index 0000000000..371b1b06ef --- /dev/null +++ b/tests/results/scale/2.2.0/TestScale_HTTPSListeners/ngf-oss.log @@ -0,0 +1,4 @@ +{"level":"debug","ts":"2025-10-21T14:26:29Z","logger":"statusUpdater","msg":"Encountered error updating status","error":"Operation cannot be fulfilled on gateways.gateway.networking.k8s.io \"gateway\": the object has been modified; please apply your changes to the latest version and try again","namespace":"scale","name":"gateway","kind":"Gateway"} +{"level":"debug","ts":"2025-10-21T14:26:53Z","logger":"statusUpdater","msg":"Encountered error updating status","error":"Operation cannot be fulfilled on httproutes.gateway.networking.k8s.io \"route-62\": the object has been modified; please apply your changes to the latest version and try again","namespace":"scale","name":"route-62","kind":"HTTPRoute"} +{"level":"debug","ts":"2025-10-21T14:26:54Z","logger":"statusUpdater","msg":"Encountered error updating status","error":"Operation cannot be fulfilled on httproutes.gateway.networking.k8s.io \"route-62\": the object has been modified; please apply your changes to the latest version and try again","namespace":"scale","name":"route-62","kind":"HTTPRoute"} +{"level":"debug","ts":"2025-10-21T14:26:55Z","logger":"statusUpdater","msg":"Encountered error updating status","error":"Operation cannot be fulfilled on gateways.gateway.networking.k8s.io \"gateway\": the object has been modified; please apply your changes to the latest version and try again","namespace":"scale","name":"gateway","kind":"Gateway"} diff --git a/tests/results/scale/2.2.0/TestScale_HTTPSListeners/ngf-plus.log b/tests/results/scale/2.2.0/TestScale_HTTPSListeners/ngf-plus.log new file mode 100644 index 
0000000000..ccdc65ac8f --- /dev/null +++ b/tests/results/scale/2.2.0/TestScale_HTTPSListeners/ngf-plus.log @@ -0,0 +1,2 @@ +{"level":"debug","ts":"2025-10-21T15:48:19Z","logger":"statusUpdater","msg":"Encountered error updating status","error":"Operation cannot be fulfilled on gateways.gateway.networking.k8s.io \"gateway\": the object has been modified; please apply your changes to the latest version and try again","namespace":"scale","name":"gateway","kind":"Gateway"} +{"level":"debug","ts":"2025-10-21T15:48:38Z","logger":"statusUpdater","msg":"Encountered error updating status","error":"Operation cannot be fulfilled on gateways.gateway.networking.k8s.io \"gateway\": the object has been modified; please apply your changes to the latest version and try again","namespace":"scale","name":"gateway","kind":"Gateway"} diff --git a/tests/results/scale/2.2.0/TestScale_HTTPSListeners/nginx-oss.log b/tests/results/scale/2.2.0/TestScale_HTTPSListeners/nginx-oss.log new file mode 100644 index 0000000000..7b3388dff8 --- /dev/null +++ b/tests/results/scale/2.2.0/TestScale_HTTPSListeners/nginx-oss.log @@ -0,0 +1,2 @@ +2025/10/21 14:26:51 [emerg] 8#8: unexpected end of file, expecting ";" or "}" in /etc/nginx/conf.d/http.conf:2671 +2025/10/21 14:26:52 [emerg] 8#8: unexpected end of file, expecting ";" or "}" in /etc/nginx/conf.d/http.conf:2878 diff --git a/tests/results/scale/2.2.0/TestScale_HTTPSListeners/ttr-oss.png b/tests/results/scale/2.2.0/TestScale_HTTPSListeners/ttr-oss.png new file mode 100644 index 0000000000..1a10cbbf18 Binary files /dev/null and b/tests/results/scale/2.2.0/TestScale_HTTPSListeners/ttr-oss.png differ diff --git a/tests/results/scale/2.2.0/TestScale_HTTPSListeners/ttr-plus.png b/tests/results/scale/2.2.0/TestScale_HTTPSListeners/ttr-plus.png new file mode 100644 index 0000000000..1f99ca9dc3 Binary files /dev/null and b/tests/results/scale/2.2.0/TestScale_HTTPSListeners/ttr-plus.png differ diff --git a/tests/results/scale/2.2.0/TestScale_Listeners/cpu-oss.png b/tests/results/scale/2.2.0/TestScale_Listeners/cpu-oss.png new file mode 100644 index 0000000000..b21c96a814 Binary files /dev/null and b/tests/results/scale/2.2.0/TestScale_Listeners/cpu-oss.png differ diff --git a/tests/results/scale/2.2.0/TestScale_Listeners/cpu-plus.png b/tests/results/scale/2.2.0/TestScale_Listeners/cpu-plus.png new file mode 100644 index 0000000000..eec94982ed Binary files /dev/null and b/tests/results/scale/2.2.0/TestScale_Listeners/cpu-plus.png differ diff --git a/tests/results/scale/2.2.0/TestScale_Listeners/memory-oss.png b/tests/results/scale/2.2.0/TestScale_Listeners/memory-oss.png new file mode 100644 index 0000000000..c36d8d41d3 Binary files /dev/null and b/tests/results/scale/2.2.0/TestScale_Listeners/memory-oss.png differ diff --git a/tests/results/scale/2.2.0/TestScale_Listeners/memory-plus.png b/tests/results/scale/2.2.0/TestScale_Listeners/memory-plus.png new file mode 100644 index 0000000000..0421d78ee2 Binary files /dev/null and b/tests/results/scale/2.2.0/TestScale_Listeners/memory-plus.png differ diff --git a/tests/results/scale/2.2.0/TestScale_Listeners/ngf-oss.log b/tests/results/scale/2.2.0/TestScale_Listeners/ngf-oss.log new file mode 100644 index 0000000000..30d5b56703 --- /dev/null +++ b/tests/results/scale/2.2.0/TestScale_Listeners/ngf-oss.log @@ -0,0 +1,2 @@ +{"level":"debug","ts":"2025-10-21T14:23:14Z","logger":"statusUpdater","msg":"Encountered error updating status","error":"Operation cannot be fulfilled on gateways.gateway.networking.k8s.io \"gateway\": the object has 
been modified; please apply your changes to the latest version and try again","namespace":"scale","name":"gateway","kind":"Gateway"} +{"level":"debug","ts":"2025-10-21T14:23:42Z","logger":"statusUpdater","msg":"Encountered error updating status","error":"Operation cannot be fulfilled on httproutes.gateway.networking.k8s.io \"route-62\": the object has been modified; please apply your changes to the latest version and try again","namespace":"scale","name":"route-62","kind":"HTTPRoute"} diff --git a/tests/results/scale/2.2.0/TestScale_Listeners/ngf-plus.log b/tests/results/scale/2.2.0/TestScale_Listeners/ngf-plus.log new file mode 100644 index 0000000000..78bcb36bc8 --- /dev/null +++ b/tests/results/scale/2.2.0/TestScale_Listeners/ngf-plus.log @@ -0,0 +1,2 @@ +{"level":"debug","ts":"2025-10-21T15:44:57Z","logger":"statusUpdater","msg":"Encountered error updating status","error":"Operation cannot be fulfilled on gateways.gateway.networking.k8s.io \"gateway\": the object has been modified; please apply your changes to the latest version and try again","namespace":"scale","name":"gateway","kind":"Gateway"} +{"level":"debug","ts":"2025-10-21T15:45:19Z","logger":"statusUpdater","msg":"Encountered error updating status","error":"Operation cannot be fulfilled on gateways.gateway.networking.k8s.io \"gateway\": the object has been modified; please apply your changes to the latest version and try again","namespace":"scale","name":"gateway","kind":"Gateway"} diff --git a/tests/results/scale/2.2.0/TestScale_Listeners/nginx-oss.log b/tests/results/scale/2.2.0/TestScale_Listeners/nginx-oss.log new file mode 100644 index 0000000000..e5b2937f71 --- /dev/null +++ b/tests/results/scale/2.2.0/TestScale_Listeners/nginx-oss.log @@ -0,0 +1,5 @@ +2025/10/21 14:23:37 [emerg] 8#8: unexpected end of file, expecting ";" or "}" in /etc/nginx/conf.d/http.conf:336 +2025/10/21 14:23:38 [emerg] 8#8: unexpected end of file, expecting ";" or "}" in /etc/nginx/conf.d/http.conf:1344 +2025/10/21 14:23:39 [emerg] 8#8: unexpected end of file, expecting ";" or "}" in /etc/nginx/conf.d/http.conf:2058 +2025/10/21 14:23:40 [emerg] 8#8: unexpected end of file, expecting ";" or "}" in /etc/nginx/conf.d/http.conf:2478 +2025/10/21 14:23:40 [emerg] 8#8: unexpected end of file, expecting ";" or "}" in /etc/nginx/conf.d/http.conf:2604 diff --git a/tests/results/scale/2.2.0/TestScale_Listeners/ttr-oss.png b/tests/results/scale/2.2.0/TestScale_Listeners/ttr-oss.png new file mode 100644 index 0000000000..ec11888af6 Binary files /dev/null and b/tests/results/scale/2.2.0/TestScale_Listeners/ttr-oss.png differ diff --git a/tests/results/scale/2.2.0/TestScale_Listeners/ttr-plus.png b/tests/results/scale/2.2.0/TestScale_Listeners/ttr-plus.png new file mode 100644 index 0000000000..e26d17a3c2 Binary files /dev/null and b/tests/results/scale/2.2.0/TestScale_Listeners/ttr-plus.png differ diff --git a/tests/results/scale/2.2.0/TestScale_UpstreamServers/cpu-oss.png b/tests/results/scale/2.2.0/TestScale_UpstreamServers/cpu-oss.png new file mode 100644 index 0000000000..3e9d336f4a Binary files /dev/null and b/tests/results/scale/2.2.0/TestScale_UpstreamServers/cpu-oss.png differ diff --git a/tests/results/scale/2.2.0/TestScale_UpstreamServers/cpu-plus.png b/tests/results/scale/2.2.0/TestScale_UpstreamServers/cpu-plus.png new file mode 100644 index 0000000000..f201c211ed Binary files /dev/null and b/tests/results/scale/2.2.0/TestScale_UpstreamServers/cpu-plus.png differ diff --git a/tests/results/scale/2.2.0/TestScale_UpstreamServers/memory-oss.png 
b/tests/results/scale/2.2.0/TestScale_UpstreamServers/memory-oss.png new file mode 100644 index 0000000000..9d6a045f26 Binary files /dev/null and b/tests/results/scale/2.2.0/TestScale_UpstreamServers/memory-oss.png differ diff --git a/tests/results/scale/2.2.0/TestScale_UpstreamServers/memory-plus.png b/tests/results/scale/2.2.0/TestScale_UpstreamServers/memory-plus.png new file mode 100644 index 0000000000..a3244ad713 Binary files /dev/null and b/tests/results/scale/2.2.0/TestScale_UpstreamServers/memory-plus.png differ diff --git a/tests/results/scale/2.2.0/TestScale_UpstreamServers/ngf-oss.log b/tests/results/scale/2.2.0/TestScale_UpstreamServers/ngf-oss.log new file mode 100644 index 0000000000..e92153c666 --- /dev/null +++ b/tests/results/scale/2.2.0/TestScale_UpstreamServers/ngf-oss.log @@ -0,0 +1,2 @@ +{"level":"debug","ts":"2025-10-21T14:38:30Z","logger":"statusUpdater","msg":"Encountered error updating status","error":"Operation cannot be fulfilled on gatewayclasses.gateway.networking.k8s.io \"nginx\": the object has been modified; please apply your changes to the latest version and try again","namespace":"","name":"nginx","kind":"GatewayClass"} +{"level":"debug","ts":"2025-10-21T14:39:06Z","logger":"statusUpdater","msg":"Encountered error updating status","error":"Operation cannot be fulfilled on gateways.gateway.networking.k8s.io \"gateway\": the object has been modified; please apply your changes to the latest version and try again","namespace":"scale","name":"gateway","kind":"Gateway"} diff --git a/tests/results/zero-downtime-scale/2.2.0/2.2.0-oss.md b/tests/results/zero-downtime-scale/2.2.0/2.2.0-oss.md new file mode 100644 index 0000000000..6d831cf84c --- /dev/null +++ b/tests/results/zero-downtime-scale/2.2.0/2.2.0-oss.md @@ -0,0 +1,281 @@ +# Results + +## Test environment + +NGINX Plus: false + +NGINX Gateway Fabric: + +- Commit: 9fbef714ea22a35c4f1a8c97bd5b4e406ae0c1e9 +- Date: 2025-10-21T10:57:37Z +- Dirty: false + +GKE Cluster: + +- Node count: 12 +- k8s version: v1.33.5-gke.1080000 +- vCPUs per node: 16 +- RAM per node: 65851524Ki +- Max pods per node: 110 +- Zone: us-west1-b +- Instance Type: n2d-standard-16 + +## One NGINX Pod runs per node Test Results + +### Scale Up Gradually + +#### Test: Send http /coffee traffic + +```text +Requests [total, rate, throughput] 30000, 100.00, 100.00 +Duration [total, attack, wait] 5m0s, 5m0s, 1.023ms +Latencies [min, mean, 50, 90, 95, 99, max] 592.096µs, 1.141ms, 1.121ms, 1.36ms, 1.437ms, 1.764ms, 13.043ms +Bytes In [total, mean] 4806070, 160.20 +Bytes Out [total, mean] 0, 0.00 +Success [ratio] 100.00% +Status Codes [code:count] 200:30000 +Error Set: +``` + +![gradual-scale-up-affinity-http-oss.png](gradual-scale-up-affinity-http-oss.png) + +#### Test: Send https /tea traffic + +```text +Requests [total, rate, throughput] 30000, 100.00, 100.00 +Duration [total, attack, wait] 5m0s, 5m0s, 1.046ms +Latencies [min, mean, 50, 90, 95, 99, max] 623.382µs, 1.237ms, 1.213ms, 1.456ms, 1.539ms, 1.923ms, 13.245ms +Bytes In [total, mean] 4626014, 154.20 +Bytes Out [total, mean] 0, 0.00 +Success [ratio] 100.00% +Status Codes [code:count] 200:30000 +Error Set: +``` + +![gradual-scale-up-affinity-https-oss.png](gradual-scale-up-affinity-https-oss.png) + +### Scale Down Gradually + +#### Test: Send http /coffee traffic + +```text +Requests [total, rate, throughput] 48000, 100.00, 100.00 +Duration [total, attack, wait] 8m0s, 8m0s, 737.868µs +Latencies [min, mean, 50, 90, 95, 99, max] 585.662µs, 1.166ms, 1.11ms, 1.35ms, 1.43ms, 1.679ms, 1.036s 
+Bytes In [total, mean] 7689575, 160.20 +Bytes Out [total, mean] 0, 0.00 +Success [ratio] 100.00% +Status Codes [code:count] 200:48000 +Error Set: +``` + +![gradual-scale-down-affinity-http-oss.png](gradual-scale-down-affinity-http-oss.png) + +#### Test: Send https /tea traffic + +```text +Requests [total, rate, throughput] 48000, 100.00, 100.00 +Duration [total, attack, wait] 8m0s, 8m0s, 1.169ms +Latencies [min, mean, 50, 90, 95, 99, max] 618.883µs, 1.214ms, 1.186ms, 1.424ms, 1.517ms, 1.811ms, 34.086ms +Bytes In [total, mean] 7401581, 154.20 +Bytes Out [total, mean] 0, 0.00 +Success [ratio] 100.00% +Status Codes [code:count] 200:48000 +Error Set: +``` + +![gradual-scale-down-affinity-https-oss.png](gradual-scale-down-affinity-https-oss.png) + +### Scale Up Abruptly + +#### Test: Send https /tea traffic + +```text +Requests [total, rate, throughput] 12000, 100.01, 100.01 +Duration [total, attack, wait] 2m0s, 2m0s, 1.179ms +Latencies [min, mean, 50, 90, 95, 99, max] 629.496µs, 1.154ms, 1.127ms, 1.317ms, 1.386ms, 1.663ms, 59.679ms +Bytes In [total, mean] 1850428, 154.20 +Bytes Out [total, mean] 0, 0.00 +Success [ratio] 100.00% +Status Codes [code:count] 200:12000 +Error Set: +``` + +![abrupt-scale-up-affinity-https-oss.png](abrupt-scale-up-affinity-https-oss.png) + +#### Test: Send http /coffee traffic + +```text +Requests [total, rate, throughput] 12000, 100.01, 100.01 +Duration [total, attack, wait] 2m0s, 2m0s, 1.962ms +Latencies [min, mean, 50, 90, 95, 99, max] 566.08µs, 1.152ms, 1.126ms, 1.375ms, 1.462ms, 1.735ms, 61.502ms +Bytes In [total, mean] 1922394, 160.20 +Bytes Out [total, mean] 0, 0.00 +Success [ratio] 100.00% +Status Codes [code:count] 200:12000 +Error Set: +``` + +![abrupt-scale-up-affinity-http-oss.png](abrupt-scale-up-affinity-http-oss.png) + +### Scale Down Abruptly + +#### Test: Send http /coffee traffic + +```text +Requests [total, rate, throughput] 12000, 100.01, 100.01 +Duration [total, attack, wait] 2m0s, 2m0s, 1.288ms +Latencies [min, mean, 50, 90, 95, 99, max] 568.924µs, 1.046ms, 1.044ms, 1.229ms, 1.279ms, 1.452ms, 23.729ms +Bytes In [total, mean] 1922357, 160.20 +Bytes Out [total, mean] 0, 0.00 +Success [ratio] 100.00% +Status Codes [code:count] 200:12000 +Error Set: +``` + +![abrupt-scale-down-affinity-http-oss.png](abrupt-scale-down-affinity-http-oss.png) + +#### Test: Send https /tea traffic + +```text +Requests [total, rate, throughput] 12000, 100.01, 100.01 +Duration [total, attack, wait] 2m0s, 2m0s, 1.06ms +Latencies [min, mean, 50, 90, 95, 99, max] 660.702µs, 1.149ms, 1.13ms, 1.297ms, 1.36ms, 1.589ms, 23.808ms +Bytes In [total, mean] 1850371, 154.20 +Bytes Out [total, mean] 0, 0.00 +Success [ratio] 100.00% +Status Codes [code:count] 200:12000 +Error Set: +``` + +![abrupt-scale-down-affinity-https-oss.png](abrupt-scale-down-affinity-https-oss.png) + +## Multiple NGINX Pods run per node Test Results + +### Scale Up Gradually + +#### Test: Send http /coffee traffic + +```text +Requests [total, rate, throughput] 30000, 100.00, 100.00 +Duration [total, attack, wait] 5m0s, 5m0s, 1.166ms +Latencies [min, mean, 50, 90, 95, 99, max] 607.656µs, 1.221ms, 1.162ms, 1.505ms, 1.646ms, 2.139ms, 36.577ms +Bytes In [total, mean] 4805940, 160.20 +Bytes Out [total, mean] 0, 0.00 +Success [ratio] 100.00% +Status Codes [code:count] 200:30000 +Error Set: +``` + +![gradual-scale-up-http-oss.png](gradual-scale-up-http-oss.png) + +#### Test: Send https /tea traffic + +```text +Requests [total, rate, throughput] 30000, 100.00, 100.00 +Duration [total, attack, wait] 5m0s, 5m0s, 1.054ms 
+Latencies [min, mean, 50, 90, 95, 99, max] 690.539µs, 1.305ms, 1.228ms, 1.574ms, 1.7ms, 2.287ms, 17.947ms +Bytes In [total, mean] 4625838, 154.19 +Bytes Out [total, mean] 0, 0.00 +Success [ratio] 100.00% +Status Codes [code:count] 200:30000 +Error Set: +``` + +![gradual-scale-up-https-oss.png](gradual-scale-up-https-oss.png) + +### Scale Down Gradually + +#### Test: Send http /coffee traffic + +```text +Requests [total, rate, throughput] 96000, 100.00, 100.00 +Duration [total, attack, wait] 16m0s, 16m0s, 1.317ms +Latencies [min, mean, 50, 90, 95, 99, max] 594.473µs, 1.112ms, 1.102ms, 1.294ms, 1.365ms, 1.699ms, 41.689ms +Bytes In [total, mean] 15379382, 160.20 +Bytes Out [total, mean] 0, 0.00 +Success [ratio] 100.00% +Status Codes [code:count] 200:96000 +Error Set: +``` + +![gradual-scale-down-http-oss.png](gradual-scale-down-http-oss.png) + +#### Test: Send https /tea traffic + +```text +Requests [total, rate, throughput] 96000, 100.00, 100.00 +Duration [total, attack, wait] 16m0s, 16m0s, 1.103ms +Latencies [min, mean, 50, 90, 95, 99, max] 640.04µs, 1.188ms, 1.168ms, 1.371ms, 1.453ms, 1.785ms, 41.426ms +Bytes In [total, mean] 14803242, 154.20 +Bytes Out [total, mean] 0, 0.00 +Success [ratio] 100.00% +Status Codes [code:count] 200:96000 +Error Set: +``` + +![gradual-scale-down-https-oss.png](gradual-scale-down-https-oss.png) + +### Scale Up Abruptly + +#### Test: Send https /tea traffic + +```text +Requests [total, rate, throughput] 12000, 100.03, 100.03 +Duration [total, attack, wait] 2m0s, 2m0s, 1.134ms +Latencies [min, mean, 50, 90, 95, 99, max] 663.551µs, 1.203ms, 1.169ms, 1.34ms, 1.394ms, 1.541ms, 106.432ms +Bytes In [total, mean] 1850430, 154.20 +Bytes Out [total, mean] 0, 0.00 +Success [ratio] 100.00% +Status Codes [code:count] 200:12000 +Error Set: +``` + +![abrupt-scale-up-https-oss.png](abrupt-scale-up-https-oss.png) + +#### Test: Send http /coffee traffic + +```text +Requests [total, rate, throughput] 12000, 100.03, 100.03 +Duration [total, attack, wait] 2m0s, 2m0s, 1.25ms +Latencies [min, mean, 50, 90, 95, 99, max] 594.934µs, 1.127ms, 1.108ms, 1.283ms, 1.337ms, 1.504ms, 32.039ms +Bytes In [total, mean] 1922429, 160.20 +Bytes Out [total, mean] 0, 0.00 +Success [ratio] 100.00% +Status Codes [code:count] 200:12000 +Error Set: +``` + +![abrupt-scale-up-http-oss.png](abrupt-scale-up-http-oss.png) + +### Scale Down Abruptly + +#### Test: Send https /tea traffic + +```text +Requests [total, rate, throughput] 12000, 100.01, 100.01 +Duration [total, attack, wait] 2m0s, 2m0s, 837.597µs +Latencies [min, mean, 50, 90, 95, 99, max] 653.918µs, 1.178ms, 1.175ms, 1.346ms, 1.41ms, 1.608ms, 25.063ms +Bytes In [total, mean] 1850359, 154.20 +Bytes Out [total, mean] 0, 0.00 +Success [ratio] 100.00% +Status Codes [code:count] 200:12000 +Error Set: +``` + +![abrupt-scale-down-https-oss.png](abrupt-scale-down-https-oss.png) + +#### Test: Send http /coffee traffic + +```text +Requests [total, rate, throughput] 12000, 100.01, 100.01 +Duration [total, attack, wait] 2m0s, 2m0s, 1.22ms +Latencies [min, mean, 50, 90, 95, 99, max] 585.48µs, 1.082ms, 1.081ms, 1.255ms, 1.311ms, 1.473ms, 25.498ms +Bytes In [total, mean] 1922346, 160.20 +Bytes Out [total, mean] 0, 0.00 +Success [ratio] 100.00% +Status Codes [code:count] 200:12000 +Error Set: +``` + +![abrupt-scale-down-http-oss.png](abrupt-scale-down-http-oss.png) diff --git a/tests/results/zero-downtime-scale/2.2.0/2.2.0-plus.md b/tests/results/zero-downtime-scale/2.2.0/2.2.0-plus.md new file mode 100644 index 0000000000..3f7fbacdfb --- /dev/null +++ 
b/tests/results/zero-downtime-scale/2.2.0/2.2.0-plus.md @@ -0,0 +1,282 @@ +# Results + +## Test environment + +NGINX Plus: true + +NGINX Gateway Fabric: + +- Commit: 9fbef714ea22a35c4f1a8c97bd5b4e406ae0c1e9 +- Date: 2025-10-21T10:57:37Z +- Dirty: false + +GKE Cluster: + +- Node count: 12 +- k8s version: v1.33.5-gke.1080000 +- vCPUs per node: 16 +- RAM per node: 65851524Ki +- Max pods per node: 110 +- Zone: us-west1-b +- Instance Type: n2d-standard-16 + +## One NGINX Pod runs per node Test Results + +### Scale Up Gradually + +#### Test: Send http /coffee traffic + +```text +Requests [total, rate, throughput] 30000, 100.00, 100.00 +Duration [total, attack, wait] 5m0s, 5m0s, 1.426ms +Latencies [min, mean, 50, 90, 95, 99, max] 696.349µs, 1.211ms, 1.2ms, 1.384ms, 1.458ms, 1.744ms, 17.023ms +Bytes In [total, mean] 4776047, 159.20 +Bytes Out [total, mean] 0, 0.00 +Success [ratio] 100.00% +Status Codes [code:count] 200:30000 +Error Set: +``` + +![gradual-scale-up-affinity-http-plus.png](gradual-scale-up-affinity-http-plus.png) + +#### Test: Send https /tea traffic + +```text +Requests [total, rate, throughput] 30000, 100.00, 100.00 +Duration [total, attack, wait] 5m0s, 5m0s, 1.894ms +Latencies [min, mean, 50, 90, 95, 99, max] 678.401µs, 1.262ms, 1.246ms, 1.426ms, 1.506ms, 1.781ms, 17.338ms +Bytes In [total, mean] 4595937, 153.20 +Bytes Out [total, mean] 0, 0.00 +Success [ratio] 100.00% +Status Codes [code:count] 200:30000 +Error Set: +``` + +![gradual-scale-up-affinity-https-plus.png](gradual-scale-up-affinity-https-plus.png) + +### Scale Down Gradually + +#### Test: Send http /coffee traffic + +```text +Requests [total, rate, throughput] 48000, 100.00, 100.00 +Duration [total, attack, wait] 8m0s, 8m0s, 1.299ms +Latencies [min, mean, 50, 90, 95, 99, max] 659.541µs, 1.374ms, 1.309ms, 1.655ms, 1.76ms, 1.992ms, 250.434ms +Bytes In [total, mean] 7641307, 159.19 +Bytes Out [total, mean] 0, 0.00 +Success [ratio] 100.00% +Status Codes [code:count] 0:1 200:47999 +Error Set: +Get "http://cafe.example.com/coffee": dial tcp 0.0.0.0:0->10.138.0.92:80: connect: network is unreachable +``` + +![gradual-scale-down-affinity-http-plus.png](gradual-scale-down-affinity-http-plus.png) + +#### Test: Send https /tea traffic + +```text +Requests [total, rate, throughput] 48000, 100.00, 100.00 +Duration [total, attack, wait] 8m0s, 8m0s, 1.301ms +Latencies [min, mean, 50, 90, 95, 99, max] 758.883µs, 1.422ms, 1.343ms, 1.682ms, 1.781ms, 2.052ms, 250.731ms +Bytes In [total, mean] 7353607, 153.20 +Bytes Out [total, mean] 0, 0.00 +Success [ratio] 100.00% +Status Codes [code:count] 200:48000 +Error Set: +``` + +![gradual-scale-down-affinity-https-plus.png](gradual-scale-down-affinity-https-plus.png) + +### Scale Up Abruptly + +#### Test: Send https /tea traffic + +```text +Requests [total, rate, throughput] 12000, 100.01, 100.01 +Duration [total, attack, wait] 2m0s, 2m0s, 1.168ms +Latencies [min, mean, 50, 90, 95, 99, max] 757.428µs, 1.293ms, 1.272ms, 1.441ms, 1.503ms, 1.695ms, 61.896ms +Bytes In [total, mean] 1838384, 153.20 +Bytes Out [total, mean] 0, 0.00 +Success [ratio] 100.00% +Status Codes [code:count] 200:12000 +Error Set: +``` + +![abrupt-scale-up-affinity-https-plus.png](abrupt-scale-up-affinity-https-plus.png) + +#### Test: Send http /coffee traffic + +```text +Requests [total, rate, throughput] 12000, 100.01, 100.01 +Duration [total, attack, wait] 2m0s, 2m0s, 1.297ms +Latencies [min, mean, 50, 90, 95, 99, max] 727.622µs, 1.239ms, 1.227ms, 1.4ms, 1.454ms, 1.648ms, 61.895ms +Bytes In [total, mean] 1910476, 159.21 
+Bytes Out [total, mean] 0, 0.00 +Success [ratio] 100.00% +Status Codes [code:count] 200:12000 +Error Set: +``` + +![abrupt-scale-up-affinity-http-plus.png](abrupt-scale-up-affinity-http-plus.png) + +### Scale Down Abruptly + +#### Test: Send https /tea traffic + +```text +Requests [total, rate, throughput] 12000, 100.01, 100.01 +Duration [total, attack, wait] 2m0s, 2m0s, 1.312ms +Latencies [min, mean, 50, 90, 95, 99, max] 747.968µs, 1.299ms, 1.285ms, 1.459ms, 1.521ms, 1.666ms, 25.442ms +Bytes In [total, mean] 1838381, 153.20 +Bytes Out [total, mean] 0, 0.00 +Success [ratio] 100.00% +Status Codes [code:count] 200:12000 +Error Set: +``` + +![abrupt-scale-down-affinity-https-plus.png](abrupt-scale-down-affinity-https-plus.png) + +#### Test: Send http /coffee traffic + +```text +Requests [total, rate, throughput] 12000, 100.01, 100.01 +Duration [total, attack, wait] 2m0s, 2m0s, 1.465ms +Latencies [min, mean, 50, 90, 95, 99, max] 731.346µs, 1.24ms, 1.235ms, 1.412ms, 1.474ms, 1.635ms, 25.212ms +Bytes In [total, mean] 1910387, 159.20 +Bytes Out [total, mean] 0, 0.00 +Success [ratio] 100.00% +Status Codes [code:count] 200:12000 +Error Set: +``` + +![abrupt-scale-down-affinity-http-plus.png](abrupt-scale-down-affinity-http-plus.png) + +## Multiple NGINX Pods run per node Test Results + +### Scale Up Gradually + +#### Test: Send https /tea traffic + +```text +Requests [total, rate, throughput] 30000, 100.00, 100.00 +Duration [total, attack, wait] 5m0s, 5m0s, 1.263ms +Latencies [min, mean, 50, 90, 95, 99, max] 707.129µs, 1.269ms, 1.258ms, 1.425ms, 1.491ms, 1.866ms, 26.038ms +Bytes In [total, mean] 4596034, 153.20 +Bytes Out [total, mean] 0, 0.00 +Success [ratio] 100.00% +Status Codes [code:count] 200:30000 +Error Set: +``` + +![gradual-scale-up-https-plus.png](gradual-scale-up-https-plus.png) + +#### Test: Send http /coffee traffic + +```text +Requests [total, rate, throughput] 30000, 100.00, 100.00 +Duration [total, attack, wait] 5m0s, 5m0s, 959.384µs +Latencies [min, mean, 50, 90, 95, 99, max] 657.946µs, 1.222ms, 1.215ms, 1.382ms, 1.443ms, 1.784ms, 23.305ms +Bytes In [total, mean] 4776015, 159.20 +Bytes Out [total, mean] 0, 0.00 +Success [ratio] 100.00% +Status Codes [code:count] 200:30000 +Error Set: +``` + +![gradual-scale-up-http-plus.png](gradual-scale-up-http-plus.png) + +### Scale Down Gradually + +#### Test: Send http /coffee traffic + +```text +Requests [total, rate, throughput] 96000, 100.00, 100.00 +Duration [total, attack, wait] 16m0s, 16m0s, 1.274ms +Latencies [min, mean, 50, 90, 95, 99, max] 681.455µs, 1.213ms, 1.206ms, 1.368ms, 1.425ms, 1.687ms, 43.825ms +Bytes In [total, mean] 15283157, 159.20 +Bytes Out [total, mean] 0, 0.00 +Success [ratio] 100.00% +Status Codes [code:count] 200:96000 +Error Set: +``` + +![gradual-scale-down-http-plus.png](gradual-scale-down-http-plus.png) + +#### Test: Send https /tea traffic + +```text +Requests [total, rate, throughput] 96000, 100.00, 100.00 +Duration [total, attack, wait] 16m0s, 16m0s, 1.154ms +Latencies [min, mean, 50, 90, 95, 99, max] 706.001µs, 1.242ms, 1.231ms, 1.394ms, 1.454ms, 1.742ms, 66.449ms +Bytes In [total, mean] 14707235, 153.20 +Bytes Out [total, mean] 0, 0.00 +Success [ratio] 100.00% +Status Codes [code:count] 200:96000 +Error Set: +``` + +![gradual-scale-down-https-plus.png](gradual-scale-down-https-plus.png) + +### Scale Up Abruptly + +#### Test: Send https /tea traffic + +```text +Requests [total, rate, throughput] 12000, 100.01, 100.01 +Duration [total, attack, wait] 2m0s, 2m0s, 1.182ms +Latencies [min, mean, 50, 90, 95, 99, 
max] 757.525µs, 1.293ms, 1.247ms, 1.407ms, 1.464ms, 1.67ms, 117.053ms +Bytes In [total, mean] 1838423, 153.20 +Bytes Out [total, mean] 0, 0.00 +Success [ratio] 100.00% +Status Codes [code:count] 200:12000 +Error Set: +``` + +![abrupt-scale-up-https-plus.png](abrupt-scale-up-https-plus.png) + +#### Test: Send http /coffee traffic + +```text +Requests [total, rate, throughput] 12000, 100.01, 100.01 +Duration [total, attack, wait] 2m0s, 2m0s, 1.197ms +Latencies [min, mean, 50, 90, 95, 99, max] 716.034µs, 1.253ms, 1.214ms, 1.377ms, 1.432ms, 1.711ms, 117.41ms +Bytes In [total, mean] 1910335, 159.19 +Bytes Out [total, mean] 0, 0.00 +Success [ratio] 100.00% +Status Codes [code:count] 200:12000 +Error Set: +``` + +![abrupt-scale-up-http-plus.png](abrupt-scale-up-http-plus.png) + +### Scale Down Abruptly + +#### Test: Send https /tea traffic + +```text +Requests [total, rate, throughput] 12000, 100.01, 100.01 +Duration [total, attack, wait] 2m0s, 2m0s, 1.277ms +Latencies [min, mean, 50, 90, 95, 99, max] 729.233µs, 1.245ms, 1.233ms, 1.398ms, 1.46ms, 1.702ms, 41.318ms +Bytes In [total, mean] 1838383, 153.20 +Bytes Out [total, mean] 0, 0.00 +Success [ratio] 100.00% +Status Codes [code:count] 200:12000 +Error Set: +``` + +![abrupt-scale-down-https-plus.png](abrupt-scale-down-https-plus.png) + +#### Test: Send http /coffee traffic + +```text +Requests [total, rate, throughput] 12000, 100.01, 100.01 +Duration [total, attack, wait] 2m0s, 2m0s, 1.36ms +Latencies [min, mean, 50, 90, 95, 99, max] 719.887µs, 1.201ms, 1.201ms, 1.363ms, 1.422ms, 1.622ms, 34.401ms +Bytes In [total, mean] 1910414, 159.20 +Bytes Out [total, mean] 0, 0.00 +Success [ratio] 100.00% +Status Codes [code:count] 200:12000 +Error Set: +``` + +![abrupt-scale-down-http-plus.png](abrupt-scale-down-http-plus.png) diff --git a/tests/results/zero-downtime-scale/2.2.0/abrupt-scale-down-affinity-http-oss.png b/tests/results/zero-downtime-scale/2.2.0/abrupt-scale-down-affinity-http-oss.png new file mode 100644 index 0000000000..497ff76691 Binary files /dev/null and b/tests/results/zero-downtime-scale/2.2.0/abrupt-scale-down-affinity-http-oss.png differ diff --git a/tests/results/zero-downtime-scale/2.2.0/abrupt-scale-down-affinity-http-plus.png b/tests/results/zero-downtime-scale/2.2.0/abrupt-scale-down-affinity-http-plus.png new file mode 100644 index 0000000000..5f7668d9b9 Binary files /dev/null and b/tests/results/zero-downtime-scale/2.2.0/abrupt-scale-down-affinity-http-plus.png differ diff --git a/tests/results/zero-downtime-scale/2.2.0/abrupt-scale-down-affinity-https-oss.png b/tests/results/zero-downtime-scale/2.2.0/abrupt-scale-down-affinity-https-oss.png new file mode 100644 index 0000000000..497ff76691 Binary files /dev/null and b/tests/results/zero-downtime-scale/2.2.0/abrupt-scale-down-affinity-https-oss.png differ diff --git a/tests/results/zero-downtime-scale/2.2.0/abrupt-scale-down-affinity-https-plus.png b/tests/results/zero-downtime-scale/2.2.0/abrupt-scale-down-affinity-https-plus.png new file mode 100644 index 0000000000..5f7668d9b9 Binary files /dev/null and b/tests/results/zero-downtime-scale/2.2.0/abrupt-scale-down-affinity-https-plus.png differ diff --git a/tests/results/zero-downtime-scale/2.2.0/abrupt-scale-down-http-oss.png b/tests/results/zero-downtime-scale/2.2.0/abrupt-scale-down-http-oss.png new file mode 100644 index 0000000000..f16ae7a5cb Binary files /dev/null and b/tests/results/zero-downtime-scale/2.2.0/abrupt-scale-down-http-oss.png differ diff --git 
a/tests/results/zero-downtime-scale/2.2.0/abrupt-scale-down-http-plus.png b/tests/results/zero-downtime-scale/2.2.0/abrupt-scale-down-http-plus.png new file mode 100644 index 0000000000..b3223f8370 Binary files /dev/null and b/tests/results/zero-downtime-scale/2.2.0/abrupt-scale-down-http-plus.png differ diff --git a/tests/results/zero-downtime-scale/2.2.0/abrupt-scale-down-https-oss.png b/tests/results/zero-downtime-scale/2.2.0/abrupt-scale-down-https-oss.png new file mode 100644 index 0000000000..f16ae7a5cb Binary files /dev/null and b/tests/results/zero-downtime-scale/2.2.0/abrupt-scale-down-https-oss.png differ diff --git a/tests/results/zero-downtime-scale/2.2.0/abrupt-scale-down-https-plus.png b/tests/results/zero-downtime-scale/2.2.0/abrupt-scale-down-https-plus.png new file mode 100644 index 0000000000..b3223f8370 Binary files /dev/null and b/tests/results/zero-downtime-scale/2.2.0/abrupt-scale-down-https-plus.png differ diff --git a/tests/results/zero-downtime-scale/2.2.0/abrupt-scale-up-affinity-http-oss.png b/tests/results/zero-downtime-scale/2.2.0/abrupt-scale-up-affinity-http-oss.png new file mode 100644 index 0000000000..2ab3c85697 Binary files /dev/null and b/tests/results/zero-downtime-scale/2.2.0/abrupt-scale-up-affinity-http-oss.png differ diff --git a/tests/results/zero-downtime-scale/2.2.0/abrupt-scale-up-affinity-http-plus.png b/tests/results/zero-downtime-scale/2.2.0/abrupt-scale-up-affinity-http-plus.png new file mode 100644 index 0000000000..eac8cf5907 Binary files /dev/null and b/tests/results/zero-downtime-scale/2.2.0/abrupt-scale-up-affinity-http-plus.png differ diff --git a/tests/results/zero-downtime-scale/2.2.0/abrupt-scale-up-affinity-https-oss.png b/tests/results/zero-downtime-scale/2.2.0/abrupt-scale-up-affinity-https-oss.png new file mode 100644 index 0000000000..2ab3c85697 Binary files /dev/null and b/tests/results/zero-downtime-scale/2.2.0/abrupt-scale-up-affinity-https-oss.png differ diff --git a/tests/results/zero-downtime-scale/2.2.0/abrupt-scale-up-affinity-https-plus.png b/tests/results/zero-downtime-scale/2.2.0/abrupt-scale-up-affinity-https-plus.png new file mode 100644 index 0000000000..eac8cf5907 Binary files /dev/null and b/tests/results/zero-downtime-scale/2.2.0/abrupt-scale-up-affinity-https-plus.png differ diff --git a/tests/results/zero-downtime-scale/2.2.0/abrupt-scale-up-http-oss.png b/tests/results/zero-downtime-scale/2.2.0/abrupt-scale-up-http-oss.png new file mode 100644 index 0000000000..b6051b1b5c Binary files /dev/null and b/tests/results/zero-downtime-scale/2.2.0/abrupt-scale-up-http-oss.png differ diff --git a/tests/results/zero-downtime-scale/2.2.0/abrupt-scale-up-http-plus.png b/tests/results/zero-downtime-scale/2.2.0/abrupt-scale-up-http-plus.png new file mode 100644 index 0000000000..02933d90f5 Binary files /dev/null and b/tests/results/zero-downtime-scale/2.2.0/abrupt-scale-up-http-plus.png differ diff --git a/tests/results/zero-downtime-scale/2.2.0/abrupt-scale-up-https-oss.png b/tests/results/zero-downtime-scale/2.2.0/abrupt-scale-up-https-oss.png new file mode 100644 index 0000000000..b6051b1b5c Binary files /dev/null and b/tests/results/zero-downtime-scale/2.2.0/abrupt-scale-up-https-oss.png differ diff --git a/tests/results/zero-downtime-scale/2.2.0/abrupt-scale-up-https-plus.png b/tests/results/zero-downtime-scale/2.2.0/abrupt-scale-up-https-plus.png new file mode 100644 index 0000000000..02933d90f5 Binary files /dev/null and b/tests/results/zero-downtime-scale/2.2.0/abrupt-scale-up-https-plus.png differ diff --git 
a/tests/results/zero-downtime-scale/2.2.0/gradual-scale-down-affinity-http-oss.png b/tests/results/zero-downtime-scale/2.2.0/gradual-scale-down-affinity-http-oss.png new file mode 100644 index 0000000000..e1708c6ce6 Binary files /dev/null and b/tests/results/zero-downtime-scale/2.2.0/gradual-scale-down-affinity-http-oss.png differ diff --git a/tests/results/zero-downtime-scale/2.2.0/gradual-scale-down-affinity-http-plus.png b/tests/results/zero-downtime-scale/2.2.0/gradual-scale-down-affinity-http-plus.png new file mode 100644 index 0000000000..80a9c30fbe Binary files /dev/null and b/tests/results/zero-downtime-scale/2.2.0/gradual-scale-down-affinity-http-plus.png differ diff --git a/tests/results/zero-downtime-scale/2.2.0/gradual-scale-down-affinity-https-oss.png b/tests/results/zero-downtime-scale/2.2.0/gradual-scale-down-affinity-https-oss.png new file mode 100644 index 0000000000..e1708c6ce6 Binary files /dev/null and b/tests/results/zero-downtime-scale/2.2.0/gradual-scale-down-affinity-https-oss.png differ diff --git a/tests/results/zero-downtime-scale/2.2.0/gradual-scale-down-affinity-https-plus.png b/tests/results/zero-downtime-scale/2.2.0/gradual-scale-down-affinity-https-plus.png new file mode 100644 index 0000000000..cf80cd5c96 Binary files /dev/null and b/tests/results/zero-downtime-scale/2.2.0/gradual-scale-down-affinity-https-plus.png differ diff --git a/tests/results/zero-downtime-scale/2.2.0/gradual-scale-down-http-oss.png b/tests/results/zero-downtime-scale/2.2.0/gradual-scale-down-http-oss.png new file mode 100644 index 0000000000..2d5c2dd7a4 Binary files /dev/null and b/tests/results/zero-downtime-scale/2.2.0/gradual-scale-down-http-oss.png differ diff --git a/tests/results/zero-downtime-scale/2.2.0/gradual-scale-down-http-plus.png b/tests/results/zero-downtime-scale/2.2.0/gradual-scale-down-http-plus.png new file mode 100644 index 0000000000..0b1596ef11 Binary files /dev/null and b/tests/results/zero-downtime-scale/2.2.0/gradual-scale-down-http-plus.png differ diff --git a/tests/results/zero-downtime-scale/2.2.0/gradual-scale-down-https-oss.png b/tests/results/zero-downtime-scale/2.2.0/gradual-scale-down-https-oss.png new file mode 100644 index 0000000000..2d5c2dd7a4 Binary files /dev/null and b/tests/results/zero-downtime-scale/2.2.0/gradual-scale-down-https-oss.png differ diff --git a/tests/results/zero-downtime-scale/2.2.0/gradual-scale-down-https-plus.png b/tests/results/zero-downtime-scale/2.2.0/gradual-scale-down-https-plus.png new file mode 100644 index 0000000000..0b1596ef11 Binary files /dev/null and b/tests/results/zero-downtime-scale/2.2.0/gradual-scale-down-https-plus.png differ diff --git a/tests/results/zero-downtime-scale/2.2.0/gradual-scale-up-affinity-http-oss.png b/tests/results/zero-downtime-scale/2.2.0/gradual-scale-up-affinity-http-oss.png new file mode 100644 index 0000000000..1b1be59e74 Binary files /dev/null and b/tests/results/zero-downtime-scale/2.2.0/gradual-scale-up-affinity-http-oss.png differ diff --git a/tests/results/zero-downtime-scale/2.2.0/gradual-scale-up-affinity-http-plus.png b/tests/results/zero-downtime-scale/2.2.0/gradual-scale-up-affinity-http-plus.png new file mode 100644 index 0000000000..90740a78f8 Binary files /dev/null and b/tests/results/zero-downtime-scale/2.2.0/gradual-scale-up-affinity-http-plus.png differ diff --git a/tests/results/zero-downtime-scale/2.2.0/gradual-scale-up-affinity-https-oss.png b/tests/results/zero-downtime-scale/2.2.0/gradual-scale-up-affinity-https-oss.png new file mode 100644 index 
0000000000..1b1be59e74 Binary files /dev/null and b/tests/results/zero-downtime-scale/2.2.0/gradual-scale-up-affinity-https-oss.png differ diff --git a/tests/results/zero-downtime-scale/2.2.0/gradual-scale-up-affinity-https-plus.png b/tests/results/zero-downtime-scale/2.2.0/gradual-scale-up-affinity-https-plus.png new file mode 100644 index 0000000000..90740a78f8 Binary files /dev/null and b/tests/results/zero-downtime-scale/2.2.0/gradual-scale-up-affinity-https-plus.png differ diff --git a/tests/results/zero-downtime-scale/2.2.0/gradual-scale-up-http-oss.png b/tests/results/zero-downtime-scale/2.2.0/gradual-scale-up-http-oss.png new file mode 100644 index 0000000000..1bfb73cd0b Binary files /dev/null and b/tests/results/zero-downtime-scale/2.2.0/gradual-scale-up-http-oss.png differ diff --git a/tests/results/zero-downtime-scale/2.2.0/gradual-scale-up-http-plus.png b/tests/results/zero-downtime-scale/2.2.0/gradual-scale-up-http-plus.png new file mode 100644 index 0000000000..3c5cd87911 Binary files /dev/null and b/tests/results/zero-downtime-scale/2.2.0/gradual-scale-up-http-plus.png differ diff --git a/tests/results/zero-downtime-scale/2.2.0/gradual-scale-up-https-oss.png b/tests/results/zero-downtime-scale/2.2.0/gradual-scale-up-https-oss.png new file mode 100644 index 0000000000..1bfb73cd0b Binary files /dev/null and b/tests/results/zero-downtime-scale/2.2.0/gradual-scale-up-https-oss.png differ diff --git a/tests/results/zero-downtime-scale/2.2.0/gradual-scale-up-https-plus.png b/tests/results/zero-downtime-scale/2.2.0/gradual-scale-up-https-plus.png new file mode 100644 index 0000000000..3c5cd87911 Binary files /dev/null and b/tests/results/zero-downtime-scale/2.2.0/gradual-scale-up-https-plus.png differ
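
For readers interpreting the result files added above: each `text` block is vegeta's standard text-report output (`Requests` / `Duration` / `Latencies` / `Bytes In` / `Success` / `Status Codes` / `Error Set`). Below is a minimal, hypothetical Go sketch of how one such block could be produced with the vegeta library — it is not the project's actual test harness, and the endpoint, rate, and duration are assumptions chosen to match the shape of the 2-minute "abrupt" tests.

```go
// Hypothetical sketch: generate a vegeta text report like the blocks above.
// Assumed parameters: 100 req/s for 2 minutes against an assumed endpoint.
package main

import (
	"log"
	"os"
	"time"

	vegeta "github.com/tsenart/vegeta/v12/lib"
)

func main() {
	// Constant-rate pacer: 100 requests per second (assumption).
	rate := vegeta.Rate{Freq: 100, Per: time.Second}
	duration := 2 * time.Minute

	// Hypothetical target; cafe.example.com appears in the error line of one
	// result above, but the exact targets used by the harness are not shown here.
	targeter := vegeta.NewStaticTargeter(vegeta.Target{
		Method: "GET",
		URL:    "http://cafe.example.com/coffee",
	})

	// Run the attack and accumulate per-request results into Metrics.
	attacker := vegeta.NewAttacker()
	var metrics vegeta.Metrics
	for res := range attacker.Attack(targeter, rate, duration, "zero-downtime-scale") {
		metrics.Add(res)
	}
	metrics.Close()

	// Emit the "Requests / Duration / Latencies / ..." text block format
	// seen throughout these results.
	if err := vegeta.NewTextReporter(&metrics).Report(os.Stdout); err != nil {
		log.Fatal(err)
	}
}
```

With this reading, `throughput` counts only successful responses (which is why it can dip below `rate` when errors occur), and `Success [ratio]` is rounded to two decimals — so a block reporting one failure out of 48000 requests can still print `100.00%`, as in the gradual scale-down result above.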