Description
Is there an existing issue that is already proposing this?
- [x] I have searched the existing issues
NestJS version
11.0.13
Is your performance suggestion related to a problem? Please describe it
Currently there is no way to define a JSON Schema for the response when using the Fastify adapter. Fastify ships its own JSON serialization (fast-json-stringify) that is more performant than the native JSON.stringify. However, most of the gains are only realized when a response schema is provided to guide serialization. From the Fastify docs:
"Fastify uses fast-json-stringify to send data as JSON if an output schema is provided in the route options. Using an output schema can drastically increase throughput and help prevent accidental disclosure of sensitive information."
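To illustrate why a schema helps, here is a toy sketch of the idea behind fast-json-stringify (this is not the library's actual implementation, just the principle): when the output shape is known up front, a specialized serializer can be compiled once per route instead of letting JSON.stringify inspect every object generically, and unknown properties are dropped rather than leaked.

```typescript
// Toy illustration of schema-driven serialization (the idea behind
// fast-json-stringify, NOT its real implementation).
interface ObjectSchema {
  type: 'object';
  properties: Record<string, { type: 'number' | 'string' }>;
}

function compileSerializer(schema: ObjectSchema): (obj: any) => string {
  // Key order and quoting are decided once, at "route registration" time.
  const keys = Object.keys(schema.properties);
  return (obj) =>
    '{' +
    keys
      .map((k) =>
        schema.properties[k].type === 'string'
          ? `"${k}":${JSON.stringify(obj[k])}`
          : `"${k}":${obj[k]}`,
      )
      .join(',') +
    '}';
}

const serialize = compileSerializer({
  type: 'object',
  properties: { id: { type: 'number' }, name: { type: 'string' } },
});

console.log(serialize({ id: 1, name: 'Whiskers', secret: 'hidden' }));
// "secret" is dropped because it is not in the schema — the
// accidental-disclosure protection the Fastify docs mention.
```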
Describe the performance enhancement you are proposing and how we can try it out
I propose adding a new `RouteSchema` decorator that accepts a schema object matching Fastify's route `schema` option (see the Fastify docs). This would be an opt-in solution: users who do not want to touch schemas are unaffected, and it is fully backward compatible.
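A minimal sketch of what the decorator could look like (the storage mechanism and names here are my assumptions, not the PR's actual implementation): `RouteSchema` records a Fastify route schema against the handler so the adapter could later pass it to Fastify during route registration.

```typescript
// Hypothetical sketch of the proposed decorator. The WeakMap-based
// metadata storage is an assumption for illustration only.
const routeSchemas = new WeakMap<Function, object>();

function RouteSchema(schema: object) {
  return (_target: object, _key: string, descriptor: PropertyDescriptor) => {
    routeSchemas.set(descriptor.value, schema);
    return descriptor;
  };
}

class CatsController {
  findAll() {
    return [{ id: 1, name: 'Whiskers' }];
  }
}

// Equivalent to decorating findAll with @RouteSchema({ ... });
// applied manually here to keep the sketch independent of compiler flags.
RouteSchema({
  response: {
    200: {
      type: 'array',
      items: {
        type: 'object',
        properties: { id: { type: 'number' }, name: { type: 'string' } },
      },
    },
  },
})(
  CatsController.prototype,
  'findAll',
  Object.getOwnPropertyDescriptor(CatsController.prototype, 'findAll')!,
);

console.log(routeSchemas.has(CatsController.prototype.findAll)); // true
```

The adapter would then look up the handler's schema at registration time and forward it in the Fastify route options, so serialization is compiled once per route.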
Benchmarks result or another proof (eg: POC)
Using this PR: #14789
The exact same endpoint was benchmarked twice, with the only difference being the presence of a schema. Each endpoint returns an array of objects whose size is set by the `limit` query parameter.
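For reference, the benchmarked handler logic is presumably something like the following (a reconstruction — the real endpoints live in the linked PR, and the object shape here is an assumption):

```typescript
// Reconstruction of the benchmarked payload builder: both the /schema
// and /no-schema routes return the same data; only the presence of a
// response schema in the route options differs.
interface Cat {
  id: number;
  name: string;
}

function makeCats(limit: number): Cat[] {
  // Array size is controlled by the ?limit= query parameter.
  return Array.from({ length: limit }, (_, i) => ({
    id: i,
    name: `cat-${i}`,
  }));
}

console.log(makeCats(3).length); // 3
```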
```
❯ wrk -t12 -c400 -d30s http://localhost:3000/cats/no-schema\?limit\=10
Running 30s test @ http://localhost:3000/cats/no-schema?limit=10
  12 threads and 400 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     8.55ms   20.72ms 721.66ms   99.55%
    Req/Sec     2.69k   666.13     7.14k    63.00%
  963864 requests in 30.10s, 0.98GB read
  Socket errors: connect 157, read 197, write 0, timeout 0
Requests/sec:  32018.79
Transfer/sec:     33.31MB

❯ wrk -t12 -c400 -d30s http://localhost:3000/cats/schema\?limit\=10
Running 30s test @ http://localhost:3000/cats/schema?limit=10
  12 threads and 400 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     7.34ms    9.66ms 538.97ms   99.66%
    Req/Sec     2.86k   844.25    12.85k    58.12%
  1026043 requests in 30.11s, 1.04GB read
  Socket errors: connect 157, read 109, write 0, timeout 0
Requests/sec:  34082.03
Transfer/sec:     35.46MB

❯ wrk -t12 -c400 -d30s http://localhost:3000/cats/no-schema\?limit\=500
Running 30s test @ http://localhost:3000/cats/no-schema?limit=500
  12 threads and 400 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    91.94ms   60.54ms    1.96s    94.63%
    Req/Sec   195.42     67.25    575.00     72.17%
  70111 requests in 30.08s, 3.04GB read
  Socket errors: connect 157, read 549, write 1, timeout 143
Requests/sec:   2330.69
Transfer/sec:    103.50MB

❯ wrk -t12 -c400 -d30s http://localhost:3000/cats/schema\?limit\=500
Running 30s test @ http://localhost:3000/cats/schema?limit=500
  12 threads and 400 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    77.15ms   60.45ms    1.98s    98.06%
    Req/Sec   242.30     76.96    545.00     66.23%
  86986 requests in 30.10s, 3.77GB read
  Socket errors: connect 157, read 518, write 3, timeout 131
Requests/sec:   2889.95
Transfer/sec:    128.34MB

❯ wrk -t12 -c400 -d30s http://localhost:3000/cats/no-schema\?limit\=1000
Running 30s test @ http://localhost:3000/cats/no-schema?limit=1000
  12 threads and 400 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   146.84ms   68.28ms    1.95s    86.23%
    Req/Sec   102.87     36.01    313.00     70.78%
  36958 requests in 30.09s, 3.20GB read
  Socket errors: connect 157, read 517, write 3, timeout 169
Requests/sec:   1228.06
Transfer/sec:    108.96MB

❯ wrk -t12 -c400 -d30s http://localhost:3000/cats/schema\?limit\=1000
Running 30s test @ http://localhost:3000/cats/schema?limit=1000
  12 threads and 400 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   131.42ms   64.31ms    1.97s    88.08%
    Req/Sec   124.78     52.96    310.00     63.03%
  43661 requests in 30.10s, 3.78GB read
  Socket errors: connect 157, read 431, write 0, timeout 162
Requests/sec:   1450.58
Transfer/sec:    128.70MB
```
All benchmarks were run on a MacBook Pro (M1 Max).