chttp refactor #211
Seems to yield a solid ~20% speedup (vs. the ~110k req/sec I was seeing previously).
```
Running 1s test @ http://localhost:8123
  4 threads and 128 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     1.36ms    2.47ms  45.41ms   94.89%
    Req/Sec    33.03k     2.93k   39.92k    75.00%
  144548 requests in 1.10s, 5.24MB read
Requests/sec: 131320.48
Transfer/sec:      4.76MB
```
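For anyone wanting to reproduce: the "1s test", "4 threads and 128 connections" lines in the output above correspond to a `wrk` invocation roughly like the following (the exact flags are an assumption reconstructed from the output, not copied from the original command):

```shell
# Hypothetical reconstruction of the benchmark command:
# -t4  -> 4 worker threads
# -c128 -> 128 open HTTP connections
# -d1s -> run for 1 second
wrk -t4 -c128 -d1s http://localhost:8123
```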
Got another 10k out of it.
```
Running 1s test @ http://localhost:8123
  4 threads and 128 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     0.90ms   39.82us   1.92ms   84.22%
    Req/Sec    35.29k    552.40    35.99k   80.00%
  140363 requests in 1.00s, 5.09MB read
Requests/sec: 140194.07
Transfer/sec:      5.08MB
```
@twof that was using Vapor 3 (not the HTTP engine). The framework has a bit more overhead, and the situation I described was a lot more complex than a static response: it involved a few DB queries and a larger (10 KB, I think) JSON payload.
The 57K was presented as an "I didn't manage to drop below this point" figure.
I ran the above tests with the following results:
FastHTTP is adding a content-type header that my test is not, so the comparison is slightly unbalanced in our favor. I am surprised by how consistent the performance is.