Web Frameworks Benchmark

The idea behind this benchmark is to re-evaluate the results presented by TechEmpower's benchmarks, covering the best and most promising open-source frameworks.

Why another benchmark?

I simply found TechEmpower's benchmark sources overcomplicated. I also wanted to test only the framework overhead, which is why each framework serves just a trivial dynamic Hello World response.

To avoid raising controversy, I also want to emphasize that this benchmark is simple and naive, and it completely skips unique features of some frameworks and languages, such as Erlang's natural clustering and hot-swap capabilities.

Results

| Language | Framework | Req/sec [1] | MB/sec | 99% ms [2] |
| --- | --- | ---: | ---: | ---: |
| Java | Undertow | 616 547 | 80.55 | 3.29 |
| C | Kore [3] | 572 782 | 104.33 | 3.99 |
| C | libmicrohttpd | 533 626 | 69.72 | 1.28 |
| Go | fasthttp | 485 185 | 67.56 | 5.26 |
| C | Onion [4] | 483 824 | 90.90 | 2.82 |
| Java | Netty | 422 580 | 40.30 | 4.08 |
| Nim m&s [6] | AsyncHTTPServer | 404 040 | 43.93 | 3.98 |
| Native | Nginx [5] | 381 368 | 43.26 | 24.24 |
| Go | net/http | 270 253 | 34.28 | 2.52 |
| Lua | OpenResty [7] | 269 205 | 30.28 | 43.35 |
| C++ | Crow | 256 552 | 31.32 | 12.28 |
| Rust | Iron | 178 789 | 19.44 | 0.05 [8] |
| Erlang | Cowboy [9] | 163 521 | 24.01 | 5.41 |
| Node | HTTP | 112 086 | 13.79 | 11.98 |
| Nim m&s [10] | AsyncHTTPServer | 86 741 | 9.43 | 1.40 |
| Nim m&s [10] | Jester [11] | 83 753 | 5.99 | 1.50 |
| Ruby | Puma [12] | 83 053 | 6.02 | 6.14 |
| D ldc2 [13] | Vibe.d 0.7.26 | 79 602 | 13.28 | 46.41 |
| D dmd [14] | Vibe.d 0.7.26 | 76 839 | 12.75 | 103.05 |
| Nim [15] | AsyncHTTPServer | 52 843 | 5.75 | 4.76 |
| Nim [15] | Jester | 42 698 | 3.05 | 5.42 |

[1] Benchmarking machine: Ubuntu 14.04 LTS, Linux 3.16, Xeon E5-1650 @ 3.50 GHz, 32 GB RAM.
[2] Latency distribution value at the 99th percentile, in milliseconds (towards worst).

[3] Kore built without SSL using make NOTLS=1.
[4] Running the hello example with a static path.
[5] Using the Nginx echo module.
[6] Nim using --gc:markandsweep, pre-forked processes sharing the port via SO_REUSEPORT (see the sketch below).
[7] OpenResty is in fact Nginx bundled with its Lua module.
[8] Rust Iron shows remarkably stable latency in longer runs.
[9] Cowboy requires some low-level tweaking via sysctl; see and apply sysctl.conf.
[10] Nim using --gc:markandsweep, single thread only.
[11] Jester is a higher-level web framework for Nim.
[12] Using several Ruby worker processes via puma -w 12.
[13] D using the LDC2 compiler v0.16.1 (LLVM 3.7.0).
[14] D using the standard DMD compiler v2.069.1.
[15] Nim using the standard reference-counting garbage collector, single thread only.

NOTE: Detailed results can be found in results/.
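Footnote 6 mentions that the Nim mark-and-sweep entry runs as pre-forked processes sharing one port via SO_REUSEPORT. The repository's Nim sources are not reproduced here; purely as an illustration of that technique, the sketch below sets the same socket option from Go (using golang.org/x/sys/unix), so several copies of the process can all bind :8080 and let the kernel spread incoming connections between them.

```go
package main

import (
	"context"
	"fmt"
	"log"
	"net"
	"net/http"
	"syscall"

	"golang.org/x/sys/unix"
)

func main() {
	// Set SO_REUSEPORT before bind() so that several independently started
	// processes can listen on the same port; the kernel then distributes
	// accepted connections across them (Linux 3.9+).
	lc := net.ListenConfig{
		Control: func(network, address string, c syscall.RawConn) error {
			var sockErr error
			if err := c.Control(func(fd uintptr) {
				sockErr = unix.SetsockoptInt(int(fd), unix.SOL_SOCKET, unix.SO_REUSEPORT, 1)
			}); err != nil {
				return err
			}
			return sockErr
		},
	}

	ln, err := lc.Listen(context.Background(), "tcp", ":8080")
	if err != nil {
		log.Fatal(err)
	}

	// The same trivial Hello World handler used by every entry in the table.
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		w.Header().Set("Content-Type", "text/plain")
		fmt.Fprint(w, "Hello World")
	})
	log.Fatal(http.Serve(ln, nil))
}
```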

Benchmarking details

Each web framework is expected to respond with Hello World content of type text/plain and the minimal set of headers required by the HTTP/1.1 specification:

$ curl -i localhost:8080
HTTP/1.1 200 OK
Connection: Keep-Alive
Content-Length: 11
Content-Type: text/plain
Date: Tue, 24 Nov 2015 17:32:30 GMT

Hello World

Some frameworks add extra headers such as Server or Expires by default, which are not required by the HTTP/1.1 specification. Where possible we apply settings or tweaks to remove them, since more headers (and thus more data) have a negative impact on performance.
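For reference, this is roughly what the Go net/http entry boils down to: a handler that writes the static Hello World body as text/plain and otherwise lets the standard library fill in Date and Content-Length. This is a minimal sketch, not the repository's actual source.

```go
package main

import (
	"log"
	"net/http"
)

func main() {
	// Static body, text/plain; net/http adds Date and Content-Length on its own,
	// so no extra headers beyond the minimal set are sent.
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		w.Header().Set("Content-Type", "text/plain")
		w.Write([]byte("Hello World"))
	})
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```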

Effectively, this benchmark tests solely the framework overhead itself. We are not testing database access or JSON serialization performance. We also avoid extra optimizations, such as caching response memory structures, which could improve performance a bit but would contradict the dynamic behavior of the tested frameworks.
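This excerpt does not name the load generator behind the Req/sec, MB/sec and 99th-percentile columns. Purely as an illustration, a tool such as wrk can report all three metrics in a single run against one of the servers above:

$ wrk -t8 -c256 -d30s --latency http://localhost:8080/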

Conclusions

As expected, the Java solution Undertow is the most optimized. Second and third places are occupied by the native C frameworks Kore and libmicrohttpd.

The new fasthttp Go solution takes fourth place, very close to the top three. Go is a flexible little language that improves productivity, and it is already used by many companies to deliver heavy-load network services.

It should also be noted that different frameworks generated different amounts of data due to the different HTTP headers they send, which is why the MB/sec column does not track Req/sec exactly.

License

This benchmark is provided under the MIT license:

Copyright (c) 2015 Adam Strzelecki

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
