The idea behind this repository is to benchmark HTTP server implementations across different languages.
The application I tested is minimal: the HTTP version of the Hello World example.
This approach allows including languages I barely know, since it is pretty easy to find such implementations online.
If you're looking for more complex examples, you will have better luck with the TechEmpower benchmarks.
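To make the scope concrete: the benchmarked application is nothing more than a server answering every request with a Hello World payload. A minimal sketch of such a server, shown here in Python's standard library for illustration (the actual per-language implementations live in the `servers/` directory):

```python
# Minimal "HTTP Hello World" of the kind being benchmarked,
# sketched with Python's standard library.
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

class HelloHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"Hello World"
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # silence per-request logging during benchmarks

def run(host="0.0.0.0", port=9292):
    ThreadingHTTPServer((host, port), HelloHandler).serve_forever()
```

Every server in the suite follows this same shape, which keeps the comparison about the runtime and the HTTP stack rather than application logic.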
Please do take the following numbers with a grain of salt: it is not my intention to promote one language over another based on micro-benchmarks.
Indeed, you should never pick a language based solely on its presumed performance.
I have become lazy over the years and only adopt languages I can install via
Homebrew, sorry Oracle/MS. This also allows me to benchmark them in a single session, keeping the environment as neutral as possible.
Where possible I just relied on the standard library; where it is not production-ready (e.g. Ruby, Python), I picked a popular application server instead.
Ruby 3.0.0 is used. Ruby is a general-purpose, interpreted, dynamic programming language, focused on simplicity and productivity.
Python 3.9.1 is used. Python is a widely used high-level, general-purpose, interpreted, dynamic programming language.
Crystal 0.35.1 is used. Crystal has a syntax very close to Ruby's, but brings some desirable features such as static typing and ahead-of-time (AOT) compilation.
I used wrk as the load-testing tool.
I measured each application server six times and picked the best lap (except for VM-based languages, which demand a longer warm-up).
wrk -t 4 -c 100 -d30s --timeout 2000 http://0.0.0.0:9292
These benchmarks were recorded on a 2019 MacBook Pro 13" with these specs:
- macOS Catalina
- 1.4 GHz Quad-Core Intel Core i5
- 8 GB 2133 MHz LPDDR3
RAM and CPU
I measured RAM and CPU consumption with the macOS Activity Monitor, recording the maximum peak.
For the languages relying on pre-forking parallelism, I reported the average consumption by taking a snapshot during the stress period.
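The pre-forking model mentioned above is worth spelling out, since it explains why those servers show up as several processes in Activity Monitor: a parent binds the listening socket once, then forks N workers that all accept on it. A hypothetical Python sketch of the idea (not code from this repository):

```python
# Sketch of the pre-forking model: the parent binds the listening
# socket once, then forks workers that all accept() on it; the
# kernel distributes incoming connections among them.
import os
import socket

def worker_loop(sock):
    # Each forked worker blocks on accept() against the shared socket.
    while True:
        conn, _ = sock.accept()
        conn.recv(1024)  # read (and discard) the request
        conn.sendall(
            b"HTTP/1.1 200 OK\r\n"
            b"Content-Length: 11\r\n"
            b"Connection: close\r\n\r\n"
            b"Hello World"
        )
        conn.close()

def prefork(host, port, workers):
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind((host, port))
    sock.listen(128)
    pids = []
    for _ in range(workers):
        pid = os.fork()
        if pid == 0:
            worker_loop(sock)  # children never return
            os._exit(0)
        pids.append(pid)
    return sock, pids  # parent keeps the pids for cleanup
```

Puma's `-w`, Gunicorn's `-w`, and Node's cluster module all apply this pattern, which is why their memory figures are per-suite averages rather than a single process peak.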
|Language|App Server|Requests/sec|RAM (MB)|CPU (%)|
|---|---|---|---|---|
|Ruby+MJIT|Puma|36455.88|> 100|> 580|
|Elixir|Plug with Cowboy|46416.25|50.5|583.8|
|Ruby|Puma|47975.36|> 100|> 580|
|Python|Gunicorn with Meinheld|120105.65|> 40|> 380|
RUBYOPT='--jit' puma -w 8 -t 2 --preload servers/rack_server.ru
Gunicorn with Meinheld
cd servers
gunicorn -w 4 -k meinheld.gmeinheld.MeinheldWorker -b :9292 wsgi_server:app
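The `wsgi_server:app` target Gunicorn loads here is presumably a plain WSGI callable. A minimal sketch of what such an app looks like (an assumption for illustration, not the repository's actual file):

```python
# Illustrative sketch of a minimal WSGI callable, as could back
# the wsgi_server:app target referenced above (assumed, not the
# repository's actual code).
def app(environ, start_response):
    body = b"Hello World"
    start_response("200 OK", [
        ("Content-Type", "text/plain"),
        ("Content-Length", str(len(body))),
    ])
    return [body]
```

Because WSGI is synchronous, the worker class (`-k meinheld.gmeinheld.MeinheldWorker`) is what provides the fast event loop underneath the callable.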
I used the cluster module included in Node's standard library.
I used the async HTTP server from the Dart standard library and compiled it with the
dart2native AOT compiler.
dart2native servers/dart_server.dart -k aot
dartaotruntime servers/dart_server.aot
Plug with Cowboy
cd servers/plug_server
MIX_ENV=prod mix compile
MIX_ENV=prod mix run --no-halt
I used the Crystal HTTP server from the standard library, enabling parallelism via the preview_mt flag.
crystal build -Dpreview_mt --release servers/crystal_server.cr
./crystal_server
To test Nim I opted for the httpbeast library: an asynchronous server relying on the Nim HTTP standard library.
nim c -d:release --threads:on servers/httpbeast_server.nim
./servers/httpbeast_server
I used the HTTP ServeMux from the Go standard library.
go run servers/servemux_server.go