
Performance test #38

Closed
zzzteph opened this Issue May 04, 2012 · 21 comments

7 participants

zzzteph Sönke Ludwig Johannes Pfau btko02 David Nadlinger Chase Colman davidSky
zzzteph
zzzteph commented May 04, 2012

Hi everyone!
I just tested vibe.d vs. Node.js and got pitiful results...
I wrote (Ctrl+C) two simple apps.

vibe.d:

```d
import vibe.d;

void handleRequest(HttpServerRequest req, HttpServerResponse res)
{
    res.writeBody(cast(ubyte[])"Hello, World!", "text/plain");
}

static this()
{
    setLogLevel(LogLevel.Trace);
    auto settings = new HttpServerSettings;
    settings.port = 8080;
    listenHttp(settings, &handleRequest);
}
```
And Node.js:

```javascript
var sys = require('sys'),
    http = require('http');

http.createServer(function(req, res) {
    res.writeHead(200, {'Content-Type': 'text/html'});
    // The HTML markup around "Hello World" was stripped from the original paste.
    res.write('Hello World');
    res.end();
}).listen(8080);
```
Tested with:

```
ab -r -n 100000 -c 1000 http://127.0.0.1:8080/
```

And got these results (vibe.d):

```
Concurrency Level:      1000
Time taken for tests:   515.134 seconds
Complete requests:      100000
Failed requests:        7662
   (Connect: 0, Receive: 2554, Length: 2554, Exceptions: 2554)
Write errors:           0
Total transferred:      13155885 bytes
HTML transferred:       1266863 bytes
Requests per second:    194.12 #/sec
Time per request:       5151.341 ms
Time per request:       5.151 ms
Transfer rate:          24.94 [Kbytes/sec] received
```
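For reading these ab summaries: `-n` is the total number of requests, `-c` the concurrency, and `-r` tells ab not to exit on socket receive errors. The two "Time per request" lines follow from the throughput figure by simple arithmetic, which makes it easy to sanity-check a run; checking the vibe.d numbers above:

```shell
# ab reports: mean time per request = concurrency / (requests per second),
# and the "across all concurrent requests" figure = that divided by the concurrency.
awk 'BEGIN {
    rps  = 194.12   # "Requests per second" from the run above
    conc = 1000     # -c 1000
    printf "mean time per request: %.0f ms\n", conc / rps * 1000
    printf "across all requests:   %.3f ms\n", 1000 / rps
}'
```

This reproduces the reported ~5151 ms and 5.151 ms, so the three figures are internally consistent; the problem is the raw throughput itself.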

Results (Node.js):

```
Concurrency Level:      1000
Time taken for tests:   16.212 seconds
Complete requests:      100000
Failed requests:        0
Write errors:           0
Total transferred:      8100000 bytes
HTML transferred:       1800000 bytes
Requests per second:    6168.42 #/sec
Time per request:       162.116 ms
Time per request:       0.162 ms
Transfer rate:          487.93 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0   97  640.7      0    9028
Processing:     0   23   50.2     20    9831
Waiting:        0   22   50.2     20    9831
Total:          1  120  648.4     21    9831
```

I understand that vibe.d is a very young project, but maybe I did something wrong?

Sönke Ludwig
Owner

I tested two days ago and got around 6400 req/s, although that was with -c 100 instead of -c 1000, because the connection queue filled up too fast otherwise (over a 1 GBit connection). The test was on Windows, and the results may vary on other platforms. Can you run your test with "vibe verbose" on the current version and see which backend libevent is using? It will output "libevent is using ... for events." at startup.

One thing that I'm planning for the near future is to add a libev and/or libuv driver, as well as a WinRT driver later. According to benchmarks this should give some considerable improvements in high-load situations.

zzzteph
zzzteph commented May 04, 2012

I got nothing about libevent at startup, only:

```
vibe verbose
F80:00000000 INF] Updating application in '/home/steph/dapi'
[B731AF80:00000000 INF] You are up to date
[B7295F80:00000000 WRN] Failed to parse config file /etc/vibe/vibe.conf: /etc/vibe/vibe.conf: No such file or directory
[B7295F80:00000000 INF] Listening on 0.0.0.0 port 8080 succeeded
[B7295F80:00000000 ERR] Error binding listening socket
[B7295F80:00000000 INF] Listening on :: port 8080 failed
[B7295F80:00000000 INF] Running event loop...
```

Also, I tested both on Linux.

Sönke Ludwig
Owner

Sorry, you are right, the libevent initialization happens before the verbose flag takes effect. Can you change source/vibe/core/drivers/libevent2.d line 52 from logDebug to logInfo and try again? I will test on a Linux machine tomorrow and see what I get there.

zzzteph
zzzteph commented May 04, 2012

```
vibe verbose
[B734EF80:00000000 INF] libevent version: 2.0.18-stable
[B734EF80:00000000 INF] libevent is using epoll for events.
[B734EF80:00000000 INF] Updating application in '/home/steph/dapi'
[B734EF80:00000000 INF] You are up to date
[B721DF80:00000000 INF] libevent version: 2.0.18-stable
[B721DF80:00000000 INF] libevent is using epoll for events.
```

Sönke Ludwig
Owner

Okay, my test results are below (benchmark machine is an AMD Phenom II X4 925 (2.8 GHz), Ubuntu 12.04, epoll).
What's noticeable is that the number of exceptionally long/failed requests is much larger with 1000 concurrent connections. However, the numbers, although still far from optimal, are also far better than the 194 #/s, so I'm not sure why it went so badly there. I would guess there is a bug either in libevent or in the way I'm using it that causes the failing requests. The fact that VPM often hangs on Linux while downloading is another sign.

I will write a libev based back end in the coming days and see what that changes.

Using an Atom netbook over a 100 MBit network for ab:

```
ab -r -n 100000 -c 100 http://mainframe:8080/static/1k
-> 3800 #/sec
-> 4.1 MB/s
-> 1% requests took 1s-3s

ab -r -n 100000 -c 1000 http://mainframe:8080/static/1k
-> 3500 #/sec
-> 3.8 MB/s
-> 1% requests took 3s-17s

ab -r -n 100000 -c 1000 http://mainframe:8080/static/10k
-> 1100 #/sec
-> 11 MB/s
-> 1% requests took 1s-16s

ab -r -n 100000 -c 1000 http://mainframe:8080/static/10k
-> 750 #/sec
-> 7.4 MB/s
-> 1% requests took 10s-46s

ab -r -n 100000 -c 100 http://mainframe:8080/file/1k
-> 3300 #/sec
-> 4.2 MB/s
-> 1% requests took 1s-3s
```

Using loopback:

```
ab -r -n 100000 -c 10 http://127.0.0.1:8080/static/1k
-> 8400 #/sec
-> 9.2 MB/s
-> 1% requests took 3ms-221ms, 99% took <=3ms

ab -r -n 100000 -c 100 http://127.0.0.1:8080/static/1k
-> 8000 #/sec
-> 8.8 MB/s
-> 1% requests took 4ms-6s, 99% took <=4ms

ab -r -n 100000 -c 1000 http://127.0.0.1:8080/static/1k
-> 3300 #/sec
-> 3.5 MB/s
-> 1% requests took 3s-29s, 95% took <=4ms

ab -r -n 100000 -c 100 http://127.0.0.1:8080/file/1k
-> 5000 #/sec
-> 6.3 MB/s
-> 1% requests took 1s-8s, 98% took <=4ms
```
Johannes Pfau
jpf91 commented May 06, 2012

Off topic: 'libev based back end'

Are you going to use https://github.com/D-Programming-Deimos/libev ? I'm maintaining those bindings, so if there are any problems just let me know. (I should probably update the binding to version 4.11)

Sönke Ludwig
Owner

I was about to generate one myself ;) Thanks, I will try them.

Sönke Ludwig
Owner

I have made some tests with libev and unfortunately wasn't able to get better results than with libevent - I'm probably doing something wrong. But since the documentation and example situation is quite disappointing, I'm currently leaning towards directly implementing drivers based on epoll, win32, and kqueue, since it's a lot more obvious what happens there if something is slow or broken.

However, there are a number of small optimizations in the current master version. I'm getting the following results with libevent on the same AMD machine (ethernet is GbE this time).

```
ab -r -n 100000 -c 100 http://127.0.0.1:8080/empty
-> 19200 #/sec
-> median 5ms

ab -r -n 100000 -c 1000 http://127.0.0.1:8080/empty
-> 12700 #/sec
-> median 7ms

ab -r -n 100000 -c 100 http://192.168.102.106:8080/empty
-> 10200 #/sec
-> median 8ms

ab -r -n 100000 -c 1000 http://192.168.102.106:8080/empty
-> 9300 #/sec
-> median 26ms
```

With a load balancer and 4 vibe.d processes, 40K req/s should be doable on a 4-core machine. Built-in support for multithreading is planned for 0.9/1.0.
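As a rough illustration of that setup (not something from this thread): an HTTP front end such as nginx could fan requests out to four single-threaded vibe.d processes. The ports and upstream name below are made up for the example.

```nginx
# Hypothetical load-balancer config: four vibe.d instances on ports 8080-8083.
upstream vibed_workers {
    server 127.0.0.1:8080;
    server 127.0.0.1:8081;
    server 127.0.0.1:8082;
    server 127.0.0.1:8083;
}

server {
    listen 80;
    location / {
        proxy_pass http://vibed_workers;
    }
}
```

Each worker would be a separate OS process running the same app on its own port, which sidesteps the lack of built-in multithreading until 0.9/1.0.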

If the results on certain machines are still exceptionally slow, it would be interesting to see the full log of "ab" and the machine specs.

zzzteph
zzzteph commented May 13, 2012

I've tested the benchmark example (on Arch Linux) and got really good numbers:
```
ab -r -n 100000 -c 100 http://127.0.0.1:8080/static/10k

Server Software:        vibe.d/0.8
Server Hostname:        127.0.0.1
Server Port:            8080

Document Path:          /static/10k
Document Length:        10000 bytes

Concurrency Level:      100
Time taken for tests:   11.217 seconds
Complete requests:      100000
Failed requests:        0
Write errors:           0
Total transferred:      1012500000 bytes
HTML transferred:       1000000000 bytes
Requests per second:    8915.19 [#/sec] (mean)
Time per request:       11.217 [ms] (mean)
Time per request:       0.112 [ms] (mean, across all concurrent requests)
Transfer rate:          88150.68 [Kbytes/sec] received
```

```
ab -r -n 100000 -c 1000 http://127.0.0.1:8080/static/10k

Server Software:        vibe.d/0.8
Server Hostname:        127.0.0.1
Server Port:            8080

Document Path:          /static/10k
Document Length:        10000 bytes

Concurrency Level:      1000
Time taken for tests:   12.767 seconds
Complete requests:      100000
Failed requests:        423
   (Connect: 0, Receive: 141, Length: 141, Exceptions: 141)
Write errors:           0
Total transferred:      1011072375 bytes
HTML transferred:       998590000 bytes
Requests per second:    7832.54 [#/sec] (mean)
Time per request:       127.672 [ms] (mean)
Time per request:       0.128 [ms] (mean, across all concurrent requests)
Transfer rate:          77336.58 [Kbytes/sec] received
```

Node.js:

```
ab -r -n 100000 -c 1000 http://127.0.0.1:8080/

Server Software:
Server Hostname:        127.0.0.1
Server Port:            8080

Document Path:          /
Document Length:        9 bytes

Concurrency Level:      1000
Time taken for tests:   15.082 seconds
Complete requests:      100000
Failed requests:        0
Write errors:           0
Total transferred:      7200000 bytes
HTML transferred:       900000 bytes
Requests per second:    6630.31 [#/sec] (mean)
Time per request:       150.822 [ms] (mean)
Time per request:       0.151 [ms] (mean, across all concurrent requests)
Transfer rate:          466.19 [Kbytes/sec] received
```

```
ab -r -n 100000 -c 100 http://127.0.0.1:8080/

Server Software:
Server Hostname:        127.0.0.1
Server Port:            8080

Document Path:          /
Document Length:        9 bytes

Concurrency Level:      100
Time taken for tests:   15.094 seconds
Complete requests:      100000
Failed requests:        0
Write errors:           0
Total transferred:      7200000 bytes
HTML transferred:       900000 bytes
Requests per second:    6625.27 [#/sec] (mean)
Time per request:       15.094 [ms] (mean)
Time per request:       0.151 [ms] (mean, across all concurrent requests)
Transfer rate:          465.84 [Kbytes/sec] received
```

btko02
btko02 commented May 13, 2012

Very cool :) Tested on Ubuntu 12.04 and compared with Python Tornado Web and Node.js.

###########################################
vibe.d simplehttp: ab -n 10000 -c 1000 http://localhost:8080/
Requests per second: 12445.40 #/sec

###########################################
vibe.d/benchmark/ with Diet templates, but it responds with localhost.dms files and errors!!!
Benchmarking localhost (be patient)
Completed 1000 requests
Completed 2000 requests
Completed 3000 requests
Completed 4000 requests
Completed 5000 requests
Completed 6000 requests
Completed 7000 requests
Completed 8000 requests
Completed 9000 requests
apr_socket_recv: Connection reset by peer (104)
Total of 9969 requests completed

###########################################
Node.js: ab -n 10000 -c 1000 http://localhost:1337/
Requests per second: 5447.63 #/sec

###########################################
Tornado Web: ab -n 10000 -c 1000 http://localhost:8888/
Requests per second: 2703.21 #/sec

David Nadlinger

@btko02: By the way, you can use triple backticks (```) to denote preformatted/code blocks that GitHub's Markdown parser should leave alone – this usually increases readability for console pastes.

btko02
btko02 commented May 13, 2012

Thank you, klickverbot. But I want to use bold.

vibe.d vs G-WAN (free, closed source)

###########################################
G-WAN

```
ab -n 10000 -c 1000 http://127.0.0.1:8080/?hello.c

Server Software:        G-WAN
Server Hostname:        127.0.0.1
Server Port:            8080

Document Path:          /?hello.c
Document Length:        12 bytes

Concurrency Level:      1000
Time taken for tests:   1.104 seconds
Complete requests:      10000
Failed requests:        0
Write errors:           0
Total transferred:      2820000 bytes
HTML transferred:       120000 bytes
Requests per second:    9060.49 #/sec
Time per request:       110.369 ms
Time per request:       0.110 ms
Transfer rate:          2495.17 [Kbytes/sec] received
```

###########################################
vibe.d

```
ab -n 10000 -c 1000 http://127.0.0.1:8080/

Server Software:
Server Hostname:        127.0.0.1
Server Port:            8080

Document Path:          /
Document Length:        13 bytes

Concurrency Level:      1000
Time taken for tests:   0.737 seconds
Complete requests:      10000
Failed requests:        0
Write errors:           0
Total transferred:      1310000 bytes
HTML transferred:       130000 bytes
Requests per second:    13564.77 #/sec
Time per request:       73.720 ms
Time per request:       0.074 ms
Transfer rate:          1735.34 [Kbytes/sec] received
```

vibe.d is the winner :)
Chase Colman
chase commented May 14, 2012

@btko02 This might make Pierre a bit upset, haha. What is the memory usage of G-WAN vs Vibe.d in those benchmarks?

davidSky

@btko02 your test is inaccurate. Look at "Total transferred": G-WAN delivered more than twice as many bytes. To make a fair test, print an equal number of bytes. Also, I'd suggest using G-WAN's handlers instead of servlets.
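That size difference can be read straight off the two ab outputs above: dividing "Total transferred" by the number of completed requests gives the average on-wire response size, headers included. A quick check:

```shell
# Average bytes per response in the G-WAN vs. vibe.d runs above
# (Total transferred / Complete requests).
awk 'BEGIN {
    printf "G-WAN:  %d bytes/response\n", 2820000 / 10000
    printf "vibe.d: %d bytes/response\n", 1310000 / 10000
}'
```

So G-WAN sent 282 bytes per response against vibe.d's 131; since the bodies were only 12 vs. 13 bytes, the difference is almost entirely headers, which inflates G-WAN's transfer-rate figure relative to its request rate.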

vibe.d team: great improvement! From 500+ to 10K+ requests per second in less than two weeks -- incredible stuff! Break some records and you'll make vibe.d (and D) famous ;-)

zzzteph
zzzteph commented May 17, 2012

I found out what was wrong in my test (in my first post, and maybe in others'):

```d
setLogLevel(LogLevel.Trace);
```

With this call, vibe.d becomes the slowest framework of them all.

btko02
btko02 commented May 17, 2012

@zzzteph I agree with you! In my tests:

//setLogLevel(LogLevel.Trace);

:)

btko02
btko02 commented May 17, 2012

@davidSky Can you share gwan's handlers?

David Nadlinger

I haven't looked at the issue in detail, but maybe adding a trace-level startup log message indicating that poor performance is to be expected could help prevent similar misunderstandings in the future?

davidSky

@btko02 I'm not sure what you want me to share... the handlers are part of G-WAN; there are some examples in G-WAN's archive, e.g. /gwan/0.0.0.0_80/#0.0.0.0/handlers/"main__xyz.c__". Just rename it to main.c and restart G-WAN, then open 127.0.0.1 and you'll see the "Hello World" message without having to call /?scp=xyz (RTFM!)

It's actually a very good (and very fast) solution for routing, reverse proxying, and other similar tasks... Right now I'm using nginx for those, but I'll probably move to G-WAN, and hopefully to vibe.d later.

Sönke Ludwig
Owner

Re: LogLevel.Trace
I removed the setLogLevel() call from all examples now. They were only there because the examples were used for debugging; they should never have been in the repository.

Sönke Ludwig s-ludwig closed this May 20, 2012
Sönke Ludwig
Owner

In the latest 0.7.3 release, a long-standing bug was fixed where some connections were not handled. As a side effect, I now get the following numbers with no failed requests:

```
ab -r -n 100000 -c 1000 http://127.0.0.1:8080/static/1k
-> 12700 #/sec
-> median 8ms

ab -r -n 100000 -c 100 http://127.0.0.1:8080/static/1k
-> 17600 #/sec
-> median 5ms
```

looking good!

(edit: There was an error in the benchmark that skewed the numbers; the current ones are correct, with the important fact that there are no failed requests anymore.)
