Performance test #38
I ran a test two days ago and got around 6400 req/s, although that was with -c 100 instead of -c 1000, because the connection queue filled up too fast otherwise (over a 1 GBit connection). The test was on Windows, and the results may vary on other platforms. Can you run your test with "vibe verbose" on the current version and see which backend libevent is using? It will output "libevent is using ... for events." at startup. One thing I'm planning for the near future is to add a libev and/or libuv driver, and later a WinRT driver. According to benchmarks, this should give considerable improvements in high-load situations.
I've got nothing about libevent at startup, only:
Sorry, you are right; the libevent initialization happens before the verbose flag takes effect. Can you change source/vibe/core/drivers/libevent2.d line 52 from logDebug to logInfo and try again? I will test on a Linux machine tomorrow and see what I get there.
vibe verbose
Okay, my test results are below (benchmark machine is an AMD Phenom II X4 925 (2.8 GHz), Ubuntu 12.04, epoll). I will write a libev-based back end in the coming days and see what that changes.
Using an Atom netbook over a 100 MBit network for ab:
Using loopback:
Off topic: regarding the 'libev based back end', are you going to use https://github.com/D-Programming-Deimos/libev ? I'm maintaining those bindings, so if there are any problems, just let me know. (I should probably update the binding to version 4.11.)
I was about to generate one myself ;) Thanks, I will try them. |
I have made some tests with libev and unfortunately wasn't able to get better results than with libevent; I'm probably doing something wrong. But since the documentation and example situation is quite disappointing, I'm currently leaning towards directly implementing drivers based on epoll, win32 and kqueue, since it's much more obvious what happens there if something is slow or broken. However, there are a number of small optimizations in the current master version. I'm getting the following results with libevent on the same AMD machine (ethernet is GbE this time). With a load balancer and 4 vibe.d processes, 40k req/s should be doable on a 4-core machine. Built-in support for multithreading is planned for 0.9/1.0. If the results on certain machines are still exceptionally slow, it would be interesting to see the full log of "ab" and the machine specs.
I've tested the benchmark example (on Arch Linux) and got really good numbers:
ab -r -n 100000 -c 1000 http://127.0.0.1:8080/static/10k
Node.js:
ab -r -n 100000 -c 100 http://127.0.0.1:8080/
ab -r -n 100000 -c 1000 http://127.0.0.1:8080/
Very cool :) Tested on Ubuntu 12.04 and compared with Python Tornado Web and Node.js.
@btko02: By the way, you can use triple backticks (```) to denote preformatted/code blocks that GitHub's Markdown parser should leave alone – this usually increases readability for console pastes. |
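For instance, one of the ab invocations from this thread, wrapped in a fenced block so GitHub renders it verbatim:

````
```
ab -r -n 100000 -c 1000 http://127.0.0.1:8080/
```
````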
Thank you, klickverbot. But I want to use _bold_.
_vibe.d_ vs _G-WAN_ (free, closed source):
Server Software: G-WAN
Document Path: /?hello.c
Concurrency Level: 1000
Document Path: /
Concurrency Level: 1000
@btko02 This might make Pierre a bit upset, haha. What is the memory usage of G-WAN vs Vibe.d in those benchmarks? |
@btko02 Your test is inaccurate. Look at "Total transferred": gwan delivered twice as many bytes... to make a fair test, print an equal number of bytes. Also, I'd suggest using gwan's handlers instead of servlets. vibe.d team: great improvement! From 500+ to 10k+ requests per second in less than two weeks; incredible stuff! Break some records and you'll make vibe.d (and D) famous ;-)
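To illustrate the "Total transferred" point, here is a small sketch (JavaScript, to match the Node example in the original post below; the helper name is my own) that derives the average bytes on the wire per request from ab's summary figures, so two runs can be checked for comparable response sizes:

```javascript
// Average bytes on the wire per request, from ab's summary output.
function bytesPerRequest(totalTransferred, completeRequests) {
  return totalTransferred / completeRequests;
}

// Figures from the ab runs quoted in the original post at the end of this thread:
console.log(bytesPerRequest(13155885, 100000)); // vibe.d run: ~131.6 bytes/request
console.log(bytesPerRequest(8100000, 100000));  // Node.js run: 81 bytes/request
```

If the per-request byte counts differ substantially, the servers are not sending comparable payloads and the req/s numbers are not directly comparable.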
I found out what was wrong in my test (my first post, and maybe others'): with trace-level logging enabled (the setLogLevel(LogLevel.Trace) call in my example below), vibe.d came out as the slowest framework every time.
@zzzteph I agree with you! In my tests: :) |
@davidsky Can you share gwan's handlers? |
I haven't looked at the issue in detail, but maybe adding a trace-level startup log message indicating that poor performance is to be expected could help prevent similar misunderstandings in the future?
@btko02 I'm not sure what you want me to share... the handlers are part of gwan; some examples come with gwan's archive, e.g. /gwan/0.0.0.0_80/#0.0.0.0/handlers/"main__xyz.c__". Just rename it to main.c and restart gwan, then open 127.0.0.1 and you'll see the "Hello World" message without needing to call /?scp=xyz (RTFM!). It's actually a very good (and very fast) solution for routing, reverse proxying and other similar tasks... Right now I'm using nginx for those tasks, but I'll probably move to gwan, and hopefully vibe.d later.
Re: LogLevel.Trace
In the latest 0.7.3 release, a long-standing bug was fixed where some connections were not handled. As a side effect, I now get the following numbers with no failed requests: looking good! (Edit: there was an error in the benchmark that skewed the numbers; the current ones are correct, with the important fact that there are no failed requests anymore.)
Looking at the performance figures here (https://github.com/nanoant/WebFrameworkBenchmark), I'm beginning to wonder if you guys made up the figures in your benchmarks.
Looks like it's running on distribute and avoiding manual memory management. I wonder what the figures would be without that contention. It would be better off running a process per core, like PostgreSQL does.
Could someone please make the necessary pull requests for the project? They seem more serious, consistent and up to date compared to the TechEmpower Framework Benchmarks... I'm just a beginner, so there's little I can contribute to that project.
I'm not sure what you mean. Sorry if I'm being superficial, but this project with 13 stars is more serious than TechEmpower with 2,000 stars?
|
Yes that's exactly what I mean... I don't know if you've been following TFB, and if you've been... I don't know how long, but there's a significant lot of inconsistencies with TFB, to list a few;
GitHub Stars don't really matter when QUALITY & CONSISTENCY is in perspective... besides, that's the point of there always being something new... Go, Rust, NodeJS, etc |
Hi everyone!
I just tested vibe.d vs Node.js and got pitiful results...
I wrote (Ctrl+C) two simple apps:
(Vibe.d)
```
import vibe.d; // module import omitted from the original paste

void handleRequest(HttpServerRequest req, HttpServerResponse res)
{
    res.writeBody(cast(ubyte[])"Hello, World!", "text/plain");
}

static this()
{
    setLogLevel(LogLevel.Trace); // trace logging badly hurts throughput, as noted earlier in the thread
    auto settings = new HttpServerSettings;
    settings.port = 8080;
    listenHttp(settings, &handleRequest);
}
```
And Node.js:
```
var sys = require('sys'),
    http = require('http');
http.createServer(function(req, res) {
    res.writeHead(200, {'Content-Type': 'text/html'});
    res.write('Hello World');
    res.end();
}).listen(8080);
```
Test with:
```
ab -r -n 100000 -c 1000 http://127.0.0.1:8080/
```
And got results (vibe.d):
```
Concurrency Level:      1000
Time taken for tests:   515.134 seconds
Complete requests:      100000
Failed requests:        7662
   (Connect: 0, Receive: 2554, Length: 2554, Exceptions: 2554)
Write errors:           0
Total transferred:      13155885 bytes
HTML transferred:       1266863 bytes
Requests per second:    194.12 [#/sec] (mean)
Time per request:       5151.341 [ms] (mean)
Time per request:       5.151 [ms] (mean, across all concurrent requests)
Transfer rate:          24.94 [Kbytes/sec] received
```
Results (Node.js):
```
Concurrency Level:      1000
Time taken for tests:   16.212 seconds
Complete requests:      100000
Failed requests:        0
Write errors:           0
Total transferred:      8100000 bytes
HTML transferred:       1800000 bytes
Requests per second:    6168.42 [#/sec] (mean)
Time per request:       162.116 [ms] (mean)
Time per request:       0.162 [ms] (mean, across all concurrent requests)
Transfer rate:          487.93 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0   97  640.7      0    9028
Processing:     0   23   50.2     20    9831
Waiting:        0   22   50.2     20    9831
Total:          1  120  648.4     21    9831
```
I understand that vibe is a very young project, but did I do something wrong?