Benchmarks #4

Open
X4 opened this Issue · 6 comments

4 participants

@X4
X4 commented

Hi Mr. Schwartz,

I came across your announcement by chance (http://www.sencha.com/forum/showthread.php?160128-Announcing-SilkJS) and found your description of your benchmark results a little misleading, so I was curious and tested it myself.

It would make sense to share your:

- machine specs (CPU + number of cores)
- kernel parameters (if any)
- NIC bandwidth
- size of the test file (100 B, 1 KB, 512 KB, 1 MB)

so that comparisons become easier for anyone with the same machine/setup. It also helps with optimizing your server.

I can recommend weighttp; ab is single-threaded and utilizes only one core/CPU.
Your server doesn't scale linearly, so varying req/s depending on request count and concurrency level is normal.
Enabling keep-alive further improves results.

I get about 4.8k to 5k req/s on a 1.3 GHz Core2Duo :) I know it's a weak machine, but I wanted to share my results anyway.
With weighttp and the same parameters I get 27k req/s on a heavily optimized nginx, 23k req/s on a heavily optimized lighttpd, and 56k req/s on G-WAN without optimization. Sorry, I haven't had the chance to test Node.js yet.

$: ab -t 30 -c 50 -k http://localhost:9090/anchor.png
...
Server Software:        SILK
Server Hostname:        localhost
Server Port:            9090

Document Path:          /anchor.png
Document Length:        523 bytes

Concurrency Level:      50
Time taken for tests:   10.402 seconds
Complete requests:      50000
Failed requests:        0
Write errors:           0
Keep-Alive requests:    50000
Total transferred:      37700000 bytes
HTML transferred:       26150000 bytes
Requests per second:    4806.58 [#/sec] (mean)
Time per request:       10.402 [ms] (mean)
Time per request:       0.208 [ms] (mean, across all concurrent requests)
Transfer rate:          3539.22 [Kbytes/sec] received

Connection Times (ms)
          min  mean[+/-sd] median   max
Connect:        0    1  50.3      0    3005
Processing:     0   10   9.5      8     176
Waiting:        0   10   9.5      8     176
Total:          0   10  52.0      8    3094

Percentage of the requests served within a certain time (ms)
  50%      8
  66%     12
  75%     15
  80%     17
  90%     21
  95%     25
  98%     30
  99%     32
 100%   3094 (longest request)



$: weighttp -n 100000 -c 100 -t 2 -k "http://localhost:9090/anchor.png"
...
finished in 19 sec, 787 millisec and 667 microsec, 5053 req/s, 3721 kbyte/s
requests: 100000 total, 100000 started, 100000 done, 100000 succeeded, 0 failed, 0 errored
traffic: 75400000 bytes total, 23100000 bytes http, 52300000 bytes data

Btw., with -t ApacheBench implicitly caps the run at 50000 requests, which is why the test above finished after ~10 seconds instead of 30 ;)

I think using 250 workers is a little naive, because the time lost to context switches is enormous; it's better to map threads to CPUs. But that's my humble opinion, tell me if I'm wrong :)
On a 6-core Xeon processor, for example, you can use up to 10 pthreads; beyond that you won't notice an improvement, only a slow decrease in performance.

Cheers!

@mschwartz
Owner
@X4
X4 commented

Thank you for giving a quick response :)

> I'd also point out that there are no pthreads in SilkJS, just pure OS processes.

Oh yes, I know; I saw in gdb that G-WAN uses pthreads, for example, and I know that pthreads have become very lightweight compared to earlier.

OK, sorry, I didn't know you could configure the number of children.

All right, I can benchmark Apache, Node.js, etc. soon and release the results in a paste. It'll be an apples-vs-oranges benchmark though, because G-WAN, Node.js, and SilkJS are application servers, while nginx, lighttpd, and Apache are pure web servers.
I was just noting that you can further optimize your server :) Check out https://github.com/vendu/OS-Zero/; the zmalloc implementation there is pretty efficient, and I've been told it's even faster than jemalloc.

@mschwartz
Owner
@nathanaschbacher

You could run V8 Isolates in a pthread like threads_a_gogo does in Node. No?

@mschwartz
Owner

I saw this about NodeJS:

https://groups.google.com/forum/?fromgroups#!topic/nodejs/zLzuo292hX0

Seems they wanted to implement V8 Isolates, then backed all that code out of the main code base.

From what I've read about Isolates, you still need a Locker around entering a JavaScript context, so you end up with big contention for the lock.

SilkJS was originally entirely pthread-based, but for C++ pages (not JavaScript). I truly wish V8 had the ability to run multiple threads concurrently in the same context. There would be no preforking in that case, just pre-threading.

@coderbuzz

Here are my quick benchmarks:

HP ProBook 4420s - Intel i5 CPU 2.67GHz, 4.00 GB RAM
Debian Crunchbang Linux x32

$ ab -t 30 -c 50 -k http://127.0.0.1/anchor.png
Apache/2.2.22 (Debian) Server at 127.0.0.1 Port 80

  • Requests per second: 8421.41 #/sec

$ ab -t 30 -c 50 -k http://127.0.0.1:9090/anchor.png
SilkJS Server at 127.0.0.1 Port 9090

  • Requests per second: 8752.05 #/sec

$ ab -t 30 -c 50 -k http://127.0.0.1:8000/anchor.png
Nodejs Server at 127.0.0.1 Port 8000

  • Requests per second: 2117.88 #/sec

$ ab -t 30 -c 50 -k http://127.0.0.1:8000/anchor.png
Nodejs Server at 127.0.0.1 Port 8000 - Cluster 4 Core CPU

  • Requests per second: 4274.60 #/sec

UPDATE:

$ ab -t 30 -c 50 -k http://127.0.0.1:8080/anchor.png
G-WAN Server at 127.0.0.1 Port 8080
  • Requests per second: 84900.89 #/sec

HP ProBook 4420s - Intel i5 CPU 2.67GHz, 4.00 GB RAM
Windows 8 x64

ab -t 30 -c 50 -k http://127.0.0.1:9000/anchor.png
Pashero 32bit Server at 127.0.0.1 Port 9000

  • Requests per second: 11034.23 #/sec