This repository has been archived by the owner on Jan 3, 2018. It is now read-only.
Performance
jmervine edited this page Jul 9, 2012 · 7 revisions
Production -- with Diskcached
This test, like the one below, was run on a production box, with httperf run from a home machine.
httperf --client=0/1 --server=www.rubyops.net --port=80 --uri=/ --send-buffer=4096 --recv-buffer=16384 --num-conns=1000 --num-calls=1
httperf: warning: open file limit > FD_SETSIZE; limiting max. # of open files to FD_SETSIZE
Maximum connect burst length: 1
Total: connections 1000 requests 1000 replies 1000 test-duration 422.098 s
Connection rate: 2.4 conn/s (422.1 ms/conn, <=1 concurrent connections)
Connection time [ms]: min 402.9 avg 422.1 max 581.4 median 413.5 stddev 24.0
Connection time [ms]: connect 138.3
Connection length [replies/conn]: 1.000
Request rate: 2.4 req/s (422.1 ms/req)
Request size [B]: 68.0
Reply rate [replies/s]: min 2.2 avg 2.4 max 2.6 stddev 0.1 (84 samples)
Reply time [ms]: response 142.2 transfer 141.6
Reply size [B]: header 241.0 content 24281.0 footer 0.0 (total 24522.0)
Reply status: 1xx=0 2xx=1000 3xx=0 4xx=0 5xx=0
CPU time [s]: user 307.67 system 114.44 (user 72.9% system 27.1% total 100.0%)
Net I/O: 56.9 KB/s (0.5*10^6 bps)
Errors: total 0 client-timo 0 socket-timo 0 connrefused 0 connreset 0
Errors: fd-unavail 0 addrunavail 0 ftab-full 0 other 0
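The httperf summaries on this page are plain text, so when tracking runs over time it can help to pull the headline figures out programmatically. A minimal parsing sketch (the regexes and field names are my own, matched against the summary format above):

```python
import re

# Sample lines copied from the "with Diskcached" summary above.
SUMMARY = """\
Connection rate: 2.4 conn/s (422.1 ms/conn, <=1 concurrent connections)
Request rate: 2.4 req/s (422.1 ms/req)
Reply time [ms]: response 142.2 transfer 141.6
Reply status: 1xx=0 2xx=1000 3xx=0 4xx=0 5xx=0
"""

def parse_httperf(text):
    """Extract a few headline numbers from an httperf summary."""
    stats = {}
    m = re.search(r"Request rate: ([\d.]+) req/s", text)
    if m:
        stats["req_per_sec"] = float(m.group(1))
    m = re.search(r"Reply time \[ms\]: response ([\d.]+) transfer ([\d.]+)", text)
    if m:
        stats["response_ms"] = float(m.group(1))
        stats["transfer_ms"] = float(m.group(2))
    m = re.search(r"2xx=(\d+)", text)
    if m:
        stats["replies_2xx"] = int(m.group(1))
    return stats

print(parse_httperf(SUMMARY))
```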
Production -- without Diskcached
This test was run on a production box, but httperf was run from a home machine on a cable modem (which may explain the higher response times).
httperf --client=0/1 --server=www.rubyops.net --port=80 --uri=/ --send-buffer=4096 --recv-buffer=16384 --num-conns=10000 --num-calls=4
httperf: warning: open file limit > FD_SETSIZE; limiting max. # of open files to FD_SETSIZE
Maximum connect burst length: 1
Total: connections 476 requests 1904 replies 1903 test-duration 1361.380 s
Connection rate: 0.3 conn/s (2860.0 ms/conn, <=1 concurrent connections)
Connection time [ms]: min 2682.1 avg 2860.5 max 3981.1 median 2852.5 stddev 80.2
Connection time [ms]: connect 135.0
Connection length [replies/conn]: 4.006
Request rate: 1.4 req/s (715.0 ms/req)
Request size [B]: 68.0
Reply rate [replies/s]: min 1.0 avg 1.4 max 1.6 stddev 0.1 (272 samples)
Reply time [ms]: response 281.7 transfer 399.6
Reply size [B]: header 241.0 content 71693.0 footer 0.0 (total 71934.0)
Reply status: 1xx=0 2xx=1903 3xx=0 4xx=0 5xx=0
CPU time [s]: user 978.63 system 382.77 (user 71.9% system 28.1% total 100.0%)
Net I/O: 98.3 KB/s (0.8*10^6 bps)
Errors: total 0 client-timo 0 socket-timo 0 connrefused 0 connreset 0
Errors: fd-unavail 0 addrunavail 0 ftab-full 0 other 0
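Comparing the two production runs above, average response time dropped from 281.7 ms without Diskcached to 142.2 ms with it. The runs were made under different network conditions, so this is only a rough indication, but the ratio works out to roughly 2x (figures copied from the summaries above):

```python
# Headline reply times from the two production httperf runs above.
with_cache_response_ms = 142.2     # "with Diskcached"
without_cache_response_ms = 281.7  # "without Diskcached"

speedup = without_cache_response_ms / with_cache_response_ms
print(f"response-time speedup: {speedup:.2f}x")  # roughly 2x
```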
Staging -- 'localhost' with Diskcached
This test was run against "localhost" (no network latency), so the NewRelic and httperf results closely match.
httperf --client=0/1 --server=localhost --port=9001 --uri=/ --send-buffer=4096 --recv-buffer=16384 --num-conns=100000 --num-calls=4
httperf: warning: open file limit > FD_SETSIZE; limiting max. # of open files to FD_SETSIZE
Maximum connect burst length: 1
Total: connections 100000 requests 200000 replies 100000 test-duration 654.226 s
Connection rate: 152.9 conn/s (6.5 ms/conn, <=1 concurrent connections)
Connection time [ms]: min 2.6 avg 6.5 max 211.1 median 5.5 stddev 4.2
Connection time [ms]: connect 0.0
Connection length [replies/conn]: 1.000
Request rate: 305.7 req/s (3.3 ms/req)
Request size [B]: 62.0
Reply rate [replies/s]: min 145.0 avg 152.9 max 164.0 stddev 4.0 (130 samples)
Reply time [ms]: response 5.5 transfer 0.1
Reply size [B]: header 215.0 content 25634.0 footer 0.0 (total 25849.0)
Reply status: 1xx=0 2xx=100000 3xx=0 4xx=0 5xx=0
CPU time [s]: user 466.75 system 171.25 (user 71.3% system 26.2% total 97.5%)
Net I/O: 3877.0 KB/s (31.8*10^6 bps)
Errors: total 100000 client-timo 0 socket-timo 0 connrefused 0 connreset 100000
Errors: fd-unavail 0 addrunavail 0 ftab-full 0 other 0
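One caveat on the localhost run: although it was invoked with --num-calls=4, the summary reports 1.000 replies per connection and 100000 connreset errors, which suggests the server closed each connection after a single response rather than keeping it alive. The arithmetic, using figures from the summary above:

```python
# Figures from the localhost httperf summary above.
connections = 100_000
requests = 200_000
replies = 100_000
connreset_errors = 100_000

# Each connection produced exactly one reply before being reset.
print(replies / connections)  # 1.0 replies per connection
print(connreset_errors == connections)  # every connection was reset
```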