
benchmark updates

* use latest Darner
* now that our bencher works better with Kestrel, try even higher concurrency
* express the queue packing X-axis as bytes instead of items (see the sketch below)
* extend the X-axis further for the memory resident, queue flooding, and queue packing benchmarks
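A minimal sketch of the item-to-byte conversion behind the new X-axis, assuming the 1024-byte item size the bench scripts pass via `-i 1024` (the helper name is illustrative):

```bash
# Hypothetical helper: re-express a packing backlog as bytes rather than items,
# assuming each item is 1024 bytes as set by the scripts' "-i 1024" flag.
items_to_bytes() { echo $(( $1 * 1024 )); }

items_to_bytes 4194304   # -> 4294967296 bytes, i.e. a 4 GiB backlog
```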
commit 7e5f5bd125022b50e8d57f5d845570066a6c4733 (1 parent: 8bc2025), committed by @erikfrey on Aug 26, 2012
bench/flood.sh
@@ -16,26 +16,26 @@ echo "done."
echo -ne "flush db_bench\r\n" | nc localhost 22134 >/dev/null
./db -p 22133 -s 512 -g 0 >/dev/null
-for i in 1 2 5 10 50 100 200 300
+for i in 1 2 5 10 50 100 200 300 400 600 800 1000 2000 4000 6000 8000 10000
do
sync # just in case dirty pages are lying around, don't leak across each run
- printf "kestrel %4i conns: " "$i"
+ printf "kestrel %5i conns: " "$i"
./db -p 22133 -s 100000 -g 100000 -c $i | grep -i "requests per second" | awk -F" " '{print $2}'
done
echo -ne "flush db_bench\r\n" | nc localhost 22134 >/dev/null
./db -p 22134 -s 512 -g 0 >/dev/null
-for i in 1 2 5 10 50 100 200 300 400 600 800 1000
+for i in 1 2 5 10 50 100 200 300 400 600 800 1000 2000 4000 6000 8000 10000
do
sync
- printf "darner %4i conns: " "$i"
+ printf "darner %5i conns: " "$i"
./db -p 22134 -s 100000 -g 100000 -c $i | grep -i "requests per second" | awk -F" " '{print $2}'
done
-for i in 1 2 5 10 50 100 200 300 400 600 800 1000
+for i in 1 2 5 10 50 100 200 300 400 600 800 1000 2000 4000 6000 8000 10000
do
sync
- printf "memcache %4i conns: " "$i"
+ printf "memcache %5i conns: " "$i"
./db -p 11211 -s 100000 -g 100000 -c $i | grep -i "requests per second" | awk -F" " '{print $2}'
done
bench/mem_rss.sh
@@ -5,17 +5,17 @@
# darner is on port 22134
# before running this test, be sure to delete the db_bench queue and restart both services
-for i in 0 1024 2048 4096 8192 16384 32768 65536 131072 262024
+for i in 0 1024 2048 4096 8192 16384 32768 65536 131072 262024 524048
do
- printf "kestrel %6i requests: " "$i"
+ printf "kestrel %7i requests: " "$i"
./db -p 22133 -s $i -g 0 -i 1024 >/dev/null
./db -p 22133 -s 0 -g $i -i 1024 >/dev/null
pgrep -f "/opt/kestrel/kestrel" | xargs -I'{}' sudo cat /proc/{}/status | grep -i vmrss | awk '{print $2, $3}'
done
-for i in 0 1024 2048 4096 8192 16384 32768 65536 131072 262024
+for i in 0 1024 2048 4096 8192 16384 32768 65536 131072 262024 524048
do
- printf "darner %6i requests: " "$i"
+ printf "darner %7i requests: " "$i"
./db -p 22134 -s $i -g 0 -i 1024 >/dev/null
./db -p 22134 -s 0 -g $i -i 1024 >/dev/null
pgrep darner | xargs -I'{}' sudo cat /proc/{}/status | grep -i vmrss | awk '{print $2, $3}'
bench/packing.sh
@@ -16,19 +16,18 @@ echo -ne "flush db_bench\r\n" | nc localhost 22133 >/dev/null
sync # don't leak across benchmarks
-for i in 0 4096 16384 65536 262144 1048576 4194304
+for i in 0 1024 16384 65536 262144 1048576 4194304
do
./db -p 22133 -s $i -g 0 -i 1024 >/dev/null
- printf "kestrel %7i sets: " "$i"
+ printf "kestrel %8i sets: " "$i"
./db -p 22133 -s 100000 -g 100000 -i 1024 | grep -i "requests per second" | awk -F" " '{print $2}'
done
sync
-for i in 0 4096 16384 65536 262144 1048576 4194304
+for i in 0 1024 16384 65536 262144 1048576 4194304
do
./db -p 22134 -s $i -g 0 -i 1024 >/dev/null
- printf "darner %7i sets: " "$i"
+ printf "darner %8i sets: " "$i"
./db -p 22134 -s 100000 -g 100000 -i 1024 | grep -i "requests per second" | awk -F" " '{print $2}'
done
-
@@ -2,39 +2,43 @@ Benchmark Details:
* [Amazon EC2 m1.large](http://aws.amazon.com/ec2/instance-types/): 7.5GB memory, 2 virtual cores
* 64-bit Ubuntu 11.10
-* Darner 0.0.1, compiled Boost 1.46 and leveldb 1.5.0
+* Darner 0.1.3, compiled Boost 1.46 and leveldb 1.2.0
* Kestrel 2.2.0 with OpenJDK 1.6
# Resident Memory
How much memory does the queue server use? We are testing both steady-state memory resident, and also how aggressively
the server acquires and releases memory as queues expand and contract. We tuned Kestrel's JVM down to the smallest
-heap that didn't cause OOM's and didn't impact performance: `-Xmx256m -Xms256m`.
+heap that didn't cause OOM's and didn't impact performance: `-Xmx512m`.
![Resident Memory Benchmark](/wavii/darner/raw/master/docs/images/bench_memory_resident.png)
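For reference, a minimal sketch of how these resident numbers are sampled, mirroring `bench/mem_rss.sh` above: read `VmRSS` from `/proc/<pid>/status` (as in the script, `sudo` may be needed for processes owned by another user):

```bash
# Sketch: report the resident set size of the first process matching a pattern,
# e.g. "32104 kB". Assumes a single matching process; mirrors bench/mem_rss.sh.
rss_kb() {
  local pid
  pid=$(pgrep -f "$1" | head -n1)
  grep -i vmrss "/proc/$pid/status" | awk '{print $2, $3}'
}

rss_kb darner
```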
```
-ubuntu@domU-12-31-39-0E-0C-72:~/darner$ bench/mem_rss.sh
-kestrel 0 requests: 72120 kB
-kestrel 1024 requests: 82348 kB
-kestrel 2048 requests: 113232 kB
-kestrel 4096 requests: 143220 kB
-kestrel 8192 requests: 152852 kB
-kestrel 16384 requests: 171208 kB
-kestrel 32768 requests: 207528 kB
-kestrel 65536 requests: 266344 kB
-kestrel 131072 requests: 366024 kB
-kestrel 262024 requests: 369688 kB
-darner 0 requests: 2216 kB
-darner 1024 requests: 3984 kB
-darner 2048 requests: 6236 kB
-darner 4096 requests: 10732 kB
-darner 8192 requests: 15956 kB
-darner 16384 requests: 20028 kB
-darner 32768 requests: 13324 kB
-darner 65536 requests: 12872 kB
-darner 131072 requests: 17960 kB
-darner 262024 requests: 16232 kB
+ubuntu@ip-10-6-51-186:~/darner$ bench/mem_rss.sh
+kestrel 0 requests: 74476 kB
+kestrel 1024 requests: 84256 kB
+kestrel 2048 requests: 109508 kB
+kestrel 4096 requests: 133492 kB
+kestrel 8192 requests: 160404 kB
+kestrel 16384 requests: 182460 kB
+kestrel 32768 requests: 278340 kB
+kestrel 65536 requests: 330300 kB
+kestrel 131072 requests: 397852 kB
+kestrel 262024 requests: 465148 kB
+kestrel 524048 requests: 520476 kB
+kestrel 1048576 requests: 611612 kB
+darner 0 requests: 2220 kB
+darner 1024 requests: 3492 kB
+darner 2048 requests: 5872 kB
+darner 4096 requests: 8136 kB
+darner 8192 requests: 15520 kB
+darner 16384 requests: 25656 kB
+darner 32768 requests: 27412 kB
+darner 65536 requests: 24324 kB
+darner 131072 requests: 28440 kB
+darner 262024 requests: 28524 kB
+darner 524048 requests: 32104 kB
+darner 1048576 requests: 33848 kB
```
# Queue Flooding
@@ -48,38 +52,57 @@ kestrel benchmark at 300 concurrent connections - anything higher caused connect
```
ubuntu@domU-12-31-39-0E-0C-72:~/darner$ bench/flood.sh
warming up kestrel...done.
-kestrel 1 conns: 6919.94 #/sec (mean)
-kestrel 2 conns: 9042 #/sec (mean)
-kestrel 5 conns: 9775.17 #/sec (mean)
-kestrel 10 conns: 10526.9 #/sec (mean)
-kestrel 50 conns: 11318 #/sec (mean)
-kestrel 100 conns: 11693.2 #/sec (mean)
-kestrel 200 conns: 5696.71 #/sec (mean)
-kestrel 300 conns: 3260.36 #/sec (mean)
-darner 1 conns: 10032.1 #/sec (mean)
-darner 2 conns: 16572.8 #/sec (mean)
-darner 5 conns: 21006.2 #/sec (mean)
-darner 10 conns: 22182.8 #/sec (mean)
-darner 50 conns: 24697.5 #/sec (mean)
-darner 100 conns: 23960.7 #/sec (mean)
-darner 200 conns: 24160.4 #/sec (mean)
-darner 300 conns: 23781.2 #/sec (mean)
-darner 400 conns: 21755.7 #/sec (mean)
-darner 600 conns: 22019.2 #/sec (mean)
-darner 800 conns: 20076.3 #/sec (mean)
-darner 1000 conns: 19648.3 #/sec (mean)
-memcache 1 conns: 11516.1 #/sec (mean)
-memcache 2 conns: 21879.4 #/sec (mean)
-memcache 5 conns: 27700.8 #/sec (mean)
-memcache 10 conns: 37126.4 #/sec (mean)
-memcache 50 conns: 43412.2 #/sec (mean)
-memcache 100 conns: 41126.9 #/sec (mean)
-memcache 200 conns: 38610 #/sec (mean)
-memcache 300 conns: 41347.9 #/sec (mean)
-memcache 400 conns: 40833 #/sec (mean)
-memcache 600 conns: 38299.5 #/sec (mean)
-memcache 800 conns: 37167.8 #/sec (mean)
-memcache 1000 conns: 34506.6 #/sec (mean)
+kestrel 1 conns: 7163.58 #/sec (mean)
+kestrel 2 conns: 8802.04 #/sec (mean)
+kestrel 5 conns: 9742.79 #/sec (mean)
+kestrel 10 conns: 11200.7 #/sec (mean)
+kestrel 50 conns: 12038.8 #/sec (mean)
+kestrel 100 conns: 11705.5 #/sec (mean)
+kestrel 200 conns: 11700 #/sec (mean)
+kestrel 300 conns: 11562.7 #/sec (mean)
+kestrel 400 conns: 11596.9 #/sec (mean)
+kestrel 600 conns: 11357.2 #/sec (mean)
+kestrel 800 conns: 11147 #/sec (mean)
+kestrel 1000 conns: 11218.9 #/sec (mean)
+kestrel 2000 conns: 11101.9 #/sec (mean)
+kestrel 4000 conns: 10879 #/sec (mean)
+kestrel 6000 conns: 10639.4 #/sec (mean)
+kestrel 8000 conns: 10618 #/sec (mean)
+kestrel 10000 conns: 10486.6 #/sec (mean)
+darner 1 conns: 13088.1 #/sec (mean)
+darner 2 conns: 30102.3 #/sec (mean)
+darner 5 conns: 35279.6 #/sec (mean)
+darner 10 conns: 36549.7 #/sec (mean)
+darner 50 conns: 36846 #/sec (mean)
+darner 100 conns: 36199.1 #/sec (mean)
+darner 200 conns: 35906.6 #/sec (mean)
+darner 300 conns: 35893.8 #/sec (mean)
+darner 400 conns: 36081.5 #/sec (mean)
+darner 600 conns: 36616.6 #/sec (mean)
+darner 800 conns: 34910.1 #/sec (mean)
+darner 1000 conns: 34668.1 #/sec (mean)
+darner 2000 conns: 28169 #/sec (mean)
+darner 4000 conns: 32792.3 #/sec (mean)
+darner 6000 conns: 31680.7 #/sec (mean)
+darner 8000 conns: 30726.7 #/sec (mean)
+darner 10000 conns: 30792.9 #/sec (mean)
+memcache 1 conns: 15227.7 #/sec (mean)
+memcache 2 conns: 29133.3 #/sec (mean)
+memcache 5 conns: 35155.6 #/sec (mean)
+memcache 10 conns: 46414.5 #/sec (mean)
+memcache 50 conns: 53347.6 #/sec (mean)
+memcache 100 conns: 55294.4 #/sec (mean)
+memcache 200 conns: 53447.4 #/sec (mean)
+memcache 300 conns: 53864.8 #/sec (mean)
+memcache 400 conns: 52854.1 #/sec (mean)
+memcache 600 conns: 52700.9 #/sec (mean)
+memcache 800 conns: 51546.4 #/sec (mean)
+memcache 1000 conns: 52438.4 #/sec (mean)
+memcache 2000 conns: 38255.5 #/sec (mean)
+memcache 4000 conns: 41442.2 #/sec (mean)
+memcache 6000 conns: 43224.6 #/sec (mean)
+memcache 8000 conns: 42844.9 #/sec (mean)
+memcache 10000 conns: 41347.9 #/sec (mean)
```
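To re-plot the flooding chart, the per-server throughput lines above can be reduced to plottable columns; a hypothetical post-processing step (`flood.log` and `flood.dat` are illustrative names):

```bash
# Hypothetical post-processing: capture a flood run, then extract
# "server concurrency requests_per_second" columns for plotting.
bench/flood.sh | tee flood.log
awk '/conns:/ { print $1, $2, $4 }' flood.log > flood.dat
```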
# Fairness
@@ -91,53 +114,53 @@ below, a flatter curve means each request is more fairly served in time.
![Fairness Benchmark](/wavii/darner/raw/master/docs/images/bench_fairness.png)
```
-ubuntu@domU-12-31-39-0E-0C-72:~/darner$ bench/fairness.sh
+ubuntu@ip-10-6-51-186:~/darner$ bench/fairness.sh
warming up kestrel...done.
kestrel stats:
Concurrency Level: 10
Gets: 0
Sets: 100000
-Time taken for tests: 215.285 seconds
+Time taken for tests: 208.706 seconds
Bytes read: 800000 bytes
-Read rate: 3.62891 Kbytes/sec
+Read rate: 3.7433 Kbytes/sec
Bytes written: 8700000 bytes
-Write rate: 39.4644 Kbytes/sec
-Requests per second: 464.501 #/sec (mean)
-Time per request: 21503.9 us (mean)
+Write rate: 40.7084 Kbytes/sec
+Requests per second: 479.143 #/sec (mean)
+Time per request: 20868.3 us (mean)
Percentage of the requests served within a certain time (us)
- 50%: 969
- 66%: 1596
- 75%: 2777
- 80%: 5644
- 90%: 34467
- 95%: 90138
- 98%: 299754
- 99%: 473703
- 100%: 1841528 (longest request)
+ 50%: 827
+ 66%: 1434
+ 75%: 1984
+ 80%: 2528
+ 90%: 8809
+ 95%: 45956
+ 98%: 163431
+ 99%: 341297
+ 100%: 3557554 (longest request)
darner stats:
Concurrency Level: 10
Gets: 0
Sets: 100000
-Time taken for tests: 20.622 seconds
+Time taken for tests: 26.26 seconds
Bytes read: 800000 bytes
-Read rate: 37.8843 Kbytes/sec
+Read rate: 29.7506 Kbytes/sec
Bytes written: 8700000 bytes
-Write rate: 411.992 Kbytes/sec
-Requests per second: 4849.19 #/sec (mean)
-Time per request: 2060.75 us (mean)
+Write rate: 323.537 Kbytes/sec
+Requests per second: 3808.07 #/sec (mean)
+Time per request: 2624.35 us (mean)
Percentage of the requests served within a certain time (us)
- 50%: 869
- 66%: 911
- 75%: 944
- 80%: 968
- 90%: 1117
- 95%: 2523
- 98%: 43498
- 99%: 43951
- 100%: 91903 (longest request)
- ```
+ 50%: 729
+ 66%: 767
+ 75%: 817
+ 80%: 885
+ 90%: 1196
+ 95%: 3507
+ 98%: 43966
+ 99%: 44476
+ 100%: 94989 (longest request)
+```
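The flatness of the fairness curve follows from these percentile tables. As a rough illustration of where such a table comes from, the same breakdown could be computed from raw per-request latencies (the `latencies.us` input file, one latency in microseconds per line, is hypothetical):

```bash
# Hypothetical sketch: rebuild a percentile table like the ones above from a
# file containing one per-request latency (in microseconds) per line.
sort -n latencies.us | awk '
  { lat[NR] = $1 }
  END {
    n = split("50 66 75 80 90 95 98 99 100", pct, " ")
    for (i = 1; i <= n; i++) {
      idx = int(NR * pct[i] / 100); if (idx < 1) idx = 1
      printf "%5s%%: %d\n", pct[i], lat[idx]
    }
  }'
```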
# Queue Packing
@@ -148,20 +171,22 @@ free memory. Instead it's important for the throughput to flatten out as the ba
![Queue Packing Benchmark](/wavii/darner/raw/master/docs/images/bench_queue_packing.png)
```
-ubuntu@domU-12-31-39-0E-0C-72:~/darner$ bench/packing.sh
+ubuntu@ip-10-6-51-186:~/darner$ bench/packing.sh
warming up kestrel...done.
-kestrel 0 sets: 9777.08 #/sec (mean)
-kestrel 4096 sets: 9490.37 #/sec (mean)
-kestrel 16384 sets: 9777.56 #/sec (mean)
-kestrel 65536 sets: 9478.22 #/sec (mean)
-kestrel 262144 sets: 8689.99 #/sec (mean)
-kestrel 1048576 sets: 8735.15 #/sec (mean)
-kestrel 4194304 sets: 8467.04 #/sec (mean)
-darner 0 sets: 16380 #/sec (mean)
-darner 4096 sets: 14951 #/sec (mean)
-darner 16384 sets: 12043.1 #/sec (mean)
-darner 65536 sets: 10691.8 #/sec (mean)
-darner 262144 sets: 10810.8 #/sec (mean)
-darner 1048576 sets: 11152 #/sec (mean)
-darner 4194304 sets: 10980.6 #/sec (mean)
+kestrel 0 sets: 10350.9 #/sec (mean)
+kestrel 4096 sets: 10137.9 #/sec (mean)
+kestrel 16384 sets: 10016.5 #/sec (mean)
+kestrel 65536 sets: 10073 #/sec (mean)
+kestrel 262144 sets: 9243 #/sec (mean)
+kestrel 1048576 sets: 9220.41 #/sec (mean)
+kestrel 4194304 sets: 9000.9 #/sec (mean)
+kestrel 16777216 sets: 7990.73 #/sec (mean)
+darner 0 sets: 25723.5 #/sec (mean)
+darner 4096 sets: 21853.1 #/sec (mean)
+darner 16384 sets: 17792 #/sec (mean)
+darner 65536 sets: 13606.4 #/sec (mean)
+darner 262144 sets: 13798.8 #/sec (mean)
+darner 1048576 sets: 14479.1 #/sec (mean)
+darner 4194304 sets: 13535.5 #/sec (mean)
+darner 16777216 sets: 12786.9 #/sec (mean)
```
(The four updated benchmark chart images, binary files under docs/images/, cannot be displayed in the diff view.)
