Benchmarks? vs. Nginx, Boost, etc. #522
I've recently done some benchmarks comparing Seastar's httpd example vs. the boost.beast example vs. naked epoll with a busy loop (full results are quoted in the reply below).
Hi, I'm CC'ing this to the Seastar mailing list. Let's have such
discussions on the mailing list, not in the bug tracker.
On Fri, Nov 2, 2018 at 4:18 PM Max ***@***.***> wrote:
I've recently done some benchmarks comparing Seastar's httpd example vs.
boost.beast example vs. naked epoll with busy loop.
Here are the results:
Thanks. I want to remind readers that writing a super-efficient
fixed-response HTTP server is fairly easy, and all implementations of it
which are based on epoll and an event loop should be similarly efficient.
We shouldn't expect to see Seastar outshine a well-written server in these
cases, and indeed, according to your benchmarks below, it didn't.
Where Seastar should shine is in the following cases:
1. Machines with many CPUs: Even though Seastar uses the Linux kernel
(unless you use DPDK, let's ignore that for now because you did), a lot of
effort went into making Seastar use only the most scalable features of the
Linux kernel, and avoid various features which may seem scalable, but
really aren't. Moreover, Seastar encourages, if not forces, the user to
write code in a certain way (share-nothing (a.k.a. "sharded") and no atomic
operations or locks) which is especially important for performance on many
CPUs.
2. Servers which involve both network and disk I/O: non-blocking or
asynchronous disk I/O is especially tricky in Linux and hard to get right,
and a lot of implementations take shortcuts - like using helper threads for
doing disk I/O - and pay in performance. In Seastar we spent a lot of
effort on integrating both network and disk I/O into one unified
(future-based) asynchronous API, and making sure it is as quick as possible
and never blocks (the last one was particularly hard, given bugs in the
existing Linux file-system implementations).
3. Complex servers, where you can't just send a pre-canned fixed response
for every request, but need to build the response based on various other
requests, information from disk, and many other steps. At that point, a
simple epoll loop (which Seastar also has internally - it is called the
"reactor") is simply no longer enough to write a complex server without
going crazy (I know, I tried in the past :-)), and you need better
abstractions to handle the asynchronous operations. Seastar gives you
one - the "future" - which represents any asynchronous operation, be it a
network operation, a disk operation, or a complex operation involving 100
smaller steps.
4. Low-latency requirements. A lot of effort went into Seastar features
which support consistently low latency (a.k.a. low tail latency, or low
99th-percentile latency), including (among other things) a CPU and disk
scheduler. Many other frameworks have no such features, so as soon as you
need to add some background work (for example) in parallel to your HTTP
serving, you start to suffer from high tail latencies - and you find that
the framework doesn't have the features you need to solve these problems.
… HTTP benchmark on Hetzner
Host: CX31 / 80 GB / nbg1-dc3, 2 vCPU
Wrk: CX21 / 40 GB / nbg1-dc3, 2 vCPU
- *sudo ./seastar_http*

  ```
  Running 3m test @ http://xxx.xxx.xxx.xxx:8080/
    2 threads and 100 connections
    Thread Stats   Avg      Stdev     Max   +/- Stdev
      Latency     1.74ms    1.55ms   43.08ms   94.39%
      Req/Sec    30.11k     4.28k    44.96k    67.89%
    Latency Distribution
       50%    1.43ms
       75%    1.91ms
       90%    2.63ms
       99%    7.97ms
    10784397 requests in 3.00m, 0.98GB read
  Requests/sec:  59890.36
  Transfer/sec:      5.60MB
  ```

- *sudo ./seastar_http --poll-mode*

  ```
  Running 3m test @ http://xxx.xxx.xxx.xxx:8080/
    2 threads and 100 connections
    Thread Stats   Avg      Stdev     Max   +/- Stdev
      Latency     1.48ms    1.17ms  210.27ms   94.01%
      Req/Sec    33.43k     4.61k    48.23k    67.12%
    Latency Distribution
       50%    1.30ms
       75%    1.67ms
       90%    2.20ms
       99%    5.45ms
    11976634 requests in 3.00m, 1.09GB read
  Requests/sec:  66501.23
  Transfer/sec:      6.22MB
  ```

- *sudo ./seastar_http --poll-mode --lock-memory 1*

  ```
  Running 10m test @ http://xxx.xxx.xxx.xxx:8080/
    2 threads and 500 connections
    Thread Stats   Avg      Stdev     Max   +/- Stdev
      Latency     6.31ms    2.55ms  229.40ms   78.30%
      Req/Sec    37.45k     4.43k    65.02k    69.10%
    Latency Distribution
       50%    6.01ms
       75%    7.48ms
       90%    9.15ms
       99%   13.42ms
    44712311 requests in 10.00m, 4.08GB read
  Requests/sec:  74509.51
  Transfer/sec:      6.96MB
  ```

- *./boost_beast_async 0.0.0.0 8080 . 2*

  ```
  Running 10m test @ http://xxx.xxx.xxx.xxx:8080/
    2 threads and 500 connections
    Thread Stats   Avg      Stdev     Max   +/- Stdev
      Latency     7.03ms    2.85ms  228.01ms   80.90%
      Req/Sec    34.02k     3.80k    49.70k    68.33%
    Latency Distribution
       50%    6.66ms
       75%    8.22ms
       90%   10.02ms
       99%   15.36ms
    40610084 requests in 10.00m, 3.71GB read
  Requests/sec:  67673.00
  Transfer/sec:      6.32MB
  ```

- *./epoll (2 cores)*

  ```
  Running 10m test @ http://xxx.xxx.xxx.xxx:8080/
    2 threads and 500 connections
    Thread Stats   Avg      Stdev     Max   +/- Stdev
      Latency     5.99ms    2.50ms  225.14ms   76.94%
      Req/Sec    38.83k     5.42k    59.14k    68.34%
    Latency Distribution
       50%    5.68ms
       75%    7.18ms
       90%    8.91ms
       99%   13.22ms
    46347215 requests in 10.00m, 4.01GB read
  Requests/sec:  77239.14
  Transfer/sec:      6.85MB
  ```
@DePizzottri would you be willing to share the sources of that benchmark?
Not a bug, closing.
Are there any benchmarks comparing Seastar with other popular options for building servers in C++? Nginx modules and Boost.Asio are among the fastest and most modular options out there.