Benchmarks for server implementations that support 1M connections

Inspired by "handling 1M websockets connections in Go".

Servers

  1. 1_simple_tcp_server: a 1M-connection server with one goroutine per connection (see the sketch after this list)
  2. 2_epoll_server: a 1M-connection server based on epoll
  3. 3_epoll_server_throughputs: adds throughput and latency tests to 2_epoll_server
  4. 4_epoll_client: a client implemented with epoll
  5. 5_multiple_client: uses multiple epoll instances to manage connections in the client
  6. 6_multiple_server: uses multiple epoll instances to manage connections in the server
  7. 7_server_prefork: a server using the Apache-style prefork model
  8. 8_server_workerpool: uses the Reactor pattern to run multiple event loops
  9. 9_few_clients_high_throughputs: a simple goroutine-per-connection server for testing throughput and latency with few clients
  10. 10_io_intensive_epoll_server: an I/O-bound multiple-epoll server
  11. 11_io_intensive_goroutine: an I/O-bound goroutine-per-connection server
  12. 12_cpu_intensive_epoll_server: a CPU-bound multiple-epoll server
  13. 13_cpu_intensive_goroutine: a CPU-bound goroutine-per-connection server
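
As a point of reference for item 1, a minimal goroutine-per-connection echo server can be sketched as below. This is an illustrative sketch, not the repository's exact code; the port 8972 is assumed.

```go
package main

import (
	"io"
	"log"
	"net"
)

func main() {
	// Listen on an arbitrary port (8972 is assumed for illustration).
	ln, err := net.Listen("tcp", ":8972")
	if err != nil {
		log.Fatal(err)
	}
	for {
		conn, err := ln.Accept()
		if err != nil {
			log.Println("accept:", err)
			continue
		}
		// One goroutine per connection: each goroutine costs a few KB of
		// stack, so 1M connections consume several GB of memory before
		// doing any real work.
		go func(c net.Conn) {
			defer c.Close()
			// Echo whatever the client sends back to it.
			io.Copy(c, c)
		}(conn)
	}
}
```

The appeal of this style is its simplicity; the epoll variants below exist precisely to avoid the per-goroutine memory and scheduling cost at 1M connections.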

Test Environment

  • two Intel Xeon E5-2630 v4 CPUs, 20 physical cores (40 logical cores) in total
  • 32 GB memory

Tune Linux:

# allow ~2M open file descriptors system-wide
sysctl -w fs.file-max=2000500
sysctl -w fs.nr_open=2000500
# enlarge the connection-tracking table to hold ~2M entries
sysctl -w net.nf_conntrack_max=2000500
# per-process descriptor limit for the current shell
ulimit -n 2000500

# reuse/recycle TIME_WAIT sockets quickly so local ports are not exhausted
# (note: tcp_tw_recycle was removed in Linux 4.12; use tcp_tw_reuse on newer kernels)
sysctl -w net.ipv4.tcp_tw_recycle=1
sysctl -w net.ipv4.tcp_tw_reuse=1
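
Note that ulimit -n only raises the limit for the current shell; the server process itself must also be permitted that many descriptors. A sketch of how a Go process could raise its own soft limit at startup follows; the helper name setLimit is illustrative, not necessarily the repository's code.

```go
package main

import (
	"log"
	"syscall"
)

// setLimit raises the process's open-file soft limit to the hard maximum,
// so a single process can hold on the order of 1M sockets.
// (Illustrative helper; name and placement are assumptions.)
func setLimit() {
	var rLimit syscall.Rlimit
	if err := syscall.Getrlimit(syscall.RLIMIT_NOFILE, &rLimit); err != nil {
		log.Fatal(err)
	}
	rLimit.Cur = rLimit.Max // raise soft limit up to the hard limit
	if err := syscall.Setrlimit(syscall.RLIMIT_NOFILE, &rLimit); err != nil {
		log.Fatal(err)
	}
}

func main() {
	setLimit()
	// ... start the server ...
}
```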

The client sends the next request only after it has received the response to the previous one; the tests do not use pipelining.
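
A sketch of that request/response lockstep, assuming the echo-style protocol above (address, message format, and request count are illustrative):

```go
package main

import (
	"bufio"
	"fmt"
	"log"
	"net"
	"time"
)

func main() {
	conn, err := net.Dial("tcp", "127.0.0.1:8972")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	reader := bufio.NewReader(conn)
	for i := 0; i < 10; i++ {
		start := time.Now()
		// Send one request...
		if _, err := fmt.Fprintf(conn, "ping %d\n", i); err != nil {
			log.Fatal(err)
		}
		// ...and block until its response arrives before sending the next:
		// no pipelining, so measured latency is one full round trip.
		resp, err := reader.ReadString('\n')
		if err != nil {
			log.Fatal(err)
		}
		log.Printf("got %q in %v", resp, time.Since(start))
	}
}
```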

Benchmarks

1M connections

server                                   throughputs (tps)   latency
goroutine-per-conn                       202830              4.9s
single epoll (both server and client)    42495               23s
single epoll server                      42402               0.8s
multiple epoll server                    197814              0.9s
prefork                                  444415              1.5s
workerpool                               190022              0.3s

Articles in Chinese:

  1. Thoughts on one million Go TCP connections: reducing resource usage with epoll
  2. Thoughts on one million Go TCP connections (2): performance of a million-connection server
  3. Thoughts on one million Go TCP connections (3): server throughput and latency with few connections