
The following describes the details of the HTTPD benchmark so that it can be reproduced. Let us know if you find anything missing.

Test bed:

  • Server 1: HTTPD
  • Server 2: Client

Software

Server 1 (HTTPD) Setup:

  1. Fetch DPDK from upstream (i40e support in the 1.8.0 release is not sufficient).
  2. Update config/common_linuxapp (a shell sketch follows this list):
     • set CONFIG_RTE_MBUF_REFCNT to 'n'
     • set CONFIG_RTE_MAX_MEMSEG=4096
  3. Follow the instructions in the Seastar README on DPDK installation for 1.8.0.
  4. Define hugepages: 2048,2048 pages (2048 per NUMA node).
  5. Compile Seastar.
  6. Run httpd:

     sudo build/release/apps/httpd/httpd --network-stack native --dpdk-pmd --dhcp 0 --host-ipv4-addr $seastar_ip --netmask-ipv4-addr 255.255.255.0 --collectd 0 --smp $cpu
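
A minimal shell sketch of steps 2 and 4, assuming the DPDK sources are checked out in ./dpdk and the host has two NUMA nodes; the path, sed patterns, and sysfs approach below are assumptions, not taken from this page:

```sh
# Step 2: edit config/common_linuxapp (assumes DPDK sources in ./dpdk).
sed -i 's/^CONFIG_RTE_MBUF_REFCNT=.*/CONFIG_RTE_MBUF_REFCNT=n/' dpdk/config/common_linuxapp
sed -i 's/^CONFIG_RTE_MAX_MEMSEG=.*/CONFIG_RTE_MAX_MEMSEG=4096/' dpdk/config/common_linuxapp

# Step 4: reserve 2048 2MB hugepages on each NUMA node ("2048,2048").
echo 2048 | sudo tee /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
echo 2048 | sudo tee /sys/devices/system/node/node1/hugepages/hugepages-2048kB/nr_hugepages
```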

Server 2 (http_client) Setup:

  1. Follow steps 1-5 from the Server 1 setup.

  2. Give the client two more cores than the server: tcpu=$(($cpu+2))

  3. Run seawreck (a hypothetical example of the variable values follows this list):

     sudo build/release/apps/seawreck/seawreck --server $seastar_ip:10000 --host-ipv4-addr $tester_ip --dhcp 0 --netmask-ipv4-addr 255.255.255.0 --network-stack native --dpdk-pmd --collectd 0 --duration 60 --smp $tcpu --conn $((tcpu*64))
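
For concreteness, the variables referenced by both commands could be set as follows; the addresses and core count here are illustrative assumptions, not values from the benchmark:

```sh
# Hypothetical example values -- substitute your own.
seastar_ip=10.0.0.1     # Server 1 (httpd) address on the 40GbE link
tester_ip=10.0.0.2      # Server 2 (client) address on the 40GbE link
cpu=14                  # cores given to httpd's --smp
tcpu=$(($cpu+2))        # client gets two more cores than the server
```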

Hardware

Two servers connected back-to-back with a 40GbE link.

Complete hardware info follows (some of it is not interesting; the disks/NVMe devices are irrelevant to these tests).

The benchmark consists of two identical PCSD server systems:

  • 2x Xeon E5-2695 v3: 2.3GHz base, 35MB cache, 14 cores each -> 28 threads with HT -> 56 hardware threads per host

  • 8x 8GB Micron DDR4 memory (64GB per host)

  • 12x 300GB Intel S3500 SSD (in RAID5, 3TB of storage for OS)

  • 2x 400GB Intel NVMe P3700 SSD (not mapped in the OS; kept in case additional storage is needed)

  • 2x Intel Ethernet CNA XL710-QDA1 (two cards per server, one attached to each CPU: card1 -> CPU1, card2 -> CPU2)

  • FW info: Default BIOS settings (TurboBoost enabled, HyperThreading enabled)

  • OS info: Fedora Server 21 with the latest updates applied

$ uname -a
Linux dpdk1 3.17.8-300.fc21.x86_64 #1 SMP Thu Jan 8 23:32:49 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux