
DPVS performance evaluation #54

Closed
lapnd opened this issue Nov 21, 2017 · 12 comments

@lapnd

lapnd commented Nov 21, 2017

Hi,
I would like to replicate your performance test results on our servers.
Would you mind sharing more details about your test setup (such as the test diagram, how to set it up, and how to run it)?
Thank you!

@beacer
Contributor

beacer commented Nov 22, 2017

We use HTTP clients (wrk) and servers (Nginx) for the test, arranged as clients <-> dpvs <-> servers.
All machines (clients/servers/dpvs) are physical machines:

  • OS: CentOS 7.2.
  • Kernel: 3.10.0-327.el7.x86_64
  • CPU: Intel(R) Xeon(R) CPU E5-2650 v4 @ 2.20GHz
  • NICs: Intel Corporation Ethernet Controller 10-Gigabit X540-AT2 (rev 03)

For the test, we run 5 HTTP clients with wrk and another 5 machines with Nginx. The kernel sysctl parameters and IRQ affinity must be tuned on both the clients and the servers (see the sketch after the Nginx snippet below). Each Nginx server just returns a tiny static response:

        location / {
                default_type text/plain;
                return 200 "hello\n";
        }
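The exact tuning is site-specific; a rough sketch of the kind of knobs involved (the values here are illustrative placeholders, not the settings from the published benchmark, and the IRQ number is hypothetical) looks like:

        # illustrative placeholders only, not the benchmark settings
        sysctl -w net.ipv4.ip_local_port_range="1024 65535"  # more ephemeral ports for wrk clients
        sysctl -w net.ipv4.tcp_tw_reuse=1                    # reuse TIME_WAIT sockets sooner
        sysctl -w net.core.somaxconn=65535                   # deeper accept backlog for Nginx
        sysctl -w net.ipv4.tcp_max_syn_backlog=65535         # deeper SYN backlog for Nginx
        # spread NIC queue interrupts across cores, e.g. pin
        # (hypothetical) IRQ 64 to CPU2 via its affinity mask
        echo 4 > /proc/irq/64/smp_affinity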

For the DPVS settings, please refer to https://github.com/iqiyi/dpvs/blob/master/doc/tutorial.md .
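On the client side, each box simply runs wrk against the VIP; something like the following (the address, thread count, connection count, and duration are placeholders, not our exact parameters):

        # -t: worker threads, -c: concurrent connections, -d: duration
        wrk -t8 -c1000 -d60s http://192.168.100.200/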

@lapnd
Author

lapnd commented Nov 28, 2017

Thanks Lei,
I was able to set up the test. However, the wrk client reports Requests/sec and Transfer/sec; how did you translate those into packets per second?
Also, from your test report, with 7 cores DPVS reaches ~14 Mpps at a 64-byte packet size, which is line rate for a 10G NIC!
Is it fair to say that we can get line rate (64-byte packets) with DPVS using 7 cores?
Thank you!

@beacer
Contributor

beacer commented Nov 29, 2017

We calculate pps with the dpip link show command. In our tests the average packet size is 98 B rather than 64 B, because we use wrk to generate HTTP request/response traffic and disable "keepalive" on Nginx. With 7 cores we reach line rate for the 10G NIC (at 98 B).
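For context, the theoretical line rate of 10GbE depends on the frame size, since every frame also carries 20 B of preamble and inter-frame gap on the wire:

        line-rate pps = link_rate / ((frame_size + 20 B) * 8 bits/B)
        64 B frames:  10^10 / ((64 + 20) * 8) ≈ 14.88 Mpps
        98 B frames:  10^10 / ((98 + 20) * 8) ≈ 10.59 Mpps

So "line rate at 98 B" corresponds to roughly 10.6 Mpps, not the 14.88 Mpps figure quoted for 64 B frames.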

@lapnd
Author

lapnd commented Nov 29, 2017

Thank you for your explanation!

lapnd closed this as completed Nov 29, 2017
@tiepnv-viosoft

Hi beacer,
Could you let me know which mode you used for the performance test?

Thanks,

@beacer
Contributor

beacer commented Mar 8, 2018

@tiepnv-viosoft FNAT mode.
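A minimal FNAT setup along the lines of the tutorial linked above is sketched below; the addresses and device name are placeholders, so check the tutorial for the exact flags in your version:

        # sketch adapted from doc/tutorial.md; addresses are placeholders
        ./dpip addr add 10.0.0.100/32 dev dpdk0                        # VIP on the DPDK device
        ./ipvsadm -A -t 10.0.0.100:80 -s rr                            # virtual service, round-robin
        ./ipvsadm -a -t 10.0.0.100:80 -r 192.168.100.2 -b              # real server, -b = FNAT forwarding
        ./ipvsadm --add-laddr -z 10.0.0.3 -t 10.0.0.100:80 -F dpdk0    # local address (laddr) for FNAT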

@tiepnv-viosoft

Thank you for your reply,
Is it the one-arm FNAT (FNAT_1arm)?

@beacer
Contributor

beacer commented Mar 9, 2018

Both one-arm and two-arm were tested; the data in README.md is for two-arm.

@tiepnv-viosoft

Thank you very much for your support!

@tiepnv-viosoft

tiepnv-viosoft commented Mar 14, 2018

Hi beacer,
As you said:

We calculate pps with the dpip link show command

Could you explain specifically how to get the PPS figure in dpvs? And is your performance test result based on Tx or Rx?
I also ran ipvsadm -ln --rate and dpip link -s show, but neither showed the expected output.

I look forward to receiving your help soon.
Thanks,

@ywc689
Collaborator

ywc689 commented Mar 14, 2018

ipvsadm -ln --rate is not supported by dpvs yet. You can try something like dpip link -s show dpdk0 i 3 -C to get the PPS of a specified device.
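If the periodic form gives unexpected output, a rough manual alternative (assuming the one-shot dpip link -s show dpdk0 prints cumulative packet counters; check your build's output) is to sample twice and divide the delta by the interval:

        dpip link -s show dpdk0    # note the ipackets/opackets counters
        sleep 10
        dpip link -s show dpdk0    # note them again
        # pps ≈ (second counter - first counter) / 10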

@tiepnv-viosoft

Thank you very much!
