Performance Majordomo Asynch PHP example #563

Open
paufabregat opened this issue May 22, 2015 · 5 comments

Comments

@paufabregat

Hello,

I copy/pasted and ran the asynchronous mdclient example in PHP, and I am getting really slow performance.
I tested it out on my MacBook:
Processor 2.2 GHz Intel Core i7
Memory 16 GB 1600 MHz DDR3

and in a test server:
Processor Intel(R) Xeon(R) CPU E5-1650 v2 @ 3.50GHz
Memory 128 GB

Comparing my results (which are pretty similar on both machines) with the ones in the Guide:
100K messages 1 worker
3.43s user
3.25s system
40.458 total

100K messages 10 workers
3.50s user
3.38s system
17.630 total

So this is much, much slower than the results from the Guide. Any idea why this happens?
Thanks in advance!

@hintjens
Contributor

The time seems to be spent doing not very much... you can find out where it's
being spent by breaking down the pieces. Start by testing a far simpler
A-to-B send/recv just to make sure you have no underlying system issues.
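For instance, a bare PUSH/PULL timing run takes only a few lines with the php-zmq extension; a minimal sketch (the filename, port, and counts here are arbitrary; start the receiver first):

<?php
// ab_test.php -- minimal A-to-B timing sketch (assumes the php-zmq extension
// and that tcp port 5555 is free; names and numbers are illustrative only).
// Usage: php ab_test.php recv    (start the receiver first)
//        php ab_test.php send
$count = 100000;
$role  = isset($argv[1]) ? $argv[1] : 'recv';
$context = new ZMQContext();

if ($role === 'recv') {
    $pull = $context->getSocket(ZMQ::SOCKET_PULL);
    $pull->bind("tcp://*:5555");
    $pull->recv();                       // first message starts the clock
    $start = microtime(true);
    for ($i = 1; $i < $count; $i++)
        $pull->recv();
    $elapsed = microtime(true) - $start;
    printf("%d messages in %.3f s (%.0f msg/s)\n", $count, $elapsed, $count / $elapsed);
} else {
    $push = $context->getSocket(ZMQ::SOCKET_PUSH);
    $push->connect("tcp://localhost:5555");
    for ($i = 0; $i < $count; $i++)
        $push->send("hello");
    sleep(1);                            // let the last messages flush before exit
}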


@paufabregat
Author

That's what I tried; I still don't know what the problem could be:

  • Simple DEALER-ROUTER, 100K messages: 3.375 sec (this seems fine, right?)
  • Weather updates example in PHP, 10M messages: 16 sec (much slower than the benchmark from the Guide)
  • Weather updates example in C, 10M messages: 13 sec
  • I also tried with old releases of ZeroMQ, but I get the same results

I have also run into another issue while testing the MD example:
when the client sends 100K messages at once, the broker only receives 80K and the rest are lost. On the other hand, if I send/receive 10K messages 10 times, everything is fine. I don't know if this is relevant, but I mention it just in case.
Any help will be appreciated! Thanks
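Could the drops be related to the default high-water mark (1000 queued messages per pipe)? That is just a guess on my part; a sketch of where I would raise it with php-zmq (the sizes and endpoint are arbitrary):

<?php
// hwm_guess.php -- fragments showing where the high-water marks would be raised
// (just a guess about the cause of the drops, not a confirmed fix).
$context = new ZMQContext();

// Client side (DEALER), before connect():
$client = $context->getSocket(ZMQ::SOCKET_DEALER);
$client->setSockOpt(ZMQ::SOCKOPT_SNDHWM, 200000);   // default is 1000
$client->setSockOpt(ZMQ::SOCKOPT_RCVHWM, 200000);
$client->connect("tcp://localhost:5555");

// Broker side (ROUTER), before bind():
$broker = $context->getSocket(ZMQ::SOCKET_ROUTER);
$broker->setSockOpt(ZMQ::SOCKOPT_SNDHWM, 200000);
$broker->setSockOpt(ZMQ::SOCKOPT_RCVHWM, 200000);
$broker->bind("tcp://*:5555");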

@hintjens
Contributor

The 10M/13 sec rate seems fair for PHP; each message is costing a fair
amount of work.

You could try the libzmq built-in performance tests to see how fast ZeroMQ
is without the language binding.


@paufabregat
Author

Latency test:
message size: 1 [B]
roundtrip count: 100000
average latency: 30.760 [us]

Throughput test:
message size: 1 [B]
message count: 100000
mean throughput: 6031726 [msg/s]
mean throughput: 48.254 [Mb/s]

These results are even a bit better than the ones on the performance web page, so I guess it is all good without the language binding...
So do you think this is just a matter of the language? I tried out the weather example in C and the results are pretty similar to the ones I got in PHP (a 10M/16 sec rate), and it shouldn't be like that, right?

@hintjens
Contributor

The weather example doesn't send all messages, IIRC; you can write your own
test cases to measure different patterns. The Guide examples are not meant
to be profiling tools.
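For instance, a fixed-size PUB/SUB batch with its own timing; a rough sketch assuming the php-zmq extension (the endpoint, counts, and END marker are arbitrary choices):

<?php
// pubsub_timing.php -- time a fixed batch of PUB/SUB updates
// (a sketch of a purpose-built test case, not a Guide example).
// Usage: php pubsub_timing.php sub    (start the subscriber first)
//        php pubsub_timing.php pub
$count = 100000;
$role  = isset($argv[1]) ? $argv[1] : 'sub';
$context = new ZMQContext();

if ($role === 'sub') {
    $sub = $context->getSocket(ZMQ::SOCKET_SUB);
    $sub->setSockOpt(ZMQ::SOCKOPT_RCVHWM, 0);        // unlimited, so nothing is dropped
    $sub->setSockOpt(ZMQ::SOCKOPT_SUBSCRIBE, "");
    $sub->connect("tcp://localhost:5556");
    $received = 0;
    $start = null;
    while (($msg = $sub->recv()) !== "END") {
        if ($start === null)
            $start = microtime(true);                // clock starts at the first update
        $received++;
    }
    if ($received > 0)
        printf("%d updates in %.3f s\n", $received, microtime(true) - $start);
} else {
    $pub = $context->getSocket(ZMQ::SOCKET_PUB);
    $pub->setSockOpt(ZMQ::SOCKOPT_SNDHWM, 0);
    $pub->bind("tcp://*:5556");
    sleep(1);                                        // crude slow-joiner workaround
    for ($i = 0; $i < $count; $i++)
        $pub->send("update $i");
    $pub->send("END");
    sleep(1);                                        // let the queue drain before exit
}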

