
severe degradation of performance under stress test #28

Closed
Techmind opened this issue Jul 26, 2013 · 6 comments

Comments

@Techmind

While comparing performance with eAccelerator, I got the following results:

index.php:

<?php
const WAIT_STEP = 3;  // seconds to wait so all workers can line up
const DURATION = 10;  // seconds the benchmark loop should run

apcu_store('test', 1);

// If the previous window's start time has already passed, reset it.
$start_time = apcu_fetch('start');
if ($start_time <= microtime(true)) {
    apcu_delete('start');
    apcu_delete('end');
    $start_time = null;
}

// First worker in: publish a shared start/end window via APCu.
if (!$start_time) {
    $start_time = microtime(true) + WAIT_STEP;
    $end_time = microtime(true) + WAIT_STEP + DURATION;
    apcu_store('start', $start_time);
    apcu_store('end', $end_time);
}

sleep(1);
$start_time = apcu_fetch('start');
$end_time = apcu_fetch('end');

// Sleep until the shared start time so all workers begin together
// (guard against a negative argument if the start time already passed).
usleep(max(0, ($start_time - microtime(true)) * 1000000));

// Hammer the cache until the shared end time, counting batches of 100 ops.
$count = 0;
while (microtime(true) < $end_time) {
    for ($i = 0; $i < 100; $i++) {
        $x = apcu_store('test', 1);
        $x = apcu_fetch('test');
        $x = apcu_delete('test');
    }
    $count++;
}
echo $count . "\n";
die;

bash command:

  1. run for a while:

while true; do (echo "" > out; for i in {1..6}; do curl -s http://localhost/ & >> out; done) | awk '{s+=$1} END {print s}' >> input && tail -n 1 input && sleep 2; done

  2. you will see a descending trend:
3256
1873
1562
1319
1130
1081
990
871
740
564
657
562
625
537
498
527
553
559
531
422
452
440
462
424
430
442
441
413
421
420
414
398
404
392
385
377
378
334
295
311
347
331
311
329
320
327
299
251
256
269
266
243
239
247
272
273
272
@weltling (Collaborator)

I'm not sure about the meaning of this test. If possible, could you make some changes to how it's run? Namely:

  • simplify the test script so it contains only the relevant code
  • in particular, don't mix the extensions in the same test run if you want to compare them
  • for HTTP tests, use ab or a similarly established tool
  • for HTTP tests, run the test tool from another machine
  • do not discard any of the data

Such tests are of course interesting, but they have to be done under clean conditions to be reliable.

Thanks.

@Techmind (Author)

  1. Meaning of the test: the performance of operations degrades over time => critical bug, not acceptable for a cache.

  • simplify the test script so it contains only the relevant code
    Dropped the commented-out code & the eAccelerator usage.
  • in particular, don't mix the extensions in the same test run if you want to compare them
    eAccelerator was only used as a common synchronization point (to start all web threads at the same time) and wasn't actually being tested here; I rewrote the script to use APCu for synchronization too.
  • for HTTP tests, use ab or a similarly established tool
    Well, it's not really an HTTP test; I need the responses because I sum them to get the total number of iterations per measured interval, so ab can't be used.
  • for HTTP tests, run the test tool from another machine
    It was actually done that way; I replaced the host with localhost for easier reproducibility.
  • do not discard any of the data
    Sorry, I was actually interested in the top performance of each for a fair comparison; I've rewritten the original message a bit.

@weltling (Collaborator)

Ok, I've probably been unclear. The test as you run it now is built on a flawed idea, so its results can't be meaningful.

  • you talk about time, but you count iterations, you see?
  • even that small script contains far too much extra stuff; testing sleep, microtime and the rest isn't the intention
  • running curl by forking is a bad idea - the time the processes spend starting up and shutting down, connecting, disconnecting, hanging or not, etc. is of no interest here
  • the test IS an HTTP test, as otherwise there's no way to reproduce the behavior of multiple threads/processes

So how to do it? The question to be answered is: how much time is needed to complete the operation? That means the answer should be measured in time units, not iterations. So let's test the operation, pure. From your script, that is something like:

for ($i = 0; $i < 4096; ++$i) {
    $x = apcu_store('test', 1);
    $x = apcu_fetch('test');
    $x = apcu_delete('test');
}

For the measurements, let's take ab as the most widely available tool. Some useful options it has are -c and -n. A good result can be achieved by setting -c to the number of cores on the test machine. The -n value could be some multiple of -c, but should in any case be bigger or equal. Actually, having a 4-core machine, I run ab like:

ab -c4 -n100 http://xxx.xxx.xxx.xxx/apcu_bench.php

As in your approach, you can even pack that ab command into a loop if you wish; then you'll see how the results differ from run to run. ab will run the concurrent requests in a much more controlled way than the fork approach, and it will also report some useful information. The forked-processes approach is really out of control: if one curl hangs a millisecond longer while still holding an open connection, and meanwhile another curl is forked and waits for a connection, you may end up with far more curl processes than you imagine. And since they all hold open connections, that slows the server down. Repeating that again and again just means DDoS'ing your own server. So it's really a bad idea, and it's probably the reason your numbers keep getting slower. If you don't like ab, take any other appropriate tool, but please do it the way I've described. I'm actually really curious about the results you get :)
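For reference, a minimal sketch of that loop (a hypothetical illustration, assuming apcu_bench.php contains just the reduced loop above; grep pulls ab's "Requests per second" summary line from each run):

# run ab repeatedly; each run appends its requests-per-second line to ab_results.txt
while true; do
    ab -c4 -n100 http://xxx.xxx.xxx.xxx/apcu_bench.php 2>/dev/null \
        | grep 'Requests per second' | tee -a ab_results.txt
    sleep 2
done

If the cache really degrades over time, the requests-per-second figure should fall from one run to the next.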

Thanks.

@krakjoe (Owner)

krakjoe commented Aug 6, 2013

Closing, no feedback ...

I'm not cutting the conversation short: @Techmind, if you are able to test as weltling suggests and still see strange behavior, feel free to reopen the bug; I'm just trying to keep on top of things ...

@krakjoe closed this as completed Aug 6, 2013
@kowach

kowach commented Aug 26, 2013

I was doing some simple testing (on a project that makes heavy use of the APC user cache):
ab -H "Connection:close" -c 20 -n 100 http://localhost/

Repeated 5 times; these are the best results:

PHP 5.4.17 with APC:
Requests per second: 9.69 #/sec

PHP 5.5.3 (opcode cache is turned on by default?) with APCu:
Requests per second: 7.12 #/sec

That is around a 25% degradation?

@weltling (Collaborator)

@kowach, what this ticket was about is degradation over time, which is probably not what your test measured. One can still call it degradation, but what it actually shows is a comparison of two cache variants.

It would be great if you could extract some synthetic test cases; then you could file a new ticket about APC vs. opcache+APCu.
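As a starting point, here is one possible shape for such a synthetic case (a hypothetical sketch, not a finished benchmark: the iteration count is arbitrary, and the apcu_* calls would need to become their apc_* equivalents on the APC side):

<?php
// Time the raw cache operations and report the cost per iteration,
// so the result is expressed in time units rather than loop counts.
$iterations = 100000;

$start = microtime(true);
for ($i = 0; $i < $iterations; ++$i) {
    apcu_store('test', 1);
    apcu_fetch('test');
    apcu_delete('test');
}
$elapsed = microtime(true) - $start;

printf("%d iterations in %.3f s (%.2f us/iteration)\n",
       $iterations, $elapsed, ($elapsed / $iterations) * 1e6);

Running that on both setups and comparing the per-iteration times would give a cleaner number than whole-page requests per second.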
