severe degradation of performance under stress test #28
Comments
I'm not sure about the meaning of this test. If possible, could you make some changes to how it's run? Namely:
Such tests are of course interesting, but they have to be done under clean conditions to be reliable. Thanks.
Ok, I've probably been unclear. The test as you run it now is based on a flawed idea, so its results can't be trusted.

So, how to do it. The question to be answered is: how much time is needed to complete the operation? That means the answer should be measured in time units, not iterations. So let's test the operation, pure. From your script, it's like for ($i = 0; $i < 4096; ++$i) {

For measurements, let's take ab as the most readily available tool. Some useful options it has are -c and -n. A good result can be achieved by setting -c to the number of cores on the test machine. The -n value could be some multiple of -c, but in any case greater than or equal to it. Having a 4-core machine, I run ab like ab -c4 -n100 http://xxx.xxx.xxx.xxx/apcu_bench.php As in your approach, you can even pack that ab command into a loop if you wish; then you'll see how the results differ in each run. ab will run the concurrent requests in a much more controlled way than the fork approach, and besides that it will deliver some useful information.

The forked-processes approach is really out of control. If one process hangs a millisecond longer while still holding an open connection, but another curl is forked and waits for a connection, you might end up with far more curl processes than you imagine. And since they all hold an open connection, that will slow down the server. Repeating that again and again just means DDoS'ing the server. So that's really a bad idea, and it's probably the reason you see it get ever slower.

If you don't like ab, take any other appropriate tool, but please do it the way I've described. Actually, I'm really curious about the results you get :) Thanks.
Closing, no feedback. I'm not finishing the conversation, @Techmind; if you are able to do as weltling suggests and find strange behavior, feel free to open the bug. Just trying to keep on top of things.
I was doing some simple testing (on a project that heavily uses the APC user cache), repeated 5 times; these are the best results: PHP 5.4.17 with APC: PHP 5.5.3 (opcode cache is turned on by default?) with APCu: That is around 25% degradation?
@kowach, what this ticket was about is degradation over time, which is probably not what your test measured. One can still call it degradation, but what it really shows is a comparison of two cache variants. It would be great if you could extract some synthetic test cases; then you could file a new ticket about APC vs. opcache+APCu.
While comparing performance with eAccelerator I got these results:
index.php:
bash command: