How to implement max_execution_time for swoole http server #3078
I'd like to set a timeout (max execution time) for workers as well, and I couldn't find a solution to achieve this either.
If you mean to forcefully terminate a worker when a coroutine blocks (due to an error, an endless loop, or a blocking I/O operation) and exceeds a certain amount of time, you could implement a watchdog using Swoole\Timer and Swoole\Table. The idea is that with Timer each worker registers a callback that "checks in" every X seconds in shared memory (a Swoole table), and another worker (again with a Timer) checks every X seconds when each other worker last checked in. If it was more than X seconds ago (say X+1), you can kill that worker; Swoole will automatically restart it.
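A minimal sketch of that watchdog idea, assuming a standard Swoole HTTP server with the swoole, posix, and pcntl extensions available. The intervals, the table layout, and the choice of worker 0 as the watchdog are all illustrative, not a definitive implementation:

```php
<?php
// Sketch: each worker "checks in" periodically via a shared Swoole\Table;
// worker 0 kills any worker whose last check-in is too old, and the
// master process respawns it automatically.
use Swoole\Http\Server;
use Swoole\Table;
use Swoole\Timer;

$checkInMs   = 1000; // how often each worker checks in
$maxStaleSec = 2;    // kill a worker that has been silent this long

// The shared table must be created before $server->start()
$table = new Table(1024);
$table->column('pid', Table::TYPE_INT);
$table->column('last_seen', Table::TYPE_INT);
$table->create();

$server = new Server('0.0.0.0', 9501);

$server->on('workerstart', function (Server $server, int $workerId) use ($table, $checkInMs, $maxStaleSec) {
    // Every worker periodically records its PID and a timestamp.
    Timer::tick($checkInMs, function () use ($table, $workerId) {
        $table->set((string)$workerId, ['pid' => posix_getpid(), 'last_seen' => time()]);
    });

    // Worker 0 doubles as the watchdog.
    if ($workerId === 0) {
        Timer::tick($checkInMs, function () use ($table, $maxStaleSec) {
            foreach ($table as $id => $row) {
                if (time() - $row['last_seen'] > $maxStaleSec) {
                    // A blocked worker never reaches its check-in timer;
                    // kill it and let the master respawn it (needs pcntl for SIGKILL).
                    \Swoole\Process::kill($row['pid'], SIGKILL);
                }
            }
        });
    }
});

$server->on('request', function ($request, $response) {
    $response->end("OK\n");
});

$server->start();
```

One caveat with this layout: if the watchdog worker itself gets blocked, nothing restarts it, so in practice the watchdog is better placed in a dedicated Swoole\Process rather than an ordinary worker.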
We have a watchdog for coroutines (but the English documents are not yet synchronized).
Indeed there is the preemptive scheduler, but in my understanding using it will cause issues when accessing and modifying global or static variables (for example objects from a Dependency Injection container). For example:

```php
// some code running for 10 msec (10 msec was the allocated time slot, right?)
if (some_class::$DI_container->User->current_user_id === 0) {
    // a coroutine switch may occur here,
    // and then the assignment may overwrite a valid value
    // when execution returns to this coroutine
    some_class::$DI_container->User->current_user_id = 1;
}
```

So any time a variable shared between coroutines is checked and modified, locking needs to be employed. Is there any other mechanism to limit the maximum execution time of a coroutine? Maybe some call after which an exception is thrown if the timeout is exceeded, and this to be valid for all coroutines. The best would be if it were allowed to be changed/increased during runtime too...
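For reference, something close to the per-coroutine timeout asked for here was added in later Swoole releases: to my knowledge, Swoole 5.0+ provides Swoole\Coroutine::cancel() and Coroutine::isCanceled(). A sketch under that assumption; note that cancellation only interrupts a coroutine parked at a cancellable point (sleep, socket I/O), so it still cannot break a CPU-bound loop that never yields:

```php
<?php
// Sketch, assuming Swoole >= 5.0: cancel a coroutine from a timer.
use Swoole\Coroutine;
use Swoole\Timer;

Co\run(function () {
    $cid = Coroutine::create(function () {
        // Coroutine::sleep() is a cancellation point; it is expected to
        // return false when the coroutine is canceled mid-sleep.
        if (Coroutine::sleep(10) === false && Coroutine::isCanceled()) {
            echo "coroutine canceled\n";
            return;
        }
        echo "finished normally\n";
    });

    // Cancel the worker coroutine after 500 ms.
    Timer::after(500, function () use ($cid) {
        Coroutine::cancel($cid);
    });
});
```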
@thomasbley @healerz please check the following example:

```php
<?php
$server = new Swoole\HTTP\Server("0.0.0.0", 9501);

$server->on("start", function (Swoole\Http\Server $server) {
    echo "Swoole http server is started at http://127.0.0.1:9501\n";
});

$server->on("request", function (Swoole\Http\Request $request, Swoole\Http\Response $response) {
    $max_execution_ms = 500;
    $timer = Swoole\Timer::after($max_execution_ms, function () use ($response, $max_execution_ms) {
        if (!$response->output) {
            $response->output = 1;
            $response->header("Content-Type", "text/plain");
            $response->end("Timeout after {$max_execution_ms}ms\n");
        }
    });

    // your application layer latency
    co::sleep(0.3);

    if (!$response->output) {
        $response->output = 1;
        $response->header("Content-Type", "text/plain");
        $response->end("Hello World\n");
    }
    Swoole\Timer::clear($timer);
});

$server->start();
```
@kenashkov A timer can be used within a coroutine to limit the max execution time. Indeed, all the coroutines may modify the same global or static variable at unpredictable points when I/O is involved, which may not be as expected. In such a case, at the moment we would suggest isolating global or static variables based on the coroutine context. This may be improved in the future.
@doubaokun We have done this with an $arr[$cid] already (before getContext() was available), but now I think it is more appropriate to use the coroutine context. How exactly do you mean to use Swoole\Timer to limit the execution? I imagine we could register a function with the Timer, and this function would check all other coroutines for how long they have been running. Each coroutine could register in its context the time when it was started (or the registered function, if run often enough, can track this as well) and then terminate a coroutine (not the whole worker), but how? Is there currently a way to terminate a coroutine, like Coroutine::kill($cid)? There is the yield() method, but this just interrupts it; indeed, a coroutine interrupted this way will no longer execute unless resume()d, but it will just leak.
@kenashkov please check my example: the onRequest callback is a coroutine with a timer to return/kill itself based on the time limitation.
I was looking on my mobile phone and didn't see your previous answer with the example. I checked it now and I still think it doesn't solve the problem completely, because we may have sub-coroutines in the main coroutine handling the request. Let's say a sub-coroutine is reading from the DB, doing file ops, then doing some processing... My point was that there could be an endless loop in one of these, and while your example solves the problem of pushing a response to the client, it will not actually terminate this sub-coroutine. And if it is not terminated or suspended, it will keep eating CPU and block the worker.

As per my previous post, though, I was thinking we could implement a watchdog using the timer and just suspend such a coroutine and send error notifications. It would create a small leak (the stack of the coroutine), but at least it would not block the worker until the server is restarted. But in fact this won't work either: if the worker is blocked by an endless loop, the timer does not interrupt it to switch to the registered coroutine. This means that your example will not work either if the script runs for more than $max_execution_ms because of an endless loop (in the main coroutine). Even using ini_set("swoole.enable_preemptive_scheduler", "1"); does not help in the case of an endless loop, as it seems this scheduler still counts on a yield point somewhere. I tested this on:
@kenashkov the coroutine execution status should be managed by the code; return from the coroutine when it is necessary, for example:

```php
<?php
Co\run(function () {
    $max_execution_ms = 500;
    $done = 0;
    $timer = Swoole\Timer::after($max_execution_ms, function () use (&$done) {
        $done = 1;
        echo "timeout\n";
    });
    // other code
    while (1) {
        if ($done) {
            Swoole\Timer::clear($timer);
            echo "end\n";
            return;
        }
        co::sleep(0.1);
        echo "1\n";
    }
    Swoole\Timer::clear($timer);
    echo "end\n";
});
echo "main";
```
Thanks for replying! My point was that there may be no yield point in the code, like:

```php
Co\run(function () {
    $max_execution_ms = 500;
    $done = 0;
    $timer = Swoole\Timer::after($max_execution_ms, function () use (&$done) {
        $done = 1;
        echo "timeout\n";
    });
    // other code
    // no function that yields here
    while (1) {
        if ($done) {
            Swoole\Timer::clear($timer);
            echo "end\n";
            return;
        }
        // co::sleep(0.1); // assume no yield here
        echo "1\n";
    }
    Swoole\Timer::clear($timer);
    echo "end\n";
});
echo "main";
```

Basically: a loop doing data processing (not pulling from the DB, no coroutine yield) that, because of an error in its break conditions, becomes endless under certain initial data. In this case, in a Swoole context, there seems to be no way to interrupt it unless a watchdog is used and another worker terminates the blocked worker. In a normal PHP context it would be terminated when max_execution_time is reached. The only way to get this in Swoole is to explicitly include code in the loop that yields, to ensure the timer has a chance to run, and then act as in your example.
@doubaokun Thanks for your reply and the example.
@healerz Can you test your code with this setting?

```php
<?php
ini_set("swoole.enable_preemptive_scheduler", "1");
```
@doubaokun My expectation would be that the process gets terminated after 500ms; at least, that's what I'm trying to achieve in order to prevent everlasting processes :)
Yes, this was my point: there may be an infinite loop containing no code that yields (be it co::sleep() or anything else). In this case the preemptive scheduler does not work either, as it seems the internal event loop is never reached because the executor gets stuck in the loop. I have no knowledge of the internals, but I would assume that an interrupt inside the PHP VM would need to be used/implemented to break out of the endless loop and yield back to the event loop/scheduler (I guess similar to where the current max_execution_time logic hooks in). The only current solution is to use a watchdog, just restart the blocked worker process, and emit an error message...

P.S. I think this issue is related to #2827 - I was looking at how to measure exactly how much time is spent in a coroutine for profiling purposes (thus I needed to know when they switch). Perhaps, if no API for tracking the switching is implemented, at least an API for reading how long a coroutine has been executing could be added, if the current issue (max execution time per coroutine) is considered.
If you enable the preemptive scheduler, the process won't be dead and is still able to accept and process new connections, but the loops are still running. The concept is to control the timeout and terminate the loop at the application layer:

```php
<?php
use Swoole\Http\Request;
use Swoole\Http\Response;
use Swoole\Http\Server;
use Swoole\Timer;

ini_set('swoole.enable_preemptive_scheduler', '1');

$server = new Server('0.0.0.0', 9501, SWOOLE_BASE);

$server->on('start', function (Server $server) {
    echo 'Swoole http server started' . PHP_EOL;
});

$server->on('request', function (Request $request, Response $response) {
    $maxExecutionTimeMs = 500;
    $die = 0;
    $timer = Timer::after($maxExecutionTimeMs, function () use ($response, $maxExecutionTimeMs, &$die) {
        if (!$die) {
            $die = 1;
            $response->end("Timeout after {$maxExecutionTimeMs}ms\n");
        }
    });
    while (true) {
        // check the coroutine status and exit if $die = 1
        if ($die) {
            return;
        }
        co::sleep(1);
        echo "tick\n";
    }
    $response->end("Hi\n");
});

$server->start();
```

Will have a look at #2827
Please answer these questions before submitting your issue. Thanks!

Following the code example at https://www.swoole.co.uk/docs/modules/swoole-http-server-doc, I'd like to ask if you could provide a code example enabling HTTP workers to stop processing a request after a certain period of time (similar to max_execution_time in php-fpm).

Environment (`php --ri swoole`):
- Swoole 4.4.15
- Docker version 19.03.5
- kernel 5.4.7

Thanks!