What is the request timeout for a queued request? #1492
I don't think max_requests is relevant here. But the question is on point: is there some timeout for queued connections?
@tuukkamustonen, you are right, max_requests is not relevant here. I just mentioned it to provide a scenario where requests can be queued. It is the sort of scenario I would face in the system that I am currently building.
So, is there anybody who can answer this at all? I thought this was a fairly straightforward question.
I'm not sure I understand the question...
There is no concept of a timeout here; the worker is simply restarted. Connections in the backlog or buffered in the proxy will then be accepted by the next available worker.
Thanks for your response!
My question is: for how long are requests buffered by gunicorn before they are discarded, in case all workers are busy? And is there any parameter to control this? I am not currently using a proxy server, so the buffering would be done by gunicorn itself.
I need this because I want one worker to handle only one request at a time. I couldn't find the right parameters to force gunicorn to conform to this, so I set max_requests to 1. For my current system this doesn't cause a performance issue. Another issue is that, unless I restart the worker, some variables from the previous request persist and interfere with the next one, causing a system crash. Anyway, the question is not directly related to max_requests; this is just to give you some background on the settings that I use.
@edyirdaw if the arbiter detects that a worker has been busy for longer than the timeout setting, the worker will be terminated. Generally speaking, it may be better to set up your client with a send/recv timeout at the socket level if you want to control this. In any case, the client timeout should be less than the timeout set on the server side.
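The socket-level client timeout suggested here can be sketched with Python's standard library alone. The throwaway local server (which accepts a connection but never replies) and the 0.5-second value are illustrative assumptions, not part of Gunicorn:

```python
import socket

# A server socket that queues a connection but never responds, so the
# client's recv() hits its own timeout rather than waiting forever.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))   # OS picks a free port
server.listen(1)                # backlog of 1 queued connection
host, port = server.getsockname()

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.settimeout(0.5)          # send/recv timeout at the socket level
client.connect((host, port))

try:
    client.recv(1024)           # server never sends, so this times out
    timed_out = False
except socket.timeout:
    timed_out = True

client.close()
server.close()
print(timed_out)  # True
```

As benoitc notes, this client-side value should be shorter than the server-side timeout so the client gives up first.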
I think it would be simpler and more efficient to have a hook that cleans your state, like the post_process hook. You could also do it when the request starts.
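A cleanup hook along these lines can live in a Gunicorn config file; Gunicorn's documented per-request hooks are pre_request and post_request. The shared_state dict below is a hypothetical stand-in for whatever state leaks between requests:

```python
# gunicorn.conf.py (sketch) -- clear leftover per-request state after each
# request instead of restarting the worker with max_requests=1.
# `shared_state` is a hypothetical stand-in for the leaking globals.
shared_state = {}

def pre_request(worker, req):
    # Called just before the worker processes a request.
    worker.log.debug("handling %s %s", req.method, req.path)

def post_request(worker, req, environ, resp):
    # Called after the response is sent: reset anything left behind.
    shared_state.clear()
```

Run with `gunicorn -c gunicorn.conf.py app:app` to pick up the hooks.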
I think we are talking about two different timeouts?
Good suggestion.
Very well! Thanks, I am fully satisfied by your answer. P.S. I find gunicorn much easier to handle than Apache. Thanks for the amazing work :), Gunicorn rocks!
@edyirdaw thanks! I will close the issue then. Feel free to open another discussion if you need.
Posting this for the benefit of others who stumble across this issue while looking for the answer. Since Gunicorn does not implement a connection queue itself, the queue is managed by the OS through the backlog parameter to the listen syscall. This is configurable in Gunicorn, and the default is 2048: http://docs.gunicorn.org/en/stable/settings.html#backlog
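For reference, the backlog (and the worker timeout discussed earlier) can be set in a Gunicorn config file; the values below are illustrative, not recommendations:

```python
# gunicorn.conf.py (sketch) -- cap the OS-level connection queue. Once the
# backlog is full, the kernel (not Gunicorn) starts refusing new
# connections. Equivalent to `gunicorn --backlog 64 app:app`.
backlog = 64    # default is 2048
workers = 1
timeout = 30    # seconds a busy worker may run before being killed
```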
Hello everyone. Considering the same scenario: if I'd like to "expire" a request after it has spent some time in the queue, how could I do that? Is there a server configuration for this? For example, when a new request arrives at the server and X seconds pass, the request should be discarded. Thank you in advance.
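Gunicorn itself has no such setting, since (as noted above) queuing happens in the OS backlog. One common workaround is WSGI middleware that rejects requests that sat in the queue too long. It assumes an upstream proxy stamps each request with an X-Request-Start header holding a Unix timestamp in milliseconds; the header name, its format, and the expire_stale helper are all assumptions for illustration, not a Gunicorn feature:

```python
import time

# Maximum time a request may spend queued before we refuse to serve it.
MAX_QUEUE_MS = 5000  # illustrative value

def expire_stale(app, max_queue_ms=MAX_QUEUE_MS, now_ms=None):
    """Wrap a WSGI app; 503 any request older than max_queue_ms."""
    def middleware(environ, start_response):
        started = environ.get("HTTP_X_REQUEST_START")  # set by the proxy
        current = now_ms() if now_ms else int(time.time() * 1000)
        if started and current - int(started) > max_queue_ms:
            start_response("503 Service Unavailable",
                           [("Content-Type", "text/plain")])
            return [b"request expired in queue"]
        return app(environ, start_response)
    return middleware
```

The `now_ms` parameter only exists to make the sketch testable; in production the wall clock is used.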
@benoitc: Could you please shed some light on @IvanAguiar's post? P.S. Sorry for tagging you 😇, I can't seem to find an answer to this.
Let's say we have one process created by Gunicorn and we send two requests one after the other. If we have max_requests = 1, then one of the requests will wait until the other finishes. I couldn't find in the documentation what the time limit is before the second request times out, or whether we can set this value ourselves. Thanks.