
What is the request timeout for a queued request? #1492

Closed
edyirdaw opened this issue Mar 30, 2017 · 13 comments
Comments

@edyirdaw

Let's say we have one process created by gunicorn and we send two requests, one after the other. If we have max_requests = 1, one of the requests will wait until the other finishes. I couldn't find in the documentation what the time limit is before the second request times out, or whether we can set this value ourselves. Thanks.

@tuukkamustonen

I don't think max_requests is relevant here. The worker is just restarted after that many requests.

But the question is on point: is there some timeout for queued connections?

@edyirdaw
Author

edyirdaw commented Apr 3, 2017

@tuukkamustonen, you are right, max_requests is not relevant here. I just mentioned it to provide a scenario where requests can be queued. It is more or less the scenario I face in the system I am currently building.

@edyirdaw
Author

So, is there anybody who can answer this at all? I thought this was a fairly straightforward question.

@benoitc
Owner

benoitc commented Apr 17, 2017

I'm not sure I understand the question...

max_requests is used to forcefully restart a worker after N requests:
http://docs.gunicorn.org/en/stable/settings.html#max-requests

There is no concept of a timeout here; the worker is simply restarted. Connections in the backlog, or buffered in the proxy, will then be accepted by the next available worker.

Note: setting it to 1 seems silly, since it means you will spawn a new worker on each request. Why do you need to set this anyway?
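For reference, max_requests is normally combined with max_requests_jitter so that workers do not all recycle at the same moment; a sketch of a gunicorn.conf.py with illustrative values (not a recommendation for any particular workload):

```python
# gunicorn.conf.py -- illustrative values only
workers = 4
max_requests = 1000        # recycle each worker after ~1000 requests
max_requests_jitter = 50   # stagger restarts so workers don't all recycle at once
```

The jitter subtracts a random offset per worker, which avoids a thundering-herd restart across the pool.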

@benoitc benoitc added this to Answered, waiting in Forum Apr 17, 2017
@edyirdaw
Author

Thanks for your response!

Connections in the backlog or buffered in the proxy will be then accepted by the next worker available.

My question is: for how long are requests buffered by gunicorn before they are discarded, in case all workers are busy? And is there any parameter to control this? I am not currently using a proxy server, so the buffering would be done by gunicorn itself.

note: setting it to 1 seems silly since it means you will spawn a new worker on each requests. why do you need to set this anyway?

I need this because I want one worker to handle only one request at a time, for reasons specific to my application. I couldn't find the right parameters to force gunicorn to conform to this, so I set it to 1. For my current system this doesn't cause a performance issue. There is also another problem: unless I restart the worker, some variables from the previous request persist and interfere with the next request, causing a system crash.

Anyway, the question is not directly related to max_requests. This is just to give you some background on the settings that I use.

@benoitc
Owner

benoitc commented Apr 17, 2017

@edyirdaw if the arbiter detects that a worker has been busy for longer than the timeout setting, the worker is terminated. Generally speaking, it may be better to set up your client with a send/recv timeout at the socket level if you want to control this. In any case, the client timeout should be less than the timeout set on the server side.
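The client-side send/recv timeout mentioned here can be set directly on the socket; a minimal sketch in Python, where the host, port, and request bytes are placeholders for your actual client:

```python
import socket

def fetch_with_timeout(host="127.0.0.1", port=8000, timeout=5.0):
    """Hypothetical client: give up if the server doesn't respond in time.

    The timeout applies to connect() and to each recv() call; keep it
    below the server-side (gunicorn --timeout) value.
    """
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.settimeout(timeout)
    try:
        sock.connect((host, port))
        sock.sendall(b"GET / HTTP/1.0\r\nHost: localhost\r\n\r\n")
        return sock.recv(4096)
    except OSError:
        # Covers socket.timeout (the request waited too long, e.g. all
        # workers busy) as well as connection refused/reset.
        return None
    finally:
        sock.close()
```

From the client's point of view, a request stuck behind busy workers simply fails after `timeout` seconds, which is the practical way to "expire" a queued request.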

I need this because I want one worker to handle only one request at a time for some reason.

I think it would be simpler and more efficient to have a hook that cleans your state, like the post_process hook. You can also do it when the request starts.
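Gunicorn's server hooks for this are named pre_request and post_request (the post_process name above does not appear in the settings docs). A sketch of such a cleanup in gunicorn.conf.py, where request_state stands in for whatever per-request globals the app accumulates:

```python
# gunicorn.conf.py -- sketch; `request_state` is a stand-in for the
# per-request variables that would otherwise leak between requests.
request_state = {}

def post_request(worker, req, environ, resp):
    """Gunicorn calls this in the worker after each request completes."""
    request_state.clear()  # drop leftovers before the next request is handled
```

This avoids recycling the whole worker (max_requests = 1) just to get a clean slate.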

@edyirdaw
Author

@edyirdaw if the arbiter detects that a worker has been busy for longer than the timeout setting, the worker is terminated. Generally speaking, it may be better to set up your client with a send/recv timeout at the socket level if you want to control this. In any case, the client timeout should be less than the timeout set on the server side.

I think we are talking about two different time-outs?
OK, let me put my question again in more direct way: let us say a request comes to gunicorn. But all workers are busy. How long will the request be waiting before it is discarded by gunicorn in case all wokers become busy for very long time? And is there a parameter to set this number?

I think it should be simpler and more efficient to have a hook that clean your state. Like the post_process hook. You can also do it when the request start.

Good suggestion.

@benoitc
Owner

benoitc commented Apr 17, 2017

@edyirdaw workers are never busy for longer than the time set via the timeout setting. There is no connection queue in Gunicorn. In your case, client connections will stay in the socket backlog until the socket connection times out or the connection is accepted by a worker.

@edyirdaw
Author

Client connections will stay in the socket backlog in your case until the socket connection times out

Very well! Thanks, I am fully satisfied by your answer.

P.S. I find gunicorn much easier to handle than Apache. Thanks for the amazing work that you did :), Gunicorn rocks!

@benoitc
Owner

benoitc commented Apr 17, 2017

@edyirdaw thanks!

I will close the issue then. Feel free to open any other discussion if you need.

@benoitc benoitc closed this as completed Apr 17, 2017
@eloff

eloff commented Jan 9, 2020

Posting this for the benefit of others who are trying to find the answer to this question and stumble across this issue.

Since Gunicorn does not implement a connection queue, the queue length is determined by the OS through the backlog parameter to the listen syscall. This is configurable in Gunicorn, and the default appears to be 2048: http://docs.gunicorn.org/en/stable/settings.html#backlog
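One consequence worth noting: lowering the backlog makes the OS refuse excess connections quickly instead of letting them wait in the accept queue, which is the closest server-side lever to "expiring" queued requests. A gunicorn.conf.py sketch with an illustrative value:

```python
# gunicorn.conf.py -- illustrative; with a small backlog, connections
# beyond the pending-queue limit are refused by the OS rather than
# waiting indefinitely for a free worker.
backlog = 64   # default is 2048
workers = 4
```

Clients then see a fast connection failure (which they can retry) instead of an open connection that sits in the kernel queue.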

@IvanAguiar

Hello everyone,

For this case, considering the same scenario: if I'd like to "expire" a request after some time in the queue, how could I do that? Is there a way in the server config? For example, when a new request arrives at the server, after X seconds the request must be discarded.

Thank you in advance.

@raqibhayder

@benoitc: Could you please shed some light on @IvanAguiar's post?

P.S. Sorry for tagging you 😇, can't seem to find an answer to this.
