Added graceful shutdown on SIGHUP #48
Conversation
Thanks for the pull request! I think the functions you added need some unit tests, however. Would it be possible for you to extend the test suite to include tests for them?
Yes, I'll try to provide them. I have a question concerning logging:
I think my log issue will be solved on the "chaussette side"; I will send them a pull request so the chaussette logger configuration propagates to waitress.
Are the tests I provided good enough?
@@ -229,10 +229,12 @@ def close_on_finish():
response_headers.append(('Connection', 'Keep-Alive'))
else:
close_on_finish()
self.channel.is_keepalive = True
I don't quite understand the placement of this. It seems that the code will consider any HTTP/1.0 request a keepalive request as the result of this line? Should it not be moved to after line 229?
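The concern can be illustrated with a minimal sketch of the standard HTTP keep-alive rules (a simplified stand-in, not waitress's actual code): HTTP/1.0 connections close unless the client explicitly sends `Connection: Keep-Alive`, while HTTP/1.1 connections persist unless the client sends `Connection: close`. Setting `is_keepalive = True` unconditionally on this code path would treat both cases alike.

```python
def decide_keepalive(version, connection_header):
    """Return True if the connection should be kept open.

    Simplified sketch of HTTP keep-alive semantics; a hypothetical
    helper, not the waitress implementation.
    """
    connection_header = (connection_header or '').lower()
    if version == '1.0':
        # HTTP/1.0: close by default, keep open only on explicit request.
        return connection_header == 'keep-alive'
    if version == '1.1':
        # HTTP/1.1: persistent by default, close only on explicit request.
        return connection_header != 'close'
    return False
```

Under these rules an HTTP/1.0 request with no `Connection` header must not be treated as keep-alive, which is what the placement of the added line appears to do.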
When merging this patch into the master, I get a number of test failures when running tests under any Python version. The output of Python 2.6 is provided below:
Note also that running the tests via tox should produce 100% test coverage when the "coverage" part is run. I doubt it will right now (can't tell until the tests are fixed).
@@ -281,6 +290,7 @@ def handle_close(self):
'Unknown exception while trying to close outbuf')
self.connected = False
asyncore.dispatcher.close(self)
self.server.on_channel_close(self)
This indirection is probably unnecessary: on_channel_close doesn't use the channel, and all it does is call check_shutdown_gracefully. handle_close should probably just call check_shutdown_gracefully directly.
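The suggested simplification, as a hedged sketch (the classes below are minimal stand-ins for waitress's server and channel; only the method names come from the diff):

```python
class Server:
    """Stand-in for the waitress server; names follow the diff above."""
    def __init__(self):
        self.shutdown_gracefully = False
        self.active_channels = {}
        self.closed = False

    def check_shutdown_gracefully(self):
        # Close the server once graceful shutdown was requested and
        # no channels remain active.
        if self.shutdown_gracefully and not self.active_channels:
            self.closed = True

class Channel:
    """Stand-in for the waitress channel."""
    def __init__(self, server):
        self.server = server

    def handle_close(self):
        # ... socket teardown elided ...
        # Call the check directly instead of going through
        # server.on_channel_close(self), since that hook ignores
        # its channel argument anyway.
        self.server.check_shutdown_gracefully()
```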
You are right, during development it was calling other things. I'll clean it.
Note also that the TaskDispatcher.shutdown method already attempts to wait 5 seconds for all task threads to finish running; would the application of #56 be enough to prevent the need for this patch?
I'll try to work on this this weekend. |
Before you spend a lot of time on improving this patch, please do look at #56 and see if there's a way to make the default shutdown "more graceful" as opposed to adding code for graceful shutdown as per this pull request. |
#56 only makes the thread death timeout (which already exists) configurable. But it only deals with the tasks, not the channels, so we can't be sure that all the HTTP requests are fulfilled.
What if waitress is running in something other than the main thread... at that point sending a signal won't help, since the main thread would get it. It would be nice if you could simply stop the loop. Something like this would be useful... and would be an interface people are used to.
Right now the assumption is that waitress is always the only thing running in the process. If the ability to shut down was added, the user could create their own signal handlers, depending on the vagaries of their current environment. This way the IPC mechanism (signals) is not defined by waitress (it also simplifies the change to waitress).
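A minimal sketch of what user-installed signal handling could look like if waitress exposed a plain shutdown() entry point. Note that `shutdown()` here is the proposed API, not an existing waitress method, and `FakeServer` is a stand-in; `signal.signal()` must be called from the main thread, but the server loop itself could then run in any thread, since the handler only flips a flag.

```python
import signal

class FakeServer:
    """Stand-in assuming waitress exposed the proposed shutdown() API."""
    def __init__(self):
        self.shutdown_requested = False

    def shutdown(self):
        # In the real proposal this would set the internal
        # shutdown-gracefully flag checked by the server loop.
        self.shutdown_requested = True

def install_graceful_handler(server):
    # Must run in the main thread (a CPython restriction on
    # signal.signal); the server loop can live anywhere.
    signal.signal(signal.SIGHUP, lambda signum, frame: server.shutdown())
```

This keeps the choice of IPC mechanism with the user: they can wire SIGHUP, SIGTERM, or no signal at all to the same shutdown() call.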
(Neither of those modules is quite suitable for a production server.) If this patch were merged, I could easily use it by updating the "shutdown_gracefully" flag myself, but maybe better to formalize that (see comment).
@@ -77,6 +80,9 @@ def __init__(self,
self.effective_host, self.effective_port = self.getsockname()
self.server_name = self.get_server_name(self.adj.host)
self.active_channels = {}
self.shutdown_gracefully = False
Make this __shutdown_gracefully, so users don't have direct access to this mechanism, but then add a shutdown() method that sets it, so users can install their own signal handling if needed.
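The suggestion, as a hedged sketch (the class below is illustrative; Python's name mangling makes the flag awkward to reach from outside, nudging callers through the public method instead):

```python
class WSGIServer:
    def __init__(self):
        # Name-mangled to _WSGIServer__shutdown_gracefully, so callers
        # can't casually flip it; they go through shutdown() instead.
        self.__shutdown_gracefully = False

    def shutdown(self):
        # Public entry point; users can wire this to their own signal
        # handlers (SIGHUP, SIGTERM, ...) as they see fit.
        self.__shutdown_gracefully = True
```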
For me to accept this patch:
I personally haven't had a use case for this yet: running waitress behind a load balancer, I simply remove it from the load balancer, and once the load balancer notifies me that all open requests are complete, I can shut down waitress. If you are willing to work on this @earonesty, feel free to do so by pulling this PR down and creating a new PR based upon it. (However, @pomarec would need to sign https://github.com/Pylons/waitress/blob/master/CONTRIBUTORS.txt.) Or you may create a new patch based upon
They are using waitress underneath... you may of course take that source code and use them as a starting point, if you wish.
FYI I don't have time to work on this anymore. I'm sorry.
@pomarec can you push your name to |
I am closing this PR as the original submitter has not signed contributors.txt.
A use case for this would be to ensure all pending requests are completed before shutdown when used with a hot reloader like hupper.
Any new updates here or future plans to implement this? |
@unitto1 the following comment describes what needs to be done: #269 (comment). Are there future plans? Sure, but I only have so much time, and this is not a high-priority item for me.
@bertjwregeer clear, ty! |