Understanding enable-threads #1141
What did your log say?
The first one.
Consider an example app.py that spawns a new thread from the request handler (the original attachment is not reproduced here; a minimal sketch follows below), and the corresponding uwsgi.log. We see that without the flag the new thread was created and ran fine, but it stopped as soon as the main thread finished handling the request. This happens because once the main thread acquired the GIL to finish the request, it did not release it (with the flag it does) before going back to wait for the next request.
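Since the original app.py and uwsgi.log attachments are not captured in this thread, here is a minimal sketch of the kind of app being described (names and details are illustrative, not the original code):

```python
# Sketch (not the original attachment): a WSGI app that starts a background
# thread from the request handler. Without --enable-threads the thread stops
# making progress after the handler returns, because the worker keeps the GIL
# while it waits for the next request; with --enable-threads it keeps running.
import threading
import time


def background_worker():
    # With --enable-threads these lines keep appearing in uwsgi.log;
    # without it they stop once the request has been handled.
    while True:
        print("background thread is alive")
        time.sleep(1)


def application(environ, start_response):
    # Start the background thread lazily, on the first request.
    if not any(t.name == "bg" for t in threading.enumerate()):
        t = threading.Thread(target=background_worker, name="bg")
        t.daemon = True
        t.start()
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"Hello\n"]
```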
@mafanasev what do you think about adding your example / explanation to the doc so it would be clear for future readers?
If you think my explanation is correct, then it's a good idea to add the example / explanation to the doc.
I just had this same experience / question. This explanation helped a lot. I found this via Google; I couldn't find it in the docs.
I have tested this with uwsgi 2.0.12 and I have not seen anything like this happen; it continues to work just fine.
This explanation helped a lot. |
Without this, uwsgi does not release the GIL before going back into `epoll_wait` to wait for the next request. This results in any background threads languishing, unserviced.[1] Practically, this results in Sentry background reporter threads timing out when attempting to post results -- but only in situations with low traffic, as in those significant time is spent in `epoll_wait`.

This is seen in logs as:

WARN [urllib3.connectionpool] Retrying (Retry(total=1, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError(SSLEOFError(8, 'EOF occurred in violation of protocol (_ssl.c:1131)'))': /api/123456789/envelope/

Or:

WARN [urllib3.connectionpool] Retrying (Retry(total=0, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ProxyError('Cannot connect to proxy.', RemoteDisconnected('Remote end closed connection without response'))': /api/123456789/envelope/

Sentry attempts to detect this and warn, but due to startup ordering, the warning is not printed without lazy-loading.

Enable threads, at a minuscule performance cost, in order to support background workers like Sentry.[2]

[1] unbit/uwsgi#1141 (comment)
[2] https://docs.sentry.io/clients/python/advanced/#a-note-on-uwsgi
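For completeness, the fix described in that commit message is simply turning the option on in the uwsgi configuration. A sketch of what that might look like (module name and other settings are illustrative):

```ini
; illustrative uwsgi.ini -- enable-threads initializes the GIL and releases it
; while the worker waits for requests, so app-spawned threads keep running
[uwsgi]
module = app:application
master = true
processes = 2
enable-threads = true
```

The same option can be passed on the command line as `--enable-threads`.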
The doc states that
... Python plugin does not initialize the GIL... your app-generated threads will not run.... remember to enable them with enable-threads
But my app-generated threads are running well without the flag. The threads are created with the threading module. Why does the doc say that threads will not run at all?
It seems to me that the flag affects only the uWSGI API. So I think it is safe to create and run threads within the app as long as the threads do not use the uWSGI API. Is that correct?
uwsgi 2.0.6, python 2.7.3.