
Understanding enable-threads #1141

Open
mafanasev opened this issue Jan 2, 2016 · 8 comments
@mafanasev
The docs state that "... the Python plugin does not initialize the GIL... your app-generated threads will not run... remember to enable them with enable-threads."
But my app-generated threads (created with the threading module) run fine without the flag.
Why do the docs say that threads will not run at all?

It seems to me that the flag only affects the uWSGI API. So I think it is safe to create and run threads within the app, as long as those threads do not use the uWSGI API.
Is that correct?

uWSGI 2.0.6, Python 2.7.3.

@methane
Contributor

methane commented Jan 3, 2016

What does your log say?

    uwsgi_log_initial("*** Python threads support is disabled. You can enable it with --enable-threads ***\n");

or

    uwsgi_log("python threads support enabled\n");

@mafanasev
Author

The first one.

@mafanasev
Author

Consider example app.py:

import os

from flask import Flask
from threading import Thread, current_thread
from time import sleep

app = Flask(__name__)

def target():
    # background thread: report liveness once a second
    while True:
        sleep(1)
        print os.getpid(), current_thread(), 'alive'

thread = None


@app.route('/')
def index():
    global thread
    print os.getpid(), current_thread()
    if not thread:
        # spawn the background thread on the first request
        thread = Thread(target=target)
        thread.setDaemon(True)
        thread.start()
    sleep(10)
    return 'OK'

Run it with:

uwsgi --plugins python --file app.py --master --logto uwsgi.log --processes 1 --http-socket 0.0.0.0:8080 --callable app &

And uwsgi.log:

*** Starting uWSGI 2.0.6-debian (64bit) on [Tue Jan  5 15:53:02 2016] ***
compiled with version: 4.6.4 on 22 January 2015 14:10:28
os: Linux-3.2.0-68-virtual #102-Ubuntu SMP Tue Aug 12 22:14:39 UTC 2014
machine: x86_64
clock source: unix
pcre jit disabled
detected number of CPU cores: 1
detected binary path: /usr/bin/uwsgi-core
your processes number limit is 15964
your memory page size is 4096 bytes
detected max file descriptor number: 1024
lock engine: pthread robust mutexes
thunder lock: disabled (you can enable it with --thunder-lock)
uwsgi socket 0 bound to TCP address 0.0.0.0:8080 fd 4
Python version: 2.7.3 (default, Jun 22 2015, 19:44:33)  [GCC 4.6.3]
*** Python threads support is disabled. You can enable it with --enable-threads ***
Python main interpreter initialized at 0x16be040
your server socket listen backlog is limited to 100 connections
your mercy for graceful operations on workers is 60 seconds
mapped 145536 bytes (142 KB) for 1 cores
*** Operational MODE: single process ***
WSGI app 0 (mountpoint='') ready in 0 seconds on interpreter 0x16be040 pid: 13780 (default app)
*** uWSGI is running in multiple interpreter mode ***
spawned uWSGI master process (pid: 13780)
spawned uWSGI worker 1 (pid: 13802, cores: 1)
13802 <_MainThread(MainThread, started 140347610949440)>
13802 <Thread(Thread-1, started daemon 140347533465344)> alive
13802 <Thread(Thread-1, started daemon 140347533465344)> alive
13802 <Thread(Thread-1, started daemon 140347533465344)> alive
13802 <Thread(Thread-1, started daemon 140347533465344)> alive
13802 <Thread(Thread-1, started daemon 140347533465344)> alive
13802 <Thread(Thread-1, started daemon 140347533465344)> alive
13802 <Thread(Thread-1, started daemon 140347533465344)> alive
13802 <Thread(Thread-1, started daemon 140347533465344)> alive
13802 <Thread(Thread-1, started daemon 140347533465344)> alive
[pid: 13802|app: 0|req: 1/1] 127.0.0.1 () {24 vars in 361 bytes} [Tue Jan  5 15:53:06 2016] GET / => generated 2 bytes in 10011 msecs (HTTP/1.1 200) 2 headers in 78 bytes (1 switches on core 0)
13802 <_MainThread(MainThread, started 140347610949440)>
13802 <Thread(Thread-1, started daemon 140347533465344)> alive
13802 <Thread(Thread-1, started daemon 140347533465344)> alive
13802 <Thread(Thread-1, started daemon 140347533465344)> alive
13802 <Thread(Thread-1, started daemon 140347533465344)> alive
13802 <Thread(Thread-1, started daemon 140347533465344)> alive
13802 <Thread(Thread-1, started daemon 140347533465344)> alive
13802 <Thread(Thread-1, started daemon 140347533465344)> alive
13802 <Thread(Thread-1, started daemon 140347533465344)> alive
13802 <Thread(Thread-1, started daemon 140347533465344)> alive
13802 <Thread(Thread-1, started daemon 140347533465344)> alive
[pid: 13802|app: 0|req: 2/2] 127.0.0.1 () {24 vars in 361 bytes} [Tue Jan  5 15:54:04 2016] GET / => generated 2 bytes in 10008 msecs (HTTP/1.1 200) 2 headers in 78 bytes (1 switches on core 0)
SIGINT/SIGQUIT received...killing workers...
worker 1 buried after 1 seconds
goodbye to uWSGI.

We can see that, without the flag, the new thread was created and ran fine, but it stopped as soon as the main thread finished handling the request. This happens because once the main thread acquired the GIL to finish the request, it did not release it (with the flag it does) before going into epoll_wait for the next request. So the second thread could not acquire the GIL until the second request arrived, and so on.

  1. The docs are correct. We definitely have to enable the enable-threads flag if we are going to create and use additional threads. Otherwise they will run, at most, only while the main thread is handling a request.
  2. It is not safe. At the very least the main thread uses the uWSGI API, and as the example shows, this also affects the other threads.
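For reference, the fix in config form could look like this. This is a sketch of a uwsgi.ini equivalent to the command line above; all options shown are standard uWSGI options:

```ini
[uwsgi]
; same setup as the command line above, as an ini file
plugins     = python
file        = app.py
callable    = app
master      = true
processes   = 1
http-socket = 0.0.0.0:8080
logto       = uwsgi.log
; initialize the GIL so that app-generated threads keep running
; even while the worker waits in epoll_wait between requests
enable-threads = true
```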

@xrmx
Collaborator

xrmx commented Jan 5, 2016

@mafanasev what do you think about adding your example / explanation to the doc so it would be clear for future readers?

@mafanasev
Author

If you think my explanation is correct, then it's a good idea to add the example / explanation to the docs.

@grantjenks

I just had this same experience / question, and this explanation helped a lot. I found it via Google; I couldn't find it in the docs.

@andyxning

I have tested this with uWSGI 2.0.12 and I have not seen anything unusual happen; it continues to work just fine.
Is there any simple explanation of this flag?

@2457908933

This explanation helped a lot.

alexmv added a commit to alexmv/zulip that referenced this issue Jan 21, 2022
Without this, uwsgi does not release the GIL before going back into
`epoll_wait` to wait for the next request.  This results in any
background threads languishing, unserviced.[1]

Practically, this results in Sentry background reporter threads timing
out when attempting to post results -- but only in situations with low
traffic, as in those significant time is spent in `epoll_wait`.  This
is seen in logs as:

    WARN [urllib3.connectionpool] Retrying (Retry(total=1, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError(SSLEOFError(8, 'EOF occurred in violation of protocol (_ssl.c:1131)'))': /api/123456789/envelope/

Or:

    WARN [urllib3.connectionpool] Retrying (Retry(total=0, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ProxyError('Cannot connect to proxy.', RemoteDisconnected('Remote end closed connection without response'))': /api/123456789/envelope/

Sentry attempts to detect this and warn, but due to startup ordering,
the warning is not printed without lazy-loading.

Enable threads, at a miniscule performance cost, in order to support
background workers like Sentry[2].

[1] unbit/uwsgi#1141 (comment)
[2] https://docs.sentry.io/clients/python/advanced/#a-note-on-uwsgi
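A minimal, uWSGI-independent sketch of the background-reporter pattern the commit message describes: a daemon thread drains a queue of events while the main thread goes about its business. The queue contents and the `reporter` function are illustrative, not Sentry's actual API; the point is that under uWSGI without --enable-threads, a worker thread like this stalls whenever the main thread sits in epoll_wait holding the GIL.

```python
import queue
import threading

events = queue.Queue()
delivered = []

def reporter():
    # Runs until it sees the None sentinel; each item stands in for an
    # envelope that a reporter thread would POST to a collector.
    while True:
        item = events.get()
        if item is None:
            break
        delivered.append(item)
        events.task_done()

worker = threading.Thread(target=reporter, daemon=True)
worker.start()

# The "main thread" enqueues events as it handles requests.
for i in range(3):
    events.put({"event": i})

events.put(None)       # ask the worker to exit
worker.join(timeout=5)
print(delivered)       # → [{'event': 0}, {'event': 1}, {'event': 2}]
```

In plain CPython this works because the main thread releases the GIL constantly; under uWSGI without enable-threads, the worker would only make progress while a request is being handled.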
alexmv added a commit to zulip/zulip that referenced this issue Jan 21, 2022
(same commit message as above)
alexmv added a commit to alexmv/zulip that referenced this issue Jan 21, 2022
(same commit message as above)
timabbott pushed a commit to zulip/zulip that referenced this issue Jan 21, 2022
(same commit message as above)
alexmv added a commit to alexmv/zulip that referenced this issue May 13, 2022
(same commit message as above)
timabbott pushed a commit to zulip/zulip that referenced this issue Jun 2, 2022
(same commit message as above)
No branches or pull requests

6 participants