
Trouble with gunicorn BaseApplication and threading.Thread #2910

Closed
kmille opened this issue Dec 28, 2022 · 1 comment

kmille commented Dec 28, 2022

Hey,
I have some threading/multiprocessing issues when I run gunicorn from a Python file using BaseApplication. I extended your example application to reproduce the issue.

#!/usr/bin/env python
# -*- coding: utf-8 -*-
#
# An example of a standalone application using the internal API of Gunicorn.
#
#   $ python standalone_app.py
#
# This file is part of gunicorn released under the MIT license.
# See the NOTICE for more information.

import multiprocessing

import gunicorn.app.base

from queue import Queue
import threading
import atexit

q = Queue()
workers = []


@atexit.register
def stop_worker():
    for worker in workers:
        print("Trying to stop a worker")
        q.put_nowait(False)
        print(f"queue size: {q.qsize()}")
    for worker in workers:
        worker.join()
    print("All workers are finished")


class Worker(threading.Thread):

    def __init__(self, i, q):
        super().__init__()
        self.i = i
        self.q = q

    def run(self):
        while True:
            print(f"Worker {self.i} started")
            job = self.q.get()
            if not job:
                print(f"Worker {i} is done")
                return
            print(f"Got job {job}")


for i in range(3):
    w = Worker(i, q)
    w.start()
    workers.append(w)


def number_of_workers():
    #return (multiprocessing.cpu_count() * 2) + 1
    return 1


def handler_app(environ, start_response):
    response_body = b'Works fine'
    status = '200 OK'

    response_headers = [
        ('Content-Type', 'text/plain'),
    ]

    start_response(status, response_headers)

    return [response_body]


class StandaloneApplication(gunicorn.app.base.BaseApplication):

    def __init__(self, app, options=None):
        self.options = options or {}
        self.application = app
        super().__init__()

    def load_config(self):
        config = {key: value for key, value in self.options.items()
                  if key in self.cfg.settings and value is not None}
        for key, value in config.items():
            self.cfg.set(key.lower(), value)

    def load(self):
        return self.application


if __name__ == '__main__':
    options = {
        'bind': '%s:%s' % ('127.0.0.1', '8080'),
        'workers': number_of_workers(),
    }
    StandaloneApplication(handler_app, options).run()
(venv) kmille@linbox:gunicorn python examples/standalone_app.py
Worker 0 started
Worker 1 started
Worker 2 started
[2022-12-28 23:20:24 +0100] [304121] [INFO] Starting gunicorn 20.1.0
[2022-12-28 23:20:24 +0100] [304121] [INFO] Listening at: http://127.0.0.1:8080 (304121)
[2022-12-28 23:20:24 +0100] [304121] [INFO] Using worker: sync
[2022-12-28 23:20:24 +0100] [304125] [INFO] Booting worker with pid: 304125
^C[2022-12-28 23:20:25 +0100] [304121] [INFO] Handling signal: int
[2022-12-28 23:20:25 +0100] [304125] [INFO] Worker exiting (pid: 304125)
Trying to stop a worker
queue size: 1
Trying to stop a worker
queue size: 2
Trying to stop a worker
queue size: 3
All workers are finished
[2022-12-28 23:20:25 +0100] [304121] [INFO] Shutting down: Master
^C^C^C^C^C

What I expected to see: the application terminates cleanly. Instead, it hangs. In the "real" application I have a similar setup: a Flask backend puts something into the queue (q.qsize() increases), but the workers calling q.get() never receive anything. There is another behaviour I don't understand: if I increase the number of workers, stop_worker runs multiple times ("All workers are finished" is printed multiple times). I think I need more knowledge of the inner workings of gunicorn to solve this. It would be nice if you could help me.

UPDATE:
to be clearer: the put into the queue works, but the get does not. "Worker x is done" is never printed.

Thank you!
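
For context, here is a minimal sketch of the likely mechanism, independent of gunicorn (Unix-only, since it uses os.fork()): gunicorn's master pre-forks worker processes, a plain queue.Queue lives in one process's memory, and threads do not survive a fork. So a put() in a forked worker only changes that worker's copy of the queue, and the consumer threads, which exist only in the master, never see it. The same fork also explains the repeated "All workers are finished": the atexit handler is registered before the fork, so every worker process runs it again at exit.

import os
import threading
import time
from queue import Queue

q = Queue()


def consumer():
    print(f"consumer waiting in pid {os.getpid()}")
    print(f"consumer got: {q.get()}")


threading.Thread(target=consumer, daemon=True).start()

pid = os.fork()  # roughly what the gunicorn master does per worker
if pid == 0:
    # Child process: the consumer thread does not exist here (threads do
    # not survive fork), and this put() only changes the child's copy.
    q.put("from child")
    os._exit(0)

os.waitpid(pid, 0)
time.sleep(0.5)
print(f"parent queue size: {q.qsize()}")  # 0 -- the child's put never arrived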


kmille commented Mar 21, 2023

Fixed it by

  1. using super().__init__(daemon=True) instead of super().__init__()
  2. using from multiprocessing import Queue instead of from queue import Queue
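
A minimal sketch of the Worker class with both changes applied, assuming the rest of the example above stays the same: multiprocessing.Queue is backed by a pipe, so a put() in a forked gunicorn worker is visible to the consumer threads in the master, and daemon threads no longer block interpreter shutdown.

from multiprocessing import Queue  # process-safe: works across gunicorn's fork
import threading

q = Queue()


class Worker(threading.Thread):

    def __init__(self, i, q):
        # daemon=True lets the interpreter exit even while the thread
        # is still blocked in q.get()
        super().__init__(daemon=True)
        self.i = i
        self.q = q

    def run(self):
        while True:
            job = self.q.get()
            if not job:
                print(f"Worker {self.i} is done")
                return
            print(f"Got job {job}")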

kmille closed this as completed Mar 21, 2023