multiprocessing deadlock on Mac OS X when queue collected before process terminates #51449
This code:

```python
import multiprocessing
import queue

def _process_worker(q):
    while True:
        try:
            something = q.get(block=True, timeout=0.1)
        except queue.Empty:
            return
        else:
            pass
            # print('Grabbed item from queue:', something)

def _make_some_processes(q):
    processes = []
    for _ in range(10):
        p = multiprocessing.Process(target=_process_worker, args=(q,))
        p.start()
        processes.append(p)
    return processes

#p = []
def _do(i):
    print('Run:', i)
    q = multiprocessing.Queue()
    # p.append(q)
    print('Created queue')
    for j in range(30):
        q.put(i*30+j)
    processes = _make_some_processes(q)
    print('Created processes')
    while not q.empty():
        pass
    print('Q is empty')

for i in range(100):
    _do(i)
```

produces this output on Mac OS X (it produces the expected output on other platforms):

```
Run: 0
...
```

and then hangs. Changing the code as follows:

```diff
+p = []
 def _do(i):
     ...
+    p.append(q)
```

i.e. uncommenting the two commented-out lines so each queue stays referenced from a module-level list, fixes the deadlock. So it looks like a multiprocessing.Queue that is garbage collected before the processes using it have terminated can deadlock.
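Another way to avoid depending on the queue's implicit lifetime is to shut it down explicitly before it goes out of scope, using the queue's documented `close()` / `join_thread()` methods. A minimal sketch; the `drain_and_close` helper is hypothetical, not part of the report or of multiprocessing's API:

```python
import multiprocessing

def drain_and_close(q):
    # Hypothetical helper: empty the queue, then shut down its feeder
    # thread deterministically instead of waiting for garbage collection.
    while not q.empty():
        q.get()
    q.close()        # no further puts; flush buffered items to the pipe
    q.join_thread()  # block until the background feeder thread exits

q = multiprocessing.Queue()
for j in range(30):
    q.put(j)
items = [q.get() for _ in range(30)]
drain_and_close(q)
print(items[:3])  # → [0, 1, 2]
```

The point is that `close()` and `join_thread()` make teardown happen at a moment the caller chooses, rather than at whatever moment the garbage collector reclaims the queue.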
Queue uses multiprocessing.util.Finalize, which relies on weakrefs to detect when the object goes out of scope, so this is actually expected behavior. IMHO it is not a very good approach, but changing the API to require explicit close methods is a little late at this point, I guess.
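To illustrate the mechanism this comment describes — cleanup triggered by object lifetime rather than by an explicit call — here is a sketch using the stdlib `weakref.finalize`, the public analogue of the internal `multiprocessing.util.Finalize`; the `Resource` class and `_cleanup` callback are hypothetical stand-ins:

```python
import weakref

class Resource:
    """Hypothetical stand-in for an object such as multiprocessing.Queue."""

released = []

def _cleanup(name):
    # Runs when the Resource becomes unreachable (or at interpreter
    # shutdown), not at a point the caller explicitly chose.
    released.append(name)

r = Resource()
finalizer = weakref.finalize(r, _cleanup, 'queue-buffer')

del r  # under CPython's reference counting, cleanup fires immediately
print(released)  # → ['queue-buffer']
```

This is why letting `q` go out of scope in `_do` triggers teardown at an unpredictable point relative to the still-running workers, while keeping a reference in a module-level list defers it.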
OK, working as intended.
This issue was marked as "not a bug" by the OP a while back, but for whatever reason it was never also marked as closed. Going ahead and closing it now.