# multiprocessing.SyncManager connection hang #60118
create.py:

```python
import multiprocessing

manager = multiprocessing.Manager()
namespace = manager.Namespace()
print("create.py complete")
```

run.py:

```python
import create
print("run.py complete")
```

Correct behaviour occurs for create.py:

```
$ python3 create.py
create.py complete
```

INCORRECT behaviour occurs for run.py:

```
$ python3 run.py
```

No output, because it hangs. On SIGINT:

```
^CTraceback (most recent call last):
  File "run.py", line 1, in <module>
    import create
  File "[...]/create.py", line 7, in <module>
    test()
  File "[...]/create.py", line 5, in test
    namespace = manager.Namespace()
  File "/Library/Frameworks/Python.framework/Versions/3.2/lib/python3.2/multiprocessing/managers.py", line 670, in temp
    token, exp = self._create(typeid, *args, **kwds)
  File "/Library/Frameworks/Python.framework/Versions/3.2/lib/python3.2/multiprocessing/managers.py", line 568, in _create
    conn = self._Client(self._address, authkey=self._authkey)
  File "/Library/Frameworks/Python.framework/Versions/3.2/lib/python3.2/multiprocessing/connection.py", line 175, in Client
    answer_challenge(c, authkey)
  File "/Library/Frameworks/Python.framework/Versions/3.2/lib/python3.2/multiprocessing/connection.py", line 412, in answer_challenge
    message = connection.recv_bytes(256)         # reject large message
KeyboardInterrupt
```

```
$ python3
Python 3.2.3 (v3.2.3:3d0686d90f55, Apr 10 2012, 11:25:50)
[GCC 4.2.1 (Apple Inc. build 5666) (dot 3)] on darwin
```

This appears to be a duplicate of this *closed* bug: http://bugs.python.org/issue7474. That issue was closed because nobody could reproduce the behaviour on Python 3. I have reproduced it, but I don't know how to reopen that bug, so I'm filing this one. The test case in 7474 also fails for me.

---
I get the same hang on Linux with Python 3.2. For Windows the documentation does warn against starting a process as a side effect of importing a module. There is no explicit warning for Unix, but I would still consider it bad form to do such things as a side effect of importing a module.

It appears that it is the import of the hmac module inside deliver_challenge() that is hanging. I expect forking a process while an import is in progress may cause the import machinery (which I am not familiar with) to be in an inconsistent state. The import lock should have been reset automatically after the fork, but maybe that is not enough. Maybe the fact that the import is being done by a non-main thread is relevant.

I would suggest just rewriting the code as

create.py:

```python
import multiprocessing

def main():
    manager = multiprocessing.Manager()
    namespace = manager.Namespace()
    print("create.py complete")

if __name__ == '__main__':
    main()
```

run.py:

```python
import create

create.main()
print("run.py complete")
```

---
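To make the difference concrete, here is a small self-contained sketch (editor's addition; the temporary `create.py` it writes is hypothetical and only illustrates the guard pattern suggested above): running the guarded module directly executes `main()`, while importing it has no side effects at all.

```python
import os
import subprocess
import sys
import tempfile
import textwrap

# A guarded module in the style suggested in the comment above.
module_src = textwrap.dedent("""
    def main():
        print("create.py complete")

    if __name__ == '__main__':
        main()
""")

with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, "create.py")
    with open(path, "w") as f:
        f.write(module_src)

    # Running the file directly executes main() ...
    out_run = subprocess.run([sys.executable, path],
                             capture_output=True, text=True).stdout
    # ... but importing it prints nothing and starts nothing.
    out_imp = subprocess.run([sys.executable, "-c", "import create"],
                             cwd=d, capture_output=True, text=True).stdout

    print(repr(out_run), repr(out_imp))
```

With the guard in place, `run.py` decides explicitly when the work happens by calling `create.main()`.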
Here is a reproduction without using multiprocessing:

create.py:

```python
import threading, os

def foo():
    print("Trying import")
    import sys
    print("Import successful")

pid = os.fork()
if pid == 0:
    try:
        t = threading.Thread(target=foo)
        t.start()
        t.join()
    finally:
        os._exit(0)

os.waitpid(pid, 0)
print("create.py complete")
```

run.py:

```python
import create
print("run.py complete")
```

Using python2.7 and python3.3 this works as expected, but with python3.2 I get

```
user@mint-vm /tmp $ python3 create.py
Trying import
Import successful
create.py complete
user@mint-vm /tmp $ python3 run.py
Trying import
<Hang>
^CTraceback (most recent call last):
  File "run.py", line 1, in <module>
    import create
  File "/tmp/create.py", line 17, in <module>
    os.waitpid(pid, 0)
KeyboardInterrupt
```

---
Python 3.2 has extra code in `_PyImport_ReInitLock()` which means that when a fork happens as a side effect of an import, the main thread of the forked process owns the import lock. Therefore other threads in the forked process cannot import anything.

```c
void
_PyImport_ReInitLock(void)
{
    if (import_lock != NULL)
        import_lock = PyThread_allocate_lock();
    if (import_lock_level > 1) {
        /* Forked as a side effect of import */
        long me = PyThread_get_thread_ident();
        PyThread_acquire_lock(import_lock, 0);
        /* XXX: can the previous line fail? */
        import_lock_thread = me;
        import_lock_level--;
    } else {
        import_lock_thread = -1;
        import_lock_level = 0;
    }
}
```

I think the reason this code is not triggered in Python 3.3 is the introduction of per-module import locks.

---
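The deadlock mechanism described above can be simulated without forking at all (editor's addition). The sketch below is a *model*, not the real import machinery: a plain `threading.Lock` stands in for the global import lock, the main thread holding it stands in for the post-fork state left by `_PyImport_ReInitLock()`, and the worker thread stands in for the child's `import sys`.

```python
import threading

# Model of the Python 3.2 post-fork state: the (simulated) global import
# lock is owned by the main thread, so a worker thread that needs it in
# order to import blocks indefinitely.
import_lock = threading.Lock()
results = []

def worker():
    # Stands in for "import sys" in the forked child's thread: the import
    # machinery must take the global import lock before doing anything.
    with import_lock:
        results.append("imported")

import_lock.acquire()            # _PyImport_ReInitLock left the lock held
t = threading.Thread(target=worker)
t.start()
t.join(timeout=0.5)              # give the worker a chance to (not) run
blocked_while_held = (results == [])

# Releasing the lock models what 3.3's per-module locks achieve: the
# import is no longer serialized behind a lock the main thread owns.
import_lock.release()
t.join()
print(blocked_while_held, results)
```

While the main thread owns the lock the worker makes no progress; once the lock is released the "import" completes immediately.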
It looks like the problem was caused by the fix for

I think the usage this was intended to enable is evil, since one of the forked processes should always be terminated with os._exit().

---
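The fork discipline recommended here can be sketched as follows (editor's addition; Unix-only). The child does its work and leaves via `os._exit()`, so it never falls through into the parent's code path and never runs cleanup that belongs to the parent (atexit hooks, buffered stdio flushes):

```python
import os
import sys

pid = os.fork()
if pid == 0:
    # Child process: do the work, then exit immediately. os._exit()
    # bypasses atexit handlers and interpreter cleanup on purpose.
    try:
        sys.stdout.write("child done\n")
        sys.stdout.flush()
    finally:
        os._exit(0)

# Parent process: reap the child so it does not linger as a zombie.
_, status = os.waitpid(pid, 0)
print("child exit status:", os.waitstatus_to_exitcode(status))
```

The `try`/`finally` around the child's work guarantees `os._exit(0)` runs even if the work raises, so exactly one process ever continues past the fork.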
Adding Brett Cannon as this issue appears to really be about doing an import shortly after an os.fork() -- this may be of particular interest to him. This issue probably should have had Brett and/or others added to nosy long ago.

---
I'm adding Nick to see if he has anything to add, since he was the one who worked on the change that Richard said caused the problem. But in my opinion this is in the same realm as importing as a side effect of spawning a thread: don't do it.

---
3.2 is in security-fix only mode, so nothing's going to change there. For 3.3+, the per-module import lock design means the issue doesn't happen. However, I wonder if there may be some now dead code relating to the global import lock that could be deleted. |
Dead code deletion should be a separate issue, so I'm going to close this as fixed. |