
Handle cythonize(..., nthreads=1) for "spawn" start method #3262

Open
mbargull opened this issue Dec 6, 2019 · 1 comment · May be fixed by #3263
Comments


mbargull commented Dec 6, 2019

Configuration

Observed behavior

When

  • Cython.Build.cythonize(..., nthreads=1) is called
  • from a typical setup.py script
  • that does not guard its execution with the if __name__ == '__main__' idiom
  • and multiprocessing's start method is set to "spawn",

then

  • starting a new process will re-run the script's initial code up until the next cythonize/multiprocessing.Pool call,
  • which will then repeatedly fail with a
    RuntimeError:
            An attempt has been made to start a new process before the
            current process has finished its bootstrapping phase.
    
            This probably means that you are not using fork to start your
            child processes and you have forgotten to use the proper idiom
            in the main module:
    
                if __name__ == '__main__':
                    freeze_support()
                    ...
    
            The "freeze_support()" line can be omitted if the program
            is not going to be frozen to produce an executable.
    
  • practically ad infinitum due to result.get(99999).
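
For illustration, a minimal setup.py of the unguarded kind described above might look like the following; the extension name and metadata are hypothetical, and only the missing `if __name__ == '__main__':` guard plus the nthreads argument matter:

```
# Hypothetical unguarded setup.py; "example.pyx" and the metadata are
# placeholders. Under the "spawn" start method each worker re-imports this
# module (as __mp_main__), re-executes the setup()/cythonize() call, and
# fails multiprocessing's bootstrapping check.
from setuptools import setup
from Cython.Build import cythonize

setup(
    name="example",
    ext_modules=cythonize("example.pyx", nthreads=1),
)
```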

Small equivalent reproducer

$ cat test.py 
import sys
import multiprocessing
if sys.argv[1] != 'guard' or __name__ == '__main__':
    multiprocessing.set_start_method(sys.argv[2], force=True)
    with multiprocessing.Pool(1) as pool:
        pool.map_async(id, [0], chunksize=1).get(float(sys.argv[3]))
$ python test.py guard fork 1
$ python test.py no-guard fork 1
$ python test.py guard spawn 1
$ python test.py no-guard spawn 1
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/opt/conda/lib/python3.7/multiprocessing/spawn.py", line 105, in spawn_main
    exitcode = _main(fd)
  File "/opt/conda/lib/python3.7/multiprocessing/spawn.py", line 114, in _main
    prepare(preparation_data)
  File "/opt/conda/lib/python3.7/multiprocessing/spawn.py", line 225, in prepare
    _fixup_main_from_path(data['init_main_from_path'])
  File "/opt/conda/lib/python3.7/multiprocessing/spawn.py", line 277, in _fixup_main_from_path
    run_name="__mp_main__")
  File "/opt/conda/lib/python3.7/runpy.py", line 263, in run_path
    pkg_name=pkg_name, script_name=fname)
  File "/opt/conda/lib/python3.7/runpy.py", line 96, in _run_module_code
    mod_name, mod_spec, pkg_name, script_name)
  File "/opt/conda/lib/python3.7/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/home/conda/z/test.py", line 5, in <module>
    with multiprocessing.Pool(1) as pool:
  File "/opt/conda/lib/python3.7/multiprocessing/context.py", line 119, in Pool
    context=self.get_context())
  File "/opt/conda/lib/python3.7/multiprocessing/pool.py", line 176, in __init__
    self._repopulate_pool()
  File "/opt/conda/lib/python3.7/multiprocessing/pool.py", line 241, in _repopulate_pool
    w.start()
  File "/opt/conda/lib/python3.7/multiprocessing/process.py", line 112, in start
    self._popen = self._Popen(self)
  File "/opt/conda/lib/python3.7/multiprocessing/context.py", line 284, in _Popen
    return Popen(process_obj)
  File "/opt/conda/lib/python3.7/multiprocessing/popen_spawn_posix.py", line 32, in __init__
    super().__init__(process_obj)
  File "/opt/conda/lib/python3.7/multiprocessing/popen_fork.py", line 20, in __init__
    self._launch(process_obj)
  File "/opt/conda/lib/python3.7/multiprocessing/popen_spawn_posix.py", line 42, in _launch
    prep_data = spawn.get_preparation_data(process_obj._name)
  File "/opt/conda/lib/python3.7/multiprocessing/spawn.py", line 143, in get_preparation_data
    _check_not_importing_main()
  File "/opt/conda/lib/python3.7/multiprocessing/spawn.py", line 136, in _check_not_importing_main
    is not going to be frozen to produce an executable.''')
RuntimeError: 
        An attempt has been made to start a new process before the
        current process has finished its bootstrapping phase.

        This probably means that you are not using fork to start your
        child processes and you have forgotten to use the proper idiom
        in the main module:

            if __name__ == '__main__':
                freeze_support()
                ...

        The "freeze_support()" line can be omitted if the program
        is not going to be frozen to produce an executable.
[... the same traceback is printed twice more, once for each additional spawned child that fails in turn ...]
Traceback (most recent call last):
  File "test.py", line 6, in <module>
    pool.map_async(id, [0], chunksize=1).get(float(sys.argv[3]))
  File "/opt/conda/lib/python3.7/multiprocessing/pool.py", line 653, in get
    raise TimeoutError
multiprocessing.context.TimeoutError

Possible solutions/workarounds:

  1. Disable parallel processing if multiprocessing.get_start_method() == 'spawn' and display a warning.
  2. Raise an error if nthreads is non-zero and multiprocessing.get_start_method() == 'spawn'.
  3. Try launching a test process and disable parallel processing only if that does not work, e.g.:
    if nthreads and multiprocessing.get_start_method() == 'spawn':
        # Probe: spawn a trivial child process that just exits successfully.
        p = multiprocessing.Process(target=sys.exit, args=(0,))
        p.start()
        p.join(1)
        # With an unguarded setup.py the child fails during bootstrapping,
        # so a non-zero exit code means parallel processing has to be disabled.
        if p.exitcode != 0:
            print('Some warning', file=sys.stderr)
            nthreads = 0
    
  4. Forcefully set multiprocessing.set_start_method('fork', force=True).

Caveats:
re. 1.: If a script/module properly guards execution with the if __name__ == '__main__': idiom, there would be no need to disable multiprocessing (see the sketch after this list). Many (most?) setup.py scripts don't, though.
re. 2.: Cython already only warns and continues with serial processing if nthreads is non-zero but multiprocessing isn't available, so raising an error here would be inconsistent with that behavior.
re. 3.: If a script, e.g., setup.py, already makes persistent changes (e.g., adding release information to files), those changes would be applied again when the probe process re-imports the script, which is most likely undesired behavior.
re. 4.: This won't work on Windows, and forcing the start method might be unexpected/undesired for the user.
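
For comparison, a setup.py that applies the guard from caveat 1 sidesteps the issue entirely; a minimal sketch, again with made-up extension name and metadata:

```
# With the guard, "spawn" can safely re-import this file in worker
# processes: the re-import runs under __mp_main__, so setup()/cythonize()
# is not executed again and no nested Pool is created.
from setuptools import setup
from Cython.Build import cythonize

if __name__ == '__main__':
    setup(
        name="example",
        ext_modules=cythonize("example.pyx", nthreads=1),
    )
```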

I favor approach 1 as it is the "least breaking" change. "Least breaking" in the sense that scripts passing nthreads that work on macOS with Python up to 3.7 would continue to work, albeit processed serially instead of in parallel.
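
A minimal sketch of what approach 1 could look like at the point where cythonize() decides whether to create its multiprocessing.Pool; the helper name and warning text below are made up and are not Cython's actual internals:

```
import sys
import multiprocessing

def _maybe_disable_parallelism(nthreads):
    # "spawn" re-imports the calling script in every worker process, which
    # breaks unguarded setup.py files, so fall back to serial compilation.
    if nthreads and multiprocessing.get_start_method() == 'spawn':
        print("Cython: multiprocessing start method is 'spawn'; compiling "
              "serially. Guard your setup.py with `if __name__ == '__main__':` "
              "to enable parallel builds.", file=sys.stderr)
        return 0
    return nthreads
```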

j-luo93 added a commit to djwyen/sound-law-benchmark that referenced this issue Nov 6, 2020
samster25 added a commit to Eventual-Inc/Daft that referenced this issue Sep 22, 2022
* Fixes an issue that appears when a multithreaded Cython build is set; the following error appears:
```RuntimeError:
        An attempt has been made to start a new process before the
        current process has finished its bootstrapping phase.

        This probably means that you are not using fork to start your
        child processes and you have forgotten to use the proper idiom
```
* Related to cython/cython#3262
@Tomasz-Kluczkowski

This is still an issue with Python 3.10.4, 3.11.1, and 3.11.4 (tested those as I had virtual envs ready) on macOS 13.5.1 (22G90).
