ENH: support parallel compilation of extensions #5161
Conversation
Force-pushed from 044992f to 1dd69e7
Nathaniel J. Smith: The file says "Created: 11 January 2003"; subprocess was first shipped in
numpy/distutils/misc_util.py
Don't we name most of these NPY_*? No idea to be honest.
right, strides and separate start with NPY_, will change
the fortran compile loop has this comment:
which might mean it can't be parallelized as easily, but then the loop is over
oh never mind, it's a double loop to remove the nondeterministic dictionary ordering; I guess one can still parallelize f77 without extra dependency handling
@juliantaylor Is this still in progress?
hm, it seems to be working well for me; just need to fix the env variable, then it could be merged. I'd like to parallelize Fortran too, but I don't know what the comment about ordering is about. Do you know what is meant?
I have no idea. I'm thinking of going through the f2py tickets and maybe I'll learn something in the process.
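For context on the "double loop" remark: a minimal sketch (with hypothetical file names, not numpy.distutils code) of how iterating the original source list, rather than a dict of results, restores a deterministic object order:

```python
# Hypothetical per-file compile results collected in a dict, whose iteration
# order was nondeterministic on the CPython versions of the time.
sources = ['a.f', 'b.f', 'c.f']                        # original, stable order
objects = {'b.f': 'b.o', 'c.f': 'c.o', 'a.f': 'a.o'}   # arbitrary dict order

# A second loop over `sources` (not over the dict) yields a stable order,
# so the link step sees the same object list on every build.
ordered = [objects[src] for src in sources]
print(ordered)  # ['a.o', 'b.o', 'c.o']
```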
Force-pushed from e078139 to de7c59e
updated, now also supports Fortran 77, but it also got a bit more hacky to support the argument for the build_ext and build_clib targets too. Ideally we'd pass the jobs to the compiler, but that might break custom compiler classes.
Force-pushed from 3e1dd2a to 59830e6
no negative reply on the mailing list; any more comments?
Might as well put this in a tuple rather than use `\`.
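A toy illustration of the suggestion (the extension strings are hypothetical, not the actual code under review), contrasting a backslash-continued comparison chain with a tuple membership test:

```python
ext = '.cxx'

# Style being replaced: comparisons split across lines with a backslash
is_cpp = ext == '.cc' or ext == '.cpp' \
    or ext == '.cxx'

# Reviewer's suggestion: collect the candidates in a tuple instead
is_cpp_tuple = ext in ('.cc', '.cpp', '.cxx')

print(is_cpp, is_cpp_tuple)  # True True
```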
LGTM, although this is not something I know much about.
numpy/distutils/ccompiler.py
A for loop might be more straightforward here.
but the symmetry! :)
Maybe I'm an old fuddy-duddy ;) I find it strange to build a list just to consume an iterator. OTOH, it might be the new Python 3 fashion.
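The style debate in concrete form (with an illustrative stand-in for the compile call): on Python 3, `map` returns a lazy iterator, so running it for side effects requires wrapping it in `list`, whereas a plain for loop does not:

```python
compiled = []

def compile_one(src):
    # stand-in for invoking the compiler on one source file
    compiled.append(src.replace('.c', '.o'))

sources = ['multiarray.c', 'umath.c']

# Building a list just to consume the iterator:
list(map(compile_one, sources))

# The straightforward for loop the reviewer prefers:
for src in sources:
    compile_one(src)

print(compiled)  # ['multiarray.o', 'umath.o', 'multiarray.o', 'umath.o']
```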
Allow extensions using numpy.distutils to compile in parallel. By passing `--jobs=n` or `-j n` to `setup.py build` the compilation of extensions is now performed in `n` parallel processes. Additionally the environment variable NPY_NUM_BUILD_JOBS is used as the default value; if it's unset the default is serial compilation. The parallelization is limited to within the files of an extension, so only numpy multiarraymodule really profits, but it's still a nice improvement when you have 2-4 cores. Unfortunately Cython will not profit at all, as it tends to build one module per file.
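A hedged sketch of the mechanism described above; `_compile` and the file names are illustrative stand-ins, not numpy.distutils internals. Parallelism is per-file within one extension, and the job count defaults to serial when NPY_NUM_BUILD_JOBS is unset:

```python
import os
from multiprocessing.pool import ThreadPool

def _compile(src):
    # stand-in for one C compiler invocation on a single source file
    return src.replace('.c', '.o')

sources = ['multiarray.c', 'umath.c', 'scalarmath.c']
jobs = int(os.environ.get('NPY_NUM_BUILD_JOBS', '1'))  # serial by default

if jobs > 1:
    # compile the files of this one extension in parallel worker threads
    with ThreadPool(jobs) as pool:
        objects = pool.map(_compile, sources)
else:
    objects = [_compile(s) for s in sources]

print(objects)  # ['multiarray.o', 'umath.o', 'scalarmath.o']
```

`pool.map` preserves input order, so the object list fed to the linker is the same regardless of the job count.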
Force-pushed from 59830e6 to 461bf42
thanks for looking, merging it.
ENH: support parallel compilation of extensions

Allow extensions using numpy.distutils to compile in parallel.
By passing `--jobs=n` or `-j n` to `setup.py build` the compilation of
extensions is now performed in `n` parallel processes.
Additionally the environment variable NPY_NUM_BUILD_JOBS is used as
the default value; if it's unset the default is serial compilation.

The parallelization is limited to within the files of an extension, so
only numpy multiarraymodule really profits, but it's still a nice
improvement when you have 2-4 cores.
Unfortunately Cython will not profit at all, as it tends to build one
module per file.

Currently only CCompiler is adapted, but adding Fortran and C++ should be
straightforward.