Attempt to run Python Ops with numba #327
Turns out the parallel code also works in a cfunc (I said it might not previously):

```python
import numba
import ctypes
import numpy as np


@numba.jit(parallel=True, fastmath=True)
def run_in_parallel(x):
    for i in numba.prange(len(x)):
        x[i] = np.exp(x[i])


signature = numba.void(
    numba.types.int64,
    numba.types.CPointer(numba.types.float64),
)


@numba.cfunc(signature)
def wrapper(n, data):
    x = numba.carray(data, (n,))
    run_in_parallel(x)


wrapper.compile()
# Print the LLVM code
# print(wrapper.inspect_llvm())
print("Raw pointer", wrapper.address)

x = np.random.randn(100000)
# Call the raw pointer through ctypes
wrapper.ctypes(ctypes.c_int64(len(x)), x.ctypes.data_as(wrapper.ctypes.argtypes[1]))
```
Very cool!
I'm getting the following error for that example:

```
TypingError: Failed in nopython mode pipeline (step: nopython frontend)
Invalid use of type(CPUDispatcher(<function run_in_parallel at 0x7fe17845acb0>)) with parameters (array(float64, 1d, C))
During: resolving callee type: type(CPUDispatcher(<function run_in_parallel at 0x7fe17845acb0>))
During: typing of call at <ipython-input-9-ff470fe7f58d> (16)

File "<ipython-input-9-ff470fe7f58d>", line 16:
def wrapper(n, data):
    <source elided>
    x = numba.carray(data, (n,))
    run_in_parallel(x)
    ^
```
I forgot an import of numpy (fixed above).
Ha, yeah, that error message is extremely misleading!
@aseyboldt wrote a numba linker: https://nbviewer.jupyter.org/gist/aseyboldt/fb673e17ea5aca7a75d80f2211d0cf8a
Closed
After an interesting discussion with @brandonwillard, he mentioned that we could try running the Python implementations of `Op`s, which live in the `.perform()` method, through `numba.jit` as a way to auto-compile them for added speed. If this compilation fails, we can always fall back to the regular Python implementation. This would probably be done in the `make_thunk` code.

Even better would be if we then provided C-level access to the numba-compiled function so that it interplays nicely with our other `COp`s. @aseyboldt mentioned that getting a C pointer to a numba-compiled function should be possible. A quick Google search turned this up: https://numba.pydata.org/numba-doc/dev/user/cfunc.html and this: http://numba.pydata.org/numba-doc/0.8/interface_c.html#using-numba-functions-in-external-code

Related to pymc-devs/pytensor#312, which argues for using Cython instead of Numba for a similar idea.
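The jit-with-fallback idea could be sketched roughly like this. This is a hypothetical illustration, not the actual `make_thunk` code; `maybe_jit` and `perform` are made-up names, and the real integration would wrap the `Op`'s `perform` method rather than a free function.

```python
import numpy as np

try:
    import numba
    HAVE_NUMBA = True
except ImportError:
    HAVE_NUMBA = False


def maybe_jit(py_impl):
    """Return a thunk that tries the numba-compiled version of py_impl
    and permanently falls back to plain Python if compilation fails."""
    if not HAVE_NUMBA:
        return py_impl
    jitted = numba.njit(py_impl)

    def thunk(*args):
        nonlocal jitted
        if jitted is None:
            return py_impl(*args)
        try:
            # njit compiles lazily, so typing errors surface on the
            # first call; catch them here and disable the fast path.
            return jitted(*args)
        except Exception:
            jitted = None
            return py_impl(*args)

    return thunk


# A toy perform-style implementation.
def perform(x):
    return np.exp(x)


fast_perform = maybe_jit(perform)
print(fast_perform(np.zeros(3)))  # same result whether or not numba compiled it
```

The broad `except Exception` is deliberate for a sketch: numba's typing failures raise library-specific error classes, and the fallback should be robust to any of them.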