CUDA: Add support for the jit_module function #6515
Conversation
A question related to this PR: how would others feel about the following change?

```diff
diff --git a/numba/cuda/decorators.py b/numba/cuda/decorators.py
index 9923bf08c..267f02951 100644
--- a/numba/cuda/decorators.py
+++ b/numba/cuda/decorators.py
@@ -163,7 +163,7 @@ def convert_types(restype, argtypes):
     return restype, argtypes
 
 
-def jit_module(**kwargs):
+def jit_module(module=None, **kwargs):
     """ Automatically ``jit``-wraps functions defined in a Python module. By
     default, wrapped functions are treated as device functions rather than
     kernels - pass ``device=False`` to treat functions as kernels.
@@ -185,9 +185,10 @@ def jit_module(**kwargs):
     if 'device' not in kwargs:
         kwargs['device'] = True
 
-    # Get the module jit_module is being called from
-    frame = inspect.stack()[1]
-    module = inspect.getmodule(frame[0])
+    if module is None:
+        # Get the module jit_module is being called from
+        frame = inspect.stack()[1]
+        module = inspect.getmodule(frame[0])
 
     # Replace functions in module with jit-wrapped versions
     for name, obj in module.__dict__.items():
         if inspect.isfunction(obj) and inspect.getmodule(obj) == module:
```

With this, one could do:

```python
from numba import cuda

import user_functions

cuda.jit_module(user_functions)


@cuda.jit
def wrapper(x):
    i = cuda.grid(1)
    if i < len(x):
        x[i] = user_functions.computation(x[i])
```

where `user_functions.py` contains:

```python
import math


def computation(x):
    # Some arbitrary mathematical operation
    return math.cos(x) + x ** x
```

Here the user code itself needn't be aware of Numba at all, whereas presently (in this PR and in the CPU target) `jit_module` only acts on the module it is called from, so that module has to invoke it itself.
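For contrast, a sketch of what the current behaviour requires, based on the docstring in the diff above: the module must call `jit_module` itself at the end of the file, so it cannot stay Numba-agnostic.

```python
# user_functions.py -- under the current behaviour, the module itself
# must opt in to Numba by calling jit_module at the end of the file.
import math

from numba import cuda


def computation(x):
    # Some arbitrary mathematical operation
    return math.cos(x) + x ** x


# Wraps the functions defined above in this module as device functions.
cuda.jit_module()
```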
A review comment was left on this hunk:

```diff
@@ -7,3 +7,10 @@
+compile_ptx = None
+compile_ptx_for_current_device = None
+
+
+class DeviceFunctionTemplate:
```
Please add a comment explaining why it's an empty marker class.
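One way the suggestion might be addressed (a sketch only, assuming the class exists purely so that imports and `isinstance` checks succeed under the simulator; the wording is a guess, not the PR's actual code):

```python
# Stubs so that code importing these names still works when the
# simulator is active; PTX compilation is unavailable here.
compile_ptx = None
compile_ptx_for_current_device = None


class DeviceFunctionTemplate:
    """Empty marker class standing in for the real DeviceFunctionTemplate.

    The simulator never compiles device functions, but imports of and
    isinstance checks against DeviceFunctionTemplate must not fail, so
    an empty placeholder is provided under the same name.
    """
```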
This pull request is marked as stale as it has had no activity in the past 3 months. Please respond to this comment if you're still interested in working on this. Many thanks!

Still interested in this, just haven't got it to the top of my pile in a long while :-)

This pull request is marked as stale as it has had no activity in the past 3 months. Please respond to this comment if you're still interested in working on this. Many thanks!

I still intend to address this one day when it bubbles to the top of my queue.

This pull request is marked as stale as it has had no activity in the past 3 months. Please respond to this comment if you're still interested in working on this. Many thanks!
This PR adds the `jit_module` function to CUDA as `numba.cuda.jit_module`. It is slightly different from the `numba.jit_module` function in that it compiles functions as device functions by default, but it can be made to compile functions as kernels by passing `device=False`. This is handy for jitting some functions in a module as device functions and others as kernels, by calling it twice, and the example added to the documentation reflects this.
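The double-call pattern might look like the following sketch. The module and function names are illustrative, and it assumes, as with the CPU-target `jit_module`, that functions already wrapped by the first call are skipped by the second:

```python
# my_cuda_module.py -- illustrative module, not taken from the PR.
from numba import cuda


def device_helper(x):
    # Wrapped as a device function by the first jit_module call below.
    return x * 2


# Wrap everything defined so far as device functions (the default).
cuda.jit_module()


def kernel_entry(arr):
    # Wrapped as a kernel by the second call, thanks to device=False.
    i = cuda.grid(1)
    if i < len(arr):
        arr[i] = device_helper(arr[i])


# Wrap the remaining plain functions as kernels.
cuda.jit_module(device=False)
```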
Whilst writing tests I noticed that `FakeCUDAKernel` from the simulator stores the Python function as `fn` instead of `py_func`, which is a discrepancy between it and the real dispatcher. To make this more consistent, I've renamed `fn` to `py_func` and provided a property `fn` with a deprecation warning to support the old name. That said, it's unlikely this was ever used by anyone, as it's not advertised and is specific to the simulator only.
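A minimal sketch of the backwards-compatibility shim described above (the constructor signature and warning text are assumptions, not the PR's actual code):

```python
import warnings


class FakeCUDAKernel:
    def __init__(self, fn):
        # Stored under the same attribute name the real dispatcher uses.
        self.py_func = fn

    @property
    def fn(self):
        # Backwards-compatible alias for the old attribute name.
        warnings.warn('FakeCUDAKernel.fn is deprecated; use py_func instead',
                      category=DeprecationWarning)
        return self.py_func
```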