Support zero-length arrays in numba.cuda #6050

Open
eric-wieser opened this issue Jul 29, 2020 · 8 comments
Labels
CUDA CUDA related issue/PR feature_request good first issue A good issue for a first time contributor

Comments

@eric-wieser
Contributor

As an example,

@numba.cuda.jit
def foo():
    x = numba.cuda.local.array(shape=(2, 0), dtype=numba.int64)

foo()

gives ValueError: array length <= 0.
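For comparison (not part of the original report), host-side NumPy accepts zero-length shapes without complaint, so supporting them in `numba.cuda` would match host semantics:

```python
import numpy as np

# A zero-length array has 0 elements but a well-defined
# shape, dtype, and strides on the host side.
x = np.empty((2, 0), dtype=np.int64)
print(x.shape)  # (2, 0)
print(x.size)   # 0
```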

Currently these lines contain a workaround because of this issue:

if ndim == 0:
    # the (2, ndim) allocation below is not yet supported, so avoid it
    @cuda.jit
    def kernel(lhs, rhs):
        lhs[()] = rhs[()]
    return kernel

@cuda.jit
def kernel(lhs, rhs):
    location = cuda.grid(1)
    n_elements = 1
    for i in range(lhs.ndim):
        n_elements *= lhs.shape[i]
    if location >= n_elements:
        # bake n_elements into the kernel, better than passing it in
        # as another argument.
        return
    # [0, :] is the to-index (into `lhs`)
    # [1, :] is the from-index (into `rhs`)
    idx = cuda.local.array(
        shape=(2, ndim),
        dtype=types.int64)
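The excerpt above computes a flat element count and then unravels each thread's flat index into a multi-dimensional coordinate. The same index arithmetic can be sketched in plain host-side Python (illustrative only, `unravel` is a hypothetical helper, not part of the linked source):

```python
# Sketch of the per-thread index logic: compute the total element
# count, then unravel a flat thread index into a C-order coordinate.
def unravel(location, shape):
    n_elements = 1
    for extent in shape:
        n_elements *= extent
    if location >= n_elements:
        return None  # thread out of range, mirrors the early `return`
    idx = []
    for extent in reversed(shape):
        idx.append(location % extent)
        location //= extent
    return tuple(reversed(idx))

print(unravel(5, (2, 3)))  # (1, 2)
print(unravel(0, (2, 0)))  # None: a zero-length array has 0 elements
```

Note that for a zero-length shape the element count is 0, so every thread takes the early return and the `(2, ndim)` allocation would never be used at runtime; the error arises at compile time.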

@gmarkall gmarkall added CUDA CUDA related issue/PR feature_request good first issue A good issue for a first time contributor labels Jul 29, 2020
@gmarkall
Member

Thanks for the report / request!

@gmarkall
Member

See also #5234

@eric-wieser
Contributor Author

#5234 is about arrays with shape `()` (i.e. `len(shape) == 0`); this issue is about arrays with shape `(0,)`, `(0, n)`, etc. (i.e. `0 in shape`).
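The distinction can be seen with host-side NumPy (a quick illustration, not part of the original comment):

```python
import numpy as np

# shape () -> a 0-d array: len(shape) == 0, exactly one element (#5234)
scalar_like = np.empty((), dtype=np.int64)
assert scalar_like.ndim == 0 and scalar_like.size == 1

# shape (0,) or (0, n) -> zero-length arrays: 0 in shape (this issue)
empty_1d = np.empty((0,), dtype=np.int64)
empty_2d = np.empty((0, 3), dtype=np.int64)
assert empty_1d.size == 0 and empty_2d.size == 0
```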

@gmarkall
Member

Yes, I don't think they're duplicates, I was just trying to gather together a few shape-related issues in case someone is looking at fixing up things in these areas.

@eric-wieser
Contributor Author

eric-wieser commented Aug 17, 2020

I think I already fixed the other one :)

@JuanFML

JuanFML commented Dec 2, 2020

Hello everyone! I am new here and would like to work on this as my first issue. Would that be alright?

@AruparnaMaity

Hi @eric-wieser, @gmarkall, is this issue still open? Can I contribute in any way?

@gmarkall
Member

@AruparnaMaity If you can see a way to fix it, you're welcome to open a PR with the fix. Note that, as per https://numba.discourse.group/t/rfc-moving-the-cuda-target-to-a-new-package-maintained-by-nvidia/2628, new PRs should go to https://github.com/nvidia/numba-cuda.

Many thanks in advance!
