CUDA test: test_reinterpret_array_type is failing on NumPy 1.23 #8529

Closed
stuartarchibald opened this issue Oct 21, 2022 · 1 comment · Fixed by #8537
Labels: bug - failure to compile (Bugs: failed to compile valid code), bug - regression (A regression against a previous version of Numba), CUDA (CUDA related issue/PR)

@stuartarchibald (Contributor) commented:

Reporting a bug

  • I have tried using the latest released version of Numba (the most recent is
    visible in the change log: https://github.com/numba/numba/blob/main/CHANGE_LOG).
  • I have included a self-contained code sample to reproduce the problem,
    i.e. it's possible to run it as `python bug.py`.

To reproduce: install Numba (from the 0.56 series or current main) alongside NumPy >= 1.23, then run the test:

```
./runtests.py numba.cuda.tests.cudapy.test_array_methods.TestCudaArrayMethod.test_reinterpret_array_type
```

The failure is roughly:

```
<snip>
numba/np/arrayobj.py, line 2540, in array_view
numba/core/base.py, line 559, in get_function
numba/core/base.py, line 561, in get_function

NotImplementedError: No definition for lowering <built-in method impl of ....>(array(uint8, 1d, C), class(int32)) -> none
```

I suspect it's this that's causing the issue:

numba/np/arrayobj.py, lines 2559 to 2569 at f4b6fe0:

```python
if numpy_version >= (1, 23):
    # NumPy 1.23+ bans views using a dtype that is a different size to that
    # of the array when the last axis is not contiguous. For example, this
    # manifests at runtime when a dtype size altering view is requested
    # on a Fortran ordered array.
    tyctx = context.typing_context
    fnty = tyctx.resolve_value_type(_compatible_view)
    _compatible_view_sig = fnty.get_call_type(tyctx, (*sig.args,), {})
    impl = context.get_function(fnty, _compatible_view_sig)
    impl(builder, args)
```

The lowering for array.view is being "borrowed" by the CUDA target, but this branch for NumPy 1.23 resolves an @overload (following) to determine whether the view is compatible, and something goes wrong there. I would guess that either it refuses to compile on CUDA (which is probably correct) or the current compilation target gets mixed up, leading to a similar failure.

numba/np/arrayobj.py, lines 2498 to 2539 at f4b6fe0:

```python
@overload(_compatible_view)
def ol_compatible_view(a, dtype):
    """Determines if the array and dtype are compatible for forming a view."""
    # NOTE: NumPy 1.23+ uses this check.
    # Code based on:
    # https://github.com/numpy/numpy/blob/750ad21258cfc00663586d5a466e24f91b48edc7/numpy/core/src/multiarray/getset.c#L500-L555 # noqa: E501
    def impl(a, dtype):
        dtype_size = _intrin_get_itemsize(dtype)
        if dtype_size != a.itemsize:
            # catch forbidden cases
            if a.ndim == 0:
                msg1 = ("Changing the dtype of a 0d array is only supported "
                        "if the itemsize is unchanged")
                raise ValueError(msg1)
            else:
                # NumPy has a check here for subarray type conversion which
                # Numba doesn't support
                pass
            # Resize on last axis only
            axis = a.ndim - 1
            p1 = a.shape[axis] != 1
            p2 = a.size != 0
            p3 = a.strides[axis] != a.itemsize
            if (p1 and p2 and p3):
                msg2 = ("To change to a dtype of a different size, the last "
                        "axis must be contiguous")
                raise ValueError(msg2)
            if dtype_size < a.itemsize:
                if dtype_size == 0 or a.itemsize % dtype_size != 0:
                    msg3 = ("When changing to a smaller dtype, its size must "
                            "be a divisor of the size of original dtype")
                    raise ValueError(msg3)
            else:
                newdim = a.shape[axis] * a.itemsize
                if newdim % dtype_size != 0:
                    msg4 = ("When changing to a larger dtype, its size must be "
                            "a divisor of the total size in bytes of the last "
                            "axis of the array.")
                    raise ValueError(msg4)
    return impl
```
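For reference, the NumPy 1.23+ runtime behaviour that this check mirrors can be demonstrated in plain NumPy (a sketch; the exact error message text may vary between NumPy versions):

```python
import numpy as np

# Fortran-ordered 2D float64 array: the last axis is NOT contiguous
# (strides are (8, 32)), so a size-changing view is rejected on 1.23+.
a = np.zeros((4, 4), dtype=np.float64, order='F')
try:
    a.view(np.int32)
except ValueError as e:
    print("rejected:", e)

# On a contiguous array the same reinterpretation is fine:
# 8 float64 items (64 bytes) become 16 int32 items.
b = np.zeros(8, dtype=np.float64).view(np.int32)
print(b.shape)  # (16,)
```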

The same test runs fine on NumPy 1.22.

CC @gmarkall.

@stuartarchibald added the CUDA (CUDA related issue/PR), bug - regression (A regression against a previous version of Numba), and bug - failure to compile (Bugs: failed to compile valid code) labels on Oct 21, 2022.
@stuartarchibald (Contributor, Author) commented:
I think this will fix it:

```diff
diff --git a/numba/np/arrayobj.py b/numba/np/arrayobj.py
--- a/numba/np/arrayobj.py
+++ b/numba/np/arrayobj.py
@@ -2467,7 +2467,7 @@ def _compatible_view(a, dtype):
     pass
 
 
-@overload(_compatible_view)
+@overload(_compatible_view, target='generic')
 def ol_compatible_view(a, dtype):
     """Determines if the array and dtype are compatible for forming a view."""
     # NOTE: NumPy 1.23+ uses this check.
```
gmarkall added a commit to gmarkall/numba referencing this issue on Oct 25, 2022, with the message:

> This is to fix Issue numba#8529, where test_reinterpret_array_type fails
> on CUDA with NumPy 1.23 because this overload is needed. Making it
> accessible to the CUDA target should resolve the issue.