Add support for structured types in Device Arrays #7226
Conversation
I started leaving some comments on the diff, but then I realised that I don't think this works in general. The following:
```python
from numba import cuda
import numpy as np

recwithmat = np.dtype([('i', np.int32),
                       ('j', np.float32, (3, 3))])
recwithrecwithmat = np.dtype([('x', np.int32), ('y', recwithmat)])

arr = np.zeros(1, dtype=recwithrecwithmat)
d_arr = cuda.to_device(arr)

d_arr[0]['y']['i'] = 1
# Prints 0, expected 1
print(d_arr[0]['y']['i'])

d_arr[0]['y']['j'][0, 0] = 2
# Prints 0.0, expected 2.0
print(d_arr[0]['y']['j'][0, 0])

p_arr = cuda.pinned_array(1, dtype=recwithrecwithmat)

p_arr[0]['y']['i'] = 1
# Prints 1 as expected
print(p_arr[0]['y']['i'])

p_arr[0]['y']['j'][0, 0] = 2
# Prints 2.0 as expected
print(p_arr[0]['y']['j'][0, 0])
```
gives:

```
0
0.0
1
2.0
```

whereas I'd expect

```
1
2.0
1
2.0
```
under correct operation. Are the get and setitem implementations still incorrect, or am I misunderstanding what this PR fixes?
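The nested field layout involved can be inspected with NumPy alone; any correct get/setitem implementation for device arrays has to compute the same byte offsets that NumPy reports here. A host-only sketch (no CUDA required) using the dtypes from the repro above:

```python
import numpy as np

# Same nested dtype as in the repro above (packed, no align=True).
recwithmat = np.dtype([('i', np.int32),
                       ('j', np.float32, (3, 3))])
recwithrecwithmat = np.dtype([('x', np.int32), ('y', recwithmat)])

# dtype.fields maps each field name to (field_dtype, byte_offset).
# 'y' starts right after the 4-byte int32 'x'.
assert recwithrecwithmat.fields['x'][1] == 0
assert recwithrecwithmat.fields['y'][1] == 4

# Inside 'y', the (3, 3) float32 matrix 'j' starts after the int32 'i',
# so d_arr[0]['y']['j'] refers to bytes 8..44 of the record.
assert recwithmat.fields['i'][1] == 0
assert recwithmat.fields['j'][1] == 4

# Total record size: 4 (x) + 4 (i) + 9 * 4 (j) = 44 bytes.
assert recwithrecwithmat.itemsize == 44
```

A device-side setter that ignores these nested offsets would write to the wrong bytes (or to a throwaway host copy), which would explain reads coming back as zero.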
@testhound Is this ready for another review?

@gmarkall yes it is.
When I run the tests with:

```shell
python -m numba.runtests numba.cuda.tests -v -m
```

I get:
```
======================================================================
ERROR: test_host_alloc_mapped (numba.cuda.tests.cudadrv.test_host_alloc.TestHostAlloc)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/gmarkall/numbadev/numba/numba/cuda/tests/cudadrv/test_host_alloc.py", line 43, in test_host_alloc_mapped
    self.assertTrue(all(ary == 123))
  File "/home/gmarkall/numbadev/numba/numba/cuda/cudadrv/devices.py", line 224, in _require_cuda_context
    return fn(*args, **kws)
  File "/home/gmarkall/numbadev/numba/numba/cuda/cudadrv/devicearray.py", line 117, in __getitem__
    return self._do_getitem(item)
AttributeError: 'MappedNDArray' object has no attribute '_do_getitem'

======================================================================
ERROR: test_host_operators (numba.cuda.tests.cudadrv.test_host_alloc.TestHostAlloc)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/gmarkall/numbadev/numba/numba/cuda/tests/cudadrv/test_host_alloc.py", line 51, in test_host_operators
    ary[:] = range(10)
  File "/home/gmarkall/numbadev/numba/numba/cuda/cudadrv/devices.py", line 224, in _require_cuda_context
    return fn(*args, **kws)
  File "/home/gmarkall/numbadev/numba/numba/cuda/cudadrv/devicearray.py", line 126, in __setitem__
    return self._do_setitem(key, value)
AttributeError: 'MappedNDArray' object has no attribute '_do_setitem'

======================================================================
ERROR: test_managed_array_attach_global (numba.cuda.tests.cudadrv.test_managed_alloc.TestManagedAlloc)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/gmarkall/numbadev/numba/numba/cuda/tests/cudadrv/test_managed_alloc.py", line 112, in test_managed_array_attach_global
    self._test_managed_array()
  File "/home/gmarkall/numbadev/numba/numba/cuda/tests/cudadrv/test_managed_alloc.py", line 98, in _test_managed_array
    self.assertTrue(all(ary == 123.456))
  File "/home/gmarkall/numbadev/numba/numba/cuda/cudadrv/devices.py", line 224, in _require_cuda_context
    return fn(*args, **kws)
  File "/home/gmarkall/numbadev/numba/numba/cuda/cudadrv/devicearray.py", line 117, in __getitem__
    return self._do_getitem(item)
AttributeError: 'ManagedNDArray' object has no attribute '_do_getitem'

======================================================================
ERROR: test_managed_array_attach_host (numba.cuda.tests.cudadrv.test_managed_alloc.TestManagedAlloc)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/gmarkall/numbadev/numba/numba/cuda/tests/cudadrv/test_managed_alloc.py", line 115, in test_managed_array_attach_host
    self._test_managed_array()
  File "/home/gmarkall/numbadev/numba/numba/cuda/tests/cudadrv/test_managed_alloc.py", line 98, in _test_managed_array
    self.assertTrue(all(ary == 123.456))
  File "/home/gmarkall/numbadev/numba/numba/cuda/cudadrv/devices.py", line 224, in _require_cuda_context
    return fn(*args, **kws)
  File "/home/gmarkall/numbadev/numba/numba/cuda/cudadrv/devicearray.py", line 117, in __getitem__
    return self._do_getitem(item)
AttributeError: 'ManagedNDArray' object has no attribute '_do_getitem'

======================================================================
FAIL: test_structured_array1 (numba.cuda.tests.cudadrv.test_cuda_devicerecord.TestRecordDtypeWithStructArrays)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/gmarkall/numbadev/numba/numba/cuda/tests/cudadrv/test_cuda_devicerecord.py", line 147, in test_structured_array1
    self.assertEqual(ary[i]['d'], "%d" % x)
AssertionError: '1\x03\x00\x04' != '1'
----------------------------------------------------------------------
```
Are these passing on your system?
Many thanks for the changes! This is now looking much closer. I've tested with the simulator and with the hardware target, and all tests pass. I also tested with `compute-sanitizer`, and this doesn't introduce any new issues. There are some changes I'd suggest:

- I think it should be possible to remove the changes to `_numba_type_`, as suggested on the diff.
- The "subarray case" branch in `_do_setitem` appears to be unused or untested - does this branch need removing, or do additional tests need adding?
- In an earlier iteration of the PR, you had a test with a string dtype in the structured type, but it was later removed, I guess because it was failing. I think it was failing because the string was only partially set in the test case. Perhaps it could be reinstated with:
```diff
diff --git a/numba/cuda/tests/cudadrv/test_cuda_devicerecord.py b/numba/cuda/tests/cudadrv/test_cuda_devicerecord.py
index d10910446..e2acd34d7 100644
--- a/numba/cuda/tests/cudadrv/test_cuda_devicerecord.py
+++ b/numba/cuda/tests/cudadrv/test_cuda_devicerecord.py
@@ -7,12 +7,14 @@ from numba.cuda.testing import skip_on_cudasim
 from numba.np import numpy_support
 from numba import cuda

+N_CHARS = 5
+
 recordtype = np.dtype(
     [
         ('a', np.float64),
         ('b', np.int32),
         ('c', np.complex64),
-        ('d', np.int64)
+        ('d', (np.str_, N_CHARS))
     ],
     align=True
 )
@@ -135,7 +137,7 @@ class TestRecordDtypeWithStructArrays(CUDATestCase):
             ary[i]['a'] = x / 2
             ary[i]['b'] = x
             ary[i]['c'] = x * 1j
-            ary[i]['d'] = x * x
+            ary[i]['d'] = str(x) * N_CHARS

     def test_structured_array1(self):
         ary = self.sample1d
@@ -144,7 +146,7 @@ class TestRecordDtypeWithStructArrays(CUDATestCase):
             self.assertEqual(ary[i]['a'], x / 2)
             self.assertEqual(ary[i]['b'], x)
             self.assertEqual(ary[i]['c'], x * 1j)
-            self.assertEqual(ary[i]['d'], x * x)
+            self.assertEqual(ary[i]['d'], str(x) * N_CHARS)

     def test_structured_array2(self):
         ary = self.samplerec1darr
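The idea behind the suggested diff is that `str(x) * N_CHARS` occupies the entire fixed-width field, so a byte-for-byte device round-trip compares equal, whereas a shorter string leaves trailing bytes that must be zeroed correctly (the stale bytes in the `'1\x03\x00\x04' != '1'` failure above). The host-side behaviour can be checked with NumPy alone; names (`N_CHARS`, the `d` field) are taken from the diff:

```python
import numpy as np

N_CHARS = 5
dt = np.dtype([('d', (np.str_, N_CHARS))])
ary = np.zeros(3, dtype=dt)

for i in range(3):
    x = i + 1
    # Fills all N_CHARS characters of the fixed-width field, so an
    # exact byte copy round-trips to an equal string.
    ary[i]['d'] = str(x) * N_CHARS

assert ary[1]['d'] == '22222'

# A shorter string is stored null-padded; NumPy strips the trailing
# nulls on read, so on the host this still compares equal -- but only
# if the padding bytes really are zero, which is what the original
# device-side test was tripping over.
ary[0]['d'] = '1'
assert ary[0]['d'] == '1'
```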
Many thanks for the update! This tests OK for me locally with the hardware target, `compute-sanitizer`, and the simulator.
The previous suggestions are addressed, but there are a couple of new suggestions on the new changes - after resolving these I think this will be good from my perspective.
Many thanks for the update - this looks good to me!
@stuartarchibald Could this have a buildfarm run please?
@gmarkall and @stuartarchibald checking on the status of this PR from three weeks ago.

Buildfarm ID:

This passed on 2dbd81c.

Buildfarm ID:

Passed.
LGTM with the removed check.
Thanks for the patch @testhound, thanks for reviewing @gmarkall

Presume this closes #6114 or is there more to do with respect to this?

I've updated the description so that merging this PR will close that issue.

Thanks for confirming and doing the update.
This pull request addresses the lack of support for structured data types with CUDA device arrays, where attempting to set a structured type in a device array fails because there is no check for a DeviceRecord. The problem is outlined in the linked issue.

Fixes #6114.
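The actual fix lives in `numba.cuda.cudadrv.devicearray` and depends on CUDA, but the shape of the problem has a host-only analogue: indexing a structured array can hand back either a record scalar or an array view, and a `__setitem__` dispatcher has to branch on which one it received (a sketch with NumPy types standing in for `DeviceRecord`/`DeviceNDArray`):

```python
import numpy as np

rec_dtype = np.dtype([('a', np.float64), ('b', np.int32)])
arr = np.zeros(2, dtype=rec_dtype)

item = arr[0]   # a single record: an np.void scalar, not an ndarray
sub = arr[:1]   # a structured array view: an ndarray

# A setitem path must distinguish the two before deciding how to copy
# the data -- analogously, the CUDA fix adds a check for DeviceRecord
# instead of assuming every indexing result is a DeviceNDArray.
assert isinstance(item, np.void)
assert isinstance(sub, np.ndarray)
```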