ispc backend failing on large ranges #2109
It looks like the CUDA backend has similar but not identical problems to the ISPC backend:

```
$ env CFLAGS="-I$CUDA_HOME/include -O -L$CUDA_HOME/lib64" futhark-aarch64 cuda badness.fut && echo 30 | ./badness
Warning: device compute capability is 9.0, but newest supported by Futhark is 8.7.
[100i64, 101i64, 102i64, 103i64, 104i64, 105i64, 106i64, 107i64, 108i64, 109i64, 110i64]
$ env CFLAGS="-I$CUDA_HOME/include -O -L$CUDA_HOME/lib64" futhark-aarch64 cuda badness.fut && echo 31 | ./badness
Warning: device compute capability is 9.0, but newest supported by Futhark is 8.7.
[100i64, 101i64, 102i64, 103i64, 104i64, 105i64, 106i64, 107i64, 108i64, 109i64, 110i64]
$ env CFLAGS="-I$CUDA_HOME/include -O -L$CUDA_HOME/lib64" futhark-aarch64 cuda badness.fut && echo 32 | ./badness
Warning: device compute capability is 9.0, but newest supported by Futhark is 8.7.
./badness: badness.c:7153: CUDA call
  cuCtxSynchronize()
failed with error code 700 (an illegal memory access was encountered)
```

This is on early Grace Hopper hardware (ARM CPU + H100 GPU).
Well! That is not great. And on the AMD RX7900 at home, running your program is a very efficient way to shut down the graphical display. Fortunately, I don't think this is difficult to fix. It's probably an artifact of the old 32-bit size handling, which still lurks in some calculations in the code generator, but the foundations are 64-bit clean. I will take a look.
The OpenCL backend works, so it's likely a problem in the single-pass scan, which is used for the CUDA and HIP backends. The multicore backend also works, so the ISPC error is due to something ISPC-specific.
For the GPU backends, this might actually just be an OOM error. Filtering is surprisingly memory expensive. With the GPU backends, n=29 requires 12 GiB of memory. Presumably, n=30 would require 24 GiB, n=31 48 GiB, and n=32 96 GiB - the latter beyond even what an H100 possesses. It's just a coincidence that this is somewhat close to the 32-bit barrier. The reason for the memory usage is as follows.

The mask array is fused with the scan producing the offset array, and so doesn't take any memory. I suppose there is no reason for the output array to be so large, however - I think it is only because our

We should handle GPU OOM better. This has been on my list for a while, but the GPU APIs make it surprisingly difficult to do robustly. The ISPC error is probably a real 32-bit issue, however, and I think I remember why: ISPC is very slow when asked to do 64-bit index arithmetic, so we reasoned nobody would want to use it with such large arrays anyway.
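To make the memory cost concrete, here is a rough mask/scan/scatter sketch of a filter - illustrative only, not the compiler's actual lowering, and `filter_sketch` is just a made-up name - showing which intermediates are full-size:

```futhark
-- Rough mask/scan/scatter sketch of a filter (illustrative only, not
-- the compiler's actual lowering).  With n = 2**29, every full-size
-- i64 array below is 2**29 * 8 bytes = 4 GiB.
def filter_sketch [n] (p: i64 -> bool) (xs: [n]i64) : []i64 =
  let flags   = map (\x -> if p x then 1i64 else 0) xs      -- mask; fused into the scan
  let offsets = scan (+) 0 flags                             -- full-size offset array
  let m       = if n == 0 then 0 else offsets[n-1]           -- number of survivors
  let idxs    = map2 (\f o -> if f == 1 then o - 1 else -1) flags offsets
  let out     = scatter (replicate n 0i64) idxs xs           -- full-size output, shortened afterwards
  in take m out
```

At n=29 each full-size i64 intermediate is 2**29 * 8 bytes = 4 GiB, so a handful of such arrays accounts for the 12 GiB figure, and each increment of n doubles all of them, giving the 24/48/96 GiB progression.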
I didn't realize that a literal range would still require full-size array construction. Thanks for explaining where all the memory is going. I just checked, and my Grace Hopper node has 80 GiB of GPU memory and 512 GiB of CPU memory. I assume this implies that Futhark is not using Unified Memory. That would be a nice option for the CUDA backend if you're looking for more work. 😀 I've heard that the penalty for GPUs accessing CPU memory is a lot lower on Grace Hopper than on previous systems, but I haven't yet tested that myself.
I am in fact considering just enabling unified memory unconditionally on the CUDA backend. I did some experiments recently, and it doesn't seem to cause any overhead for the cases where you stay within GPU memory anyway (and it lets you finish execution when you don't).
@athas Would you add unified memory to the HIP backend as well, then?
If it performs similarly, I don't see why not. But it would also be easy to make a configuration option indicating which kind of memory you prefer, as the allocation APIs are pretty much identical either way. In the longer term, this would also make some operations more efficient (such as directly indexing Futhark arrays from the host), but just making oversize allocations possible would be an easy start.
Unified memory is now enabled by default on CUDA (if the device claims to support it). Not on HIP, because it seems to cause slowdowns sometimes. |
Here's a reproducer, `badness.fut`:
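This is not the actual badness.fut, but a sketch of the kind of program the thread describes: read an exponent n from stdin, filter a 2**n-element range, and return the handful of survivors. The predicate and the 100..110 bounds are guesses based on the output shown in the comments above.

```futhark
-- Hypothetical reconstruction, not the original badness.fut: read an
-- exponent n, build a 2**n-element range, and keep the values in
-- [100, 110], matching the output shown in the comments above.
def main (n: i64) : []i64 =
  filter (\i -> i >= 100 && i <= 110) (iota (2**n))
```

Whatever the exact predicate, anything of this shape forces full-size intermediate arrays for the range and the filter, which is what the memory arithmetic above is about.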
This works as expected with the C backend:
The ISPC backend is not so happy, though:
Here's what I'm running: