Allow arrays larger than 4GB on GPUs #1638
Comments
The issue as reported is specific to Intel Extension for PyTorch and tracked in intel/intel-extension-for-pytorch#325. Let me check, though, whether oneDNN has any issues with buffers over 4 GB. Relevant links:
Looking at the user guide, there's a bit more required than adding an OpenCL flag. Hopefully the driver stack has evolved since the last time we tried this.
Is it possible to add new environment variables to oneDNN to enable allocations larger than 4 GB? Also, is allocation done via OpenCL or Level Zero?
@BA8F0D39, adding an OpenCL flag alone will not solve the problem, as OpenCL is only part of the oneDNN codebase. The main programming model is SYCL, and allocations are done via the SYCL API. I still need to find out what the story is here. Also, @rjoursler pointed out that there may be other issues related to buffers over 4 GB.
@BA8F0D39, please refer to the comments in intel/intel-extension-for-pytorch#325.
@vpirogov I was affected by the same issue, so I did some digging (and it took a long time), but I think I have the gist of the story here. The short answer is that there is currently no way to pass anything from SYCL down to Level Zero or OpenCL in terms of the flags mentioned here, so anything from Intel that only uses SYCL, rather than Level Zero or OpenCL directly, can't do it. But it does seem theoretically possible.

Say you want to use malloc in oneDNN for a graph, for example. That malloc appears to live in oneDNN/src/graph/utils/allocator.hpp and uses sycl::aligned_alloc_shared. The Khronos SYCL documentation specifies a property_list parameter for this function, so we can trace what happens to it as it goes through everything. The chain goes like this: malloc() -> llvm/sycl/source/detail/usm/usm_impl.cpp from aligned_alloc_shared() to alignedAlloc() to alignedAllocInternal() -> llvm/sycl/plugins/unified_runtime/pi2ur.hpp at piextUSMSharedAlloc() -> llvm/sycl/plugins/unified_runtime/ur/adapters/level_zero/usm.cpp from urUSMSharedAlloc() to, finally, USMSharedAllocImpl(). Note that the last two calls in the chain depend on the backend; I chose to follow the Level Zero backend. The property list is read in urUSMSharedAlloc(), and USMSharedAllocImpl() does get to use it after it is read; in this example you can see a read-only flag being consumed in exactly this fashion. So it should be possible.

However, the issue is that there is no provision for passing any of the over-4GB flags, so there is no way today to get the over-4GB behavior you want through SYCL. The same applies to other calls such as the non-CPU-specific sycl::aligned_alloc_device, which is used for GPUs, FPGAs, etc. and does the same thing with the equivalent OpenCL backend; this is what affects Intel Extension for PyTorch, which is where I am also hit by this issue. This seems to be the core problem. I'm not sure what kind of standardization or changes would be needed so that an over-4GB flag survives and gets passed down this chain of calls, and it looks like a much bigger issue than the scope of this project, unfortunately. I will open a corresponding bug report in Intel's LLVM repository to address it directly, since I think that is where the change needs to happen first and foremost. Getting a fix to propagate everywhere will probably take a while for everything to actually align. I hope it gets prioritized, but I can understand why that will be difficult. I also hope that when any downstream changes land, the appropriate changes can be made in oneDNN to fix this.
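To make the mechanism concrete, here is a minimal sketch (not from the thread) of how a sycl::property_list rides along with a USM allocation today. The device_read_only property is a DPC++ extension used purely as an example of a property that already survives the chain described above; the over-4GB property in the commented-out line is hypothetical and does not exist in SYCL or the DPC++ extensions.

```cpp
// Sketch: the SYCL USM allocation entry points accept a sycl::property_list,
// which is the only hook that could carry an "allow >4GB allocations" request
// down to the Level Zero / OpenCL backends.
#include <sycl/sycl.hpp>
#include <cstdio>

int main() {
    sycl::queue q{sycl::gpu_selector_v};

    // Existing behavior: a property the runtime forwards to the backend.
    void *p = sycl::aligned_alloc_shared(
        64, 1ull << 20, q,
        sycl::property_list{sycl::ext::oneapi::property::usm::device_read_only{}});

    // Hypothetical: something like the property below would be needed for the
    // >4GB case, but no such property is currently defined anywhere.
    // sycl::property_list{ext::intel::property::usm::allow_large_allocations{}}

    std::printf("allocated %p\n", p);
    sycl::free(p, q);
    return 0;
}
```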
Thanks for sharing, @simonlui. From our investigation it looks like the issue is the lack of hardware support for 64-bit integer arithmetic in the current generation of GPUs. This makes working with buffers exceeding 4 GB impractical from a performance perspective and complicated from a software implementation perspective.
Hi @vpirogov, can you confirm this applies to a single allocation, not to memory usage in aggregate? It's unfortunate to hear of this restriction, but it doesn't make much sense that a way to opt out would be offered if the limitation were fundamental. Is it more that it's not practical to support inside the compute runtime/driver?
Right, the 4 GB limit applies only to the size of a single buffer. You can still use all the memory on the GPU as long as no single allocation exceeds 4 GB. The 'opt-out' is available in the driver and the OpenCL compiler, but it comes with a non-trivial performance impact and non-production-quality status. Additionally, oneDNN has its own code generator for performance-critical functions (like matmul or convolution), which does not have int64 address math emulation.
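As a concrete illustration of that per-allocation limit, the sketch below (not code from the thread; chunk size and count are illustrative) shows how an application can still occupy most of a 16 GB card by chunking its data so that no single USM allocation crosses 4 GB.

```cpp
// Sketch: total footprint may exceed 4 GB as long as each individual
// allocation stays below the per-buffer cap.
#include <sycl/sycl.hpp>
#include <vector>
#include <cstddef>

int main() {
    sycl::queue q{sycl::gpu_selector_v};

    const std::size_t chunk_bytes = 2ull << 30;  // 2 GB per allocation, below the 4 GB cap
    const std::size_t num_chunks  = 6;           // ~12 GB total on a 16 GB Arc A770

    std::vector<void*> chunks;
    for (std::size_t i = 0; i < num_chunks; ++i) {
        void *p = sycl::malloc_device(chunk_bytes, q);
        if (!p) break;                           // stop if the driver refuses the allocation
        chunks.push_back(p);
    }

    // ... launch kernels that index within each chunk separately; 32-bit
    // offsets suffice because no single chunk exceeds 4 GB ...

    for (void *p : chunks) sycl::free(p, q);
    return 0;
}
```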
@vpirogov Given that IPEX issue #325 is still under discussion, this issue should not be closed prematurely. Nowadays, some large language models already have parameter sizes exceeding 4 GB. There are cases where a model or an individual sequence is larger than 4 GB, making it impossible to split it into parts smaller than 4 GB and then recombine them on the GPU. This point has also been raised in other relevant discussions, so implementing a unified memory abstraction or a unified shared memory address space on the software side may be necessary.
Summary
Allocating an array larger than 4GB on Intel Arc A770 16GB crashes or gives garbage results.
Allocating an array larger than 4GB on Intel CPUs is perfectly fine.
Version
Expected behavior
Example of allocating less than 4GB on A770 16GB. The mean is around 0.5, which is expected.
Example of allocating more than 4GB on CPU. The mean is around 0.5, which is expected.
Example of allocating more than 4GB on A770 16GB. The mean is around 0.014, which is completely wrong.
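The reporter's reproduction code is not included in this thread (it was presumably PyTorch tensors on the 'xpu' device), but a SYCL-level approximation of the failing case might look like the sketch below: fill a >4 GB device allocation with a known value and read back the mean, which the thread reports comes back as garbage once the buffer crosses 4 GB.

```cpp
// Hedged reproduction sketch, not the reporter's original code: every element
// is set to 0.5, so the mean should be exactly 0.5 if all writes and reads
// past the 4 GB boundary land correctly.
#include <sycl/sycl.hpp>
#include <cstdio>

int main() {
    sycl::queue q{sycl::gpu_selector_v};

    const std::size_t n = (5ull << 30) / sizeof(float);   // ~5 GB of floats
    float  *data = sycl::malloc_device<float>(n, q);
    double *sum  = sycl::malloc_shared<double>(1, q);
    *sum = 0.0;

    // Fill every element with 0.5, including the region past the 4 GB boundary.
    q.fill(data, 0.5f, n).wait();

    // Sum all elements on the device and derive the mean.
    q.parallel_for(sycl::range<1>{n},
                   sycl::reduction(sum, sycl::plus<double>()),
                   [=](sycl::id<1> i, auto &acc) { acc += data[i]; })
        .wait();

    std::printf("mean = %f (expected 0.5)\n", *sum / static_cast<double>(n));

    sycl::free(data, q);
    sycl::free(sum, q);
    return 0;
}
```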
In conclusion, allocating more than 4GB crashes or returns complete garbage.