ggml : split graph allocations according to backend max buffer size #15815
Conversation
* if the graph requires more memory than can fit into a single allocation, split it into multiple backend buffers
* vulkan: report the actual max allocation size in buffer type interface
I don't understand the change yet; what you describe is how it was already working, at least as I understood it. The graph allocator merges as many tensors into one allocation as possible, as long as it stays below the backend's max allocation size. We use the suballocation size in the Vulkan backend to reduce allocation sizes for performance reasons, where possible. If a single tensor requires more than the actual max allocation size, currently it will try to allocate it anyway and the driver will usually respond with an exception. I don't think you are addressing this issue (and I don't think that's really possible from the GGML side).
@0cc4m It was working like that for allocating weights, but the allocator for the compute buffers always ignored the backend max size: it batched all tensors into one large buffer and tried to allocate it, failing for Vulkan if it's >4 GB. See e.g. #11332. It's uncommon to hit that limitation with LLMs, I think; they have huge weights but relatively small computation. For images (and video) it becomes an issue as soon as you increase the resolution a little. A single tensor beyond the maximum allocation size is not possible, no change there. The reason "suballocation size" gets in the way here is that all allocations are mapped out first, before any actual backend allocation is attempted. The algorithm needs to know the actual maximum here, not a "soft" maximum. I'd also argue that in this case you don't want to artificially reduce batching, as it will increase the total memory required due to increased fragmentation (it becomes harder to reuse memory of previous computations). I'm sure we can find a way to reintroduce the soft max for weight allocation though; I just wasn't sure why it was there exactly and how big of a difference it makes.
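To illustrate the mechanism under discussion, here is a minimal sketch (not the PR's actual code) of satisfying one large compute requirement with several backend buffers, each capped at the maximum reported through the public ggml-backend API; `alloc_in_chunks` is a hypothetical helper:

```c
#include "ggml-backend.h"

// Hypothetical helper: satisfy a total size requirement with several backend
// buffers, each no larger than the backend's reported maximum buffer size.
static int alloc_in_chunks(ggml_backend_buffer_type_t buft, size_t total,
                           ggml_backend_buffer_t chunks[], int max_chunks) {
    size_t max_size = ggml_backend_buft_get_max_size(buft); // e.g. ~4 GB on Vulkan
    int n = 0;
    while (total > 0 && n < max_chunks) {
        size_t size = total < max_size ? total : max_size;
        chunks[n] = ggml_backend_buft_alloc_buffer(buft, size);
        if (chunks[n] == NULL) {
            return -1; // backend could not satisfy the allocation
        }
        total -= size;
        n++;
    }
    return total == 0 ? n : -1;
}
```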
I understand, thank you for the explanation. But we do need to keep that suballocation size.
Okay, I read some of #11520, #12434 and related issues... in summary, smaller buffers help with host-visible memory and driver issues. I see 2 options:

Option 2 is nicer, I guess, since it can also avoid those allocation problems for the compute buffers, but it increases the complexity of the allocator. I'll wait a bit to see if there are more opinions before implementing something.
@Acly Sorry for the delay! I'm fine with both options; either is an improvement to the problems we're having with allocation sizes, but I'm not too familiar with the allocation code. Any opinions, @jeffbolznv @slaren @ggerganov?
track size of individual chunks in ggml_dyn_tallocr and raise max chunks. revert to use suballocation_block_size as max chunk size for vulkan.
@0cc4m No worries, I didn't get around to working on it until yesterday, and I also expect it needs some more eyes first. I tried option 2 and think it's generally an improvement, so I'm pushing it on this branch. I reverted the change in the Vulkan backend; the 1 GB maximum will now be used for both weights and compute allocations. It may cost a bit more memory when the compute size is >1 GB, but it avoids the compatibility issues. A single compute tensor >1 GB works too, with similar behavior to the weights allocation (attempt to allocate anyway and let the backend return an error). Going to run a few more tests on this state when I get around to it.
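A tiny sketch of the oversized-tensor fallback described above, with an invented helper name: chunks are normally capped at the reported maximum (1 GB on Vulkan here), but a single tensor larger than that gets a dedicated, larger chunk and the allocation is simply attempted, leaving it to the backend to return an error:

```c
// Hypothetical helper: never split one tensor across chunks; if the tensor
// itself exceeds the cap, grow the chunk and let the backend accept or reject.
static size_t pick_chunk_size(size_t tensor_size, size_t max_chunk_size) {
    return tensor_size > max_chunk_size ? tensor_size : max_chunk_size;
}
```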
From my side this is fine, of course. Your test passes with the Vulkan backend on my hardware.
@slaren Could you take a look?
Just tested 44d3ee4 (RX 7600 XT and Ryzen 5 3400G, radv and amdvlk, Linux 6.12). Seems to be working fine, and it fixed leejet/stable-diffusion.cpp#791 for me.
track (chunk, offset) pairs instead of "global" offsets through gallocr.
* simpler, don't need loops to map between local/global offsets
* touches more code
fix memory leak when buffers are reused due to same buffer type appearing multiple times
* make vbuffer allocation follow the same logic as backend_buffer did before
continue to use leftover unallocated space of previous chunks after a new one has been created
treat free blocks of each chunk as separate list
* they're still allocated together, but start/end of each chunk is tracked, and allocate/free iterate over sub-ranges
* exhaust freed blocks of all chunks before considering their last blocks with unallocated space
* start with 0 chunks/blocks and create chunks as needed
* allow the last chunk to grow beyond max size
allocate chunks individually with a separate free-blocks list for each one
* needs a bit more memory/allocations/indirections, but code is simpler
Force-pushed from 40232e7 to ed69280
Any idea why the CI build is failing?
Not sure, I couldn't reproduce it either so far. I'll try with a matching GCC version.
Looks like it was an issue with ccache; it passed after deleting the cache.
ggml : split graph allocations according to backend max buffer size (ggml-org#15815)

* ggml : make gallocr respect the backend's max buffer size
  * if the graph requires more memory than can fit into a single allocation, split it into multiple backend buffers
  * vulkan: report the actual max allocation size in buffer type interface
* fix missing newline, apple-clang warning
* track size of individual chunks in ggml_dyn_tallocr and raise max chunks. revert to use suballocation_block_size as max chunk size for vulkan.
* track (chunk, offset) pairs instead of "global" offsets through gallocr.
  * simpler, don't need loops to map between local/global offsets
  * touches more code
* fix dyn_tallocr_max_size and initialization
* fix memory leak when buffers are reused due to same buffer type appearing multiple times
  * make vbuffer allocation follow the same logic as backend_buffer did before
* continue to use leftover unallocated space of previous chunks after a new one has been created
* treat free blocks of each chunk as separate list
  * they're still allocated together, but start/end of each chunk is tracked, and allocate/free iterate over sub-ranges
  * exhaust freed blocks of all chunks before considering their last blocks with unallocated space
  * start with 0 chunks/blocks and create chunks as needed
  * allow the last chunk to grow beyond max size
* refactor: move adding new free block and new chunk into separate functions
* allocate chunks individually with a separate free-blocks list for each one
  * needs a bit more memory/allocations/indirections, but code is simpler
* fix warnings (missing static) & debug checks
Revert "ggml : split graph allocations according to backend max buffer size (ggml-org#15815)" This reverts commit f2a789e.
-    size_t cur_size = galloc->buffers[i] ? ggml_backend_buffer_get_size(galloc->buffers[i]) : 0;
+    size_t cur_size = galloc->buffers[i] ? ggml_vbuffer_size(galloc->buffers[i]) : 0;
This may be the cause of #16383. It is necessary to check each sub-buffer to determine if it needs to be reallocated, not just the overall size.
I think you're right that this can be an issue; it's something I missed after changing chunks to have individual sizes.
It would still have to coincidentally end up with the same size; I can try to reproduce if that's what is happening.
(edit: nvm, it just needs a chunk to be larger while the total size stays the same or smaller, which is not so unlikely)
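A sketch of the per-chunk check being suggested; the function and parameter names here are assumptions for illustration, not the actual gallocr code:

```c
#include <stdbool.h>
#include <stddef.h>

// Compare each chunk of the existing vbuffer against the newly computed chunk
// sizes instead of only comparing totals: a reallocation is needed as soon as
// any single chunk has to grow, even if the overall size stayed the same.
static bool vbuffer_needs_realloc(const size_t * cur_chunk_sizes, int cur_n,
                                  const size_t * new_chunk_sizes, int new_n) {
    if (new_n > cur_n) {
        return true; // more chunks required than currently allocated
    }
    for (int i = 0; i < new_n; i++) {
        if (new_chunk_sizes[i] > cur_chunk_sizes[i]) {
            return true; // this chunk grew; a total-size check would miss it
        }
    }
    return false;
}
```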
This PR makes ggml_gallocr distribute allocations to multiple backend buffers depending on the maximum allocation size reported by the backend. This allows e.g. the Vulkan backend to process graphs which require >4 GB of memory. I tried to avoid risk and minimize changes/complexity.

Implementation:
* ggml_gallocr: almost no changes here, it continues to operate with contiguous offsets in [0, SIZE_MAX). Instead of using ggml_backend_buffer directly it now uses vbuffer.
* vbuffer: small local abstraction which distributes a virtual memory range to one or more backend buffers ("chunks").
* ggml_dyn_tallocr: is now aware of the backend's maximum buffer size, to ensure tensors are not allocated across multiple chunks. This is done by setting the size of the last free_block to the maximum buffer size, and introducing a new block at the end of the range when additional memory is required.
* Vulkan: modified to report the actual maximum allocation size. This will change how weights are allocated. I'm not sure how important it is to keep the previous behavior there; happy to discuss alternatives.
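To make the free_block mechanism concrete, here is a minimal sketch under invented names (not ggml's actual code) of a chunked first-fit allocator: each chunk starts as one free block of the maximum buffer size, and a tensor that fits in no existing chunk opens a new one, so no tensor ever spans two backend buffers. Alignment, block merging on free, and bounds checks are omitted:

```c
#include <stdlib.h>

struct free_block { size_t offset, size; };

struct chunk {
    struct free_block free_blocks[64];
    int    n_free;
    size_t used;            // high-water mark, becomes the chunk's buffer size
};

struct dyn_tallocr {
    struct chunk * chunks[16];
    int    n_chunks;
    size_t max_chunk_size;  // backend's maximum buffer size
};

static struct chunk * add_chunk(struct dyn_tallocr * a) {
    struct chunk * c = calloc(1, sizeof(*c));
    // a new chunk starts as a single free block covering the whole range
    c->free_blocks[0] = (struct free_block){ 0, a->max_chunk_size };
    c->n_free = 1;
    a->chunks[a->n_chunks++] = c;
    return c;
}

// first-fit over every chunk's own free list; returns the chunk index and
// writes the offset inside that chunk, mirroring the (chunk, offset) pairs
// that gallocr tracks in this PR
static int alloc_tensor(struct dyn_tallocr * a, size_t size, size_t * offset) {
    for (int i = 0; i < a->n_chunks; i++) {
        struct chunk * c = a->chunks[i];
        for (int j = 0; j < c->n_free; j++) {
            struct free_block * b = &c->free_blocks[j];
            if (b->size >= size) {
                *offset    = b->offset;
                b->offset += size;
                b->size   -= size;
                if (*offset + size > c->used) {
                    c->used = *offset + size;
                }
                return i;
            }
        }
    }
    // no chunk can hold this tensor: open a new one instead of splitting it
    struct chunk * c = add_chunk(a);
    if (size >= c->free_blocks[0].size) {
        // oversized tensor: let the chunk grow beyond the max size; the later
        // backend allocation may fail, matching the behavior described above
        c->n_free = 0;
    } else {
        c->free_blocks[0].offset = size;
        c->free_blocks[0].size  -= size;
    }
    c->used = size;
    *offset = 0;
    return a->n_chunks - 1;
}
```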