Remove padding from device_memory_resource #1278
Conversation
See #865 (comment) for a summary of the decision here.
Do you have a plan for downstream testing before merging, @vyasr?
Thanks for the reminder, Mark. I've created test PRs:
I created an issue for cuSpatial as well. Assigning you, @vyasr: rapidsai/cuspatial#1164
Updated with a cuSpatial PR.
OK, we now have tests passing on all the testing PRs (cugraph's has an unrelated error blocking wheel builds). I think this is good to go now.
/merge |
Sorry for jumping in late. What about padding when using the arena allocator? It seems we need to modify this too: `rmm/include/rmm/mr/device/arena_memory_resource.hpp`, lines 137 to 144 at 0c08dd5.
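The referenced snippet isn't rendered here. As a rough, hypothetical illustration only (names and constants are assumptions, not the actual lines 137 to 144), the concern is that the arena resource also rounds requested sizes up to a fixed alignment, so a padding policy change in `device_memory_resource` may need a matching change there:

```cpp
// Hypothetical sketch -- NOT the actual arena_memory_resource code.
// Illustrates the kind of size alignment the comment above refers to.
#include <cstddef>

namespace sketch {

constexpr std::size_t allocation_alignment = 256;  // assumed arena alignment

constexpr std::size_t align_up(std::size_t value, std::size_t alignment)
{
  return (value + alignment - 1) / alignment * alignment;
}

void* do_allocate(std::size_t bytes /*, stream, upstream, ... */)
{
  // The requested size is padded up before the arena hands out a block.
  bytes = align_up(bytes, allocation_alignment);
  // ... carve `bytes` out of the arena's superblocks ...
  return nullptr;  // placeholder
}

}  // namespace sketch
```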
Could you please raise an issue?
Done. |
After `rmm` removed memory padding (rapidsai/rmm#1278), some cuIO code started to have out-of-bounds access issues because many of its compute kernels shift the input pointers back and forth to satisfy alignment requirements. This adds back padding to various memory buffers so they now have enough extra space for such shifting. With this fix, the reported issues (#13567, #13571, #13570) no longer show up.

Closes:
* #13567
* #13571
* #13570

Authors:
- Nghia Truong (https://github.com/ttnghia)

Approvers:
- David Wendt (https://github.com/davidwendt)
- Bradley Dice (https://github.com/bdice)
- Vyas Ramasubramani (https://github.com/vyasr)
- Vukasin Milovanovic (https://github.com/vuule)

URL: #13586
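To illustrate the failure mode being fixed, here is a generic sketch (the function and parameter names are invented for illustration, not cuIO code): a kernel that rounds a working pointer up to an alignment boundary can step past the end of an exactly-sized allocation unless the buffer is over-allocated by the alignment amount.

```cpp
// Generic illustration (assumed names, not cuIO's actual code): over-allocate a
// device buffer so that pointer shifting inside a kernel stays in bounds.
#include <rmm/cuda_stream_view.hpp>
#include <rmm/device_buffer.hpp>

#include <cstddef>

rmm::device_buffer make_padded_buffer(std::size_t payload_bytes,
                                      rmm::cuda_stream_view stream)
{
  constexpr std::size_t alignment = 8;  // alignment the kernels assume
  // Reserve extra bytes so a kernel may shift its working pointer up to the
  // next alignment boundary without reading or writing past the allocation.
  return rmm::device_buffer{payload_bytes + alignment, stream};
}
```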
Description
This PR removes the `align_up` call that enforces 8B padding on all RMM device memory allocations. Padding, where needed, should be the responsibility of callers, possibly with the help of a padding adapter; it should not be enforced by rmm.

I looked at all other calls to `align_up` in the library but didn't find any others that needed removing. The other calls all appear to guarantee specific, intentional alignment requirements of other memory resources or adapters. I would appreciate verification from reviewers, though.

I've labeled this PR as breaking because it could break consumers that were implicitly relying on the current padding behavior.
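For callers that relied on the old implicit behavior, one possible path (a hedged sketch, not a prescribed migration) is to wrap the upstream resource in `rmm::mr::aligned_resource_adaptor`. Note the adaptor aligns the pointers it returns rather than padding allocation sizes, so callers that depended specifically on the 8B size padding would instead round their requested sizes up themselves.

```cpp
// Sketch: opting back into stricter alignment after this change by wrapping
// the upstream resource in rmm's aligned_resource_adaptor.
#include <rmm/mr/device/aligned_resource_adaptor.hpp>
#include <rmm/mr/device/cuda_memory_resource.hpp>
#include <rmm/mr/device/per_device_resource.hpp>

int main()
{
  rmm::mr::cuda_memory_resource upstream;

  // Guarantee at least 4096-byte alignment (an arbitrary example value) on
  // allocations forwarded through this adaptor.
  rmm::mr::aligned_resource_adaptor<rmm::mr::cuda_memory_resource> aligned_mr{
    &upstream, 4096};

  rmm::mr::set_current_device_resource(&aligned_mr);

  // ... allocations made through the current device resource are now aligned ...

  // Restore the upstream resource before aligned_mr goes out of scope.
  rmm::mr::set_current_device_resource(&upstream);
  return 0;
}
```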
Resolves #865
Resolves #1156
Checklist