Checking for RDMA support before allocating via VMM in test suite #1179
Conversation
/ok to test aba3a17
leofang left a comment:
It occurs to me that none of us (Ben, Keith, myself) read the docs when getting the VMM PR merged. The docs make it clear that there is one device attribute we should check (as is typical of all major CUDA features, and as we did in the IPC mempool test helper).
https://docs.nvidia.com/cuda/cuda-c-programming-guide/#query-for-support
Translating this to cuda.core, we need to check:

dev = Device()
if not dev.properties.virtual_memory_management_supported:
    pytest.skip(...)
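To make that snippet self-contained, here is a minimal sketch of how the check could be wrapped as a reusable skip helper for the test suite; the helper name skip_if_vmm_unsupported is hypothetical and not part of this PR:

import pytest
from cuda.core.experimental import Device

def skip_if_vmm_unsupported():
    # Skip the calling test when the current device does not report
    # virtual memory management support.
    dev = Device()
    if not dev.properties.virtual_memory_management_supported:
        pytest.skip("device does not support virtual memory management")

A test would then call skip_if_vmm_unsupported() (directly or via a fixture) before allocating through the VMM resource.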
/ok to test ebc6818
As discussed in person, migrated the majority of the test suite's skip checks to use the device attribute check described above.
This PR addresses issues with GPU Direct RDMA support validation in the Virtual Memory Resource (VMM) allocator and improves test coverage for memory management functionality.
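As a rough illustration of the RDMA validation the PR title refers to, a hedged sketch of an analogous guard is shown below; the gpu_direct_rdma_supported property name is an assumption here and may differ from what cuda.core actually exposes:

import pytest
from cuda.core.experimental import Device

def skip_if_gpu_direct_rdma_unsupported():
    # Skip when the device does not advertise GPU Direct RDMA support.
    # The property name is assumed; getattr defaults to False if absent.
    dev = Device()
    if not getattr(dev.properties, "gpu_direct_rdma_supported", False):
        pytest.skip("device does not support GPU Direct RDMA")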