
Conversation

@rwgk (Collaborator) commented Nov 19, 2025

This appears to be a copy-paste mishap between

```python
# Skip if GPU Direct RDMA is supported (we want to test the unsupported case)
if not device.properties.gpu_direct_rdma_supported:
    pytest.skip("This test requires a device that doesn't support GPU Direct RDMA")
```

and

```python
# Skip if GPU Direct RDMA is supported (we want to test the unsupported case)
if device.properties.gpu_direct_rdma_supported:
    pytest.skip("This test requires a device that doesn't support GPU Direct RDMA")
```

introduced with PR #1179.

It's inconsequential, except that the skip message is very confusing.
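For illustration only (not the PR diff itself): judging from the skip message in the test output later in this thread ("This test requires a device that supports GPU Direct RDMA"), the fix makes the comment and message agree with the `if not ... supported` condition. A minimal self-contained sketch of that corrected guard, using a hypothetical stand-in for the `cuda.core` Device object:

```python
# Hypothetical stand-ins for device.properties.gpu_direct_rdma_supported;
# the real object comes from cuda.core, which is not imported here.
class _Props:
    gpu_direct_rdma_supported = False  # pretend the device lacks GDR support

class _Device:
    properties = _Props()

device = _Device()

def skip_message(device):
    # The guard's message matches its condition: this test needs a device
    # that *does* support GPU Direct RDMA, so skip when it doesn't.
    if not device.properties.gpu_direct_rdma_supported:
        return "This test requires a device that supports GPU Direct RDMA"
    return None  # supported: run the test

print(skip_message(device))
```

The point of the fix is exactly this alignment: a reader of the skip message can tell which kind of device the test actually needs.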

@copy-pr-bot (bot) commented Nov 19, 2025

Auto-sync is disabled for ready for review pull requests in this repository. Workflows must be run manually.


@rwgk rwgk requested a review from rparolin November 19, 2025 04:14
@rwgk (Collaborator, Author) commented Nov 19, 2025

I'm intentionally not running the CI for this change.

@rwgk rwgk changed the title Fix comment and skip message in test_memory.py::test_vmm_allocator_policy_configuration [no-ci] Fix comment and skip message in test_memory.py::test_vmm_allocator_policy_configuration Nov 19, 2025
@rwgk (Collaborator, Author) commented Nov 19, 2025

Trivial interactive testing should be fully sufficient to validate this change:

```text
(TestVenv) PS C:\Users\rgrossekunst\forked\cuda-python\cuda_core> pytest -ra -s -v .\tests\test_memory.py -k test_vmm_allocator_policy_configuration
========================= test session starts =========================
platform win32 -- Python 3.13.9, pytest-9.0.1, pluggy-1.6.0 -- C:\Users\rgrossekunst\forked\cuda-python\TestVenv\Scripts\python.exe
cachedir: .pytest_cache
benchmark: 5.2.3 (defaults: timer=time.perf_counter disable_gc=False min_rounds=5 min_time=0.000005 max_time=1.0 calibration_precision=10 warmup=False warmup_iterations=100000)
rootdir: C:\Users\rgrossekunst\forked\cuda-python\cuda_core
configfile: pytest.ini
plugins: benchmark-5.2.3
collected 41 items / 40 deselected / 1 selected

tests/test_memory.py::test_vmm_allocator_policy_configuration SKIPPED (This test requires a device that supports GPU Direct RDMA)

======================= short test summary info =======================
SKIPPED [1] tests\test_memory.py:401: This test requires a device that supports GPU Direct RDMA
================== 1 skipped, 40 deselected in 0.10s ==================
```

@leofang (Member) commented Nov 19, 2025

/ok to test

@leofang leofang enabled auto-merge (squash) November 19, 2025 17:17
@leofang leofang assigned leofang and rwgk and unassigned leofang Nov 19, 2025
@leofang leofang added enhancement Any code-related improvements P1 Medium priority - Should do cuda.core Everything related to the cuda.core module labels Nov 19, 2025
@leofang leofang added this to the cuda.core beta 10 milestone Nov 19, 2025
@leofang leofang merged commit 1e23bf8 into NVIDIA:main Nov 19, 2025
15 of 16 checks passed
@github-actions (bot): Doc Preview CI: preview removed because the pull request was closed or merged.
