
Conversation

@naromero77amd (Collaborator) commented Jul 22, 2025

Unit test for this PR: #158165

This unit test verifies that a RuntimeError is raised when a tensor.item() operation is captured in a CUDA graph. It is equally valid for ROCm and CUDA.
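For illustration, a minimal sketch of the behavior under test, assuming the public torch.cuda.graph capture API (the actual test body, name, and error message in test/test_cuda.py may differ):

```python
# Sketch only, not the actual unit test: capturing tensor.item() inside a
# CUDA graph is expected to raise a RuntimeError, because .item() forces a
# device-to-host synchronization, which is illegal during stream capture.
import torch

def check_item_not_capturable() -> None:
    x = torch.ones(1, device="cuda")
    g = torch.cuda.CUDAGraph()
    try:
        with torch.cuda.graph(g):
            _ = x.item()  # should fail: synchronizes while capturing
    except RuntimeError as err:
        print(f"Got expected RuntimeError: {err}")
    else:
        raise AssertionError("tensor.item() was unexpectedly captured")

if __name__ == "__main__":
    if torch.cuda.is_available():  # also True on ROCm builds of PyTorch
        check_item_not_capturable()
```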

cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang


pytorch-bot bot commented Jul 22, 2025

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/158878

Note: Links to docs will display an error until the docs builds have been completed.

❌ 1 New Failure, 2 Unrelated Failures

As of commit 944bc57 with merge base fd47401:

NEW FAILURE - The following job has failed:

FLAKY - The following job failed but was likely due to flakiness present on trunk:

UNSTABLE - The following job is marked as unstable, possibly due to flakiness on trunk:

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@pytorch-bot pytorch-bot bot added module: rocm AMD GPU support for Pytorch topic: not user facing topic category labels Jul 22, 2025
@naromero77amd naromero77amd added the ciflow/rocm Trigger "default" config CI on ROCm label Jul 23, 2025
@naromero77amd (Collaborator, Author) commented:

Confirming that the new UT passed for CUDA in linux-jammy-cuda12.8-py3.10-gcc11 / test (default, 2, 5, lf.linux.4xlarge.nvidia.gpu):

2025-07-22T23:05:26.8088709Z test_cuda.py::TestCuda::test_cuda_graph_tensor_item_not_allowed PASSED [0.1947s] [ 10%]

@naromero77amd (Collaborator, Author) commented:

Confirming that the new UT passed for ROCm in linux-jammy-rocm-py3.10 / test (default, 2, 6, linux.rocm.gpu.2):

test/test_cuda.py::TestCuda::test_cuda_graph_tensor_item_not_allowed, test/test_cuda.py::TestCuda::test_cuda_memory_leak_detection_propagates_errors, test/test_cuda.py::TestCuda::test_cudart_register, 

@jeffdaily (Collaborator) commented:

@pytorchbot merge -f "unrelated CI failure, new UT is passing for relevant backends"

@pytorchmergebot (Collaborator) commented:

Merge started

Your change will be merged immediately since you used the force (-f) flag, bypassing any CI checks (ETA: 1-5 minutes). Please use -f as a last resort and instead consider -i/--ignore-current to continue the merge while ignoring current failures. This will allow currently pending tests to finish and report signal before the merge.

Learn more about merging in the wiki.

Questions? Feedback? Please reach out to the PyTorch DevX Team

Advanced Debugging: check the merge workflow status here.

@naromero77amd naromero77amd deleted the rocm_ut_cudagraph_tensor_item_error branch July 23, 2025 20:29
