Closed
Labels
module: compiled autograd
module: flex attention
module: higher order operators (torch.cond and similar)
module: pt2-dispatcher (PT2 dispatcher-related issues, e.g. aotdispatch, functionalization, faketensor, custom-op)
module: rocm (AMD GPU support for PyTorch)
oncall: pt2
skipped (denotes a flaky test currently skipped in CI)
triaged (this issue has been looked at by a team member and triaged into an appropriate module)
Description
Platforms: rocm
This test was disabled because it is failing on the main branch (recent examples).
Caused by this PR: #144533
The MI200 CI runners were passing all inductor UTs prior to the merge; post-merge we see this failure on MI300. Hopefully it is just the one test.
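For reference, while the issue-based disable mechanism keeps the test out of CI, a ROCm-only skip can also be expressed directly in the test source. A minimal sketch, assuming a hypothetical test class and test name (the `skipIfRocm` decorator itself is PyTorch's standard helper in `torch.testing._internal.common_utils`):

```python
# Minimal sketch of a ROCm-specific skip in a PyTorch unit test.
# The class and test names here are hypothetical illustrations;
# skipIfRocm is the decorator PyTorch uses for ROCm-only skips.
import torch
from torch.testing._internal.common_utils import TestCase, run_tests, skipIfRocm

class TestFlexAttentionExample(TestCase):
    @skipIfRocm  # skipped only when running on a ROCm (AMD GPU) build
    def test_example(self):
        x = torch.ones(2, 2)
        self.assertEqual(x.sum().item(), 4.0)

if __name__ == "__main__":
    run_tests()
```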
cc @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @chauhang @penguinwu @zou3519 @ydwu4 @xmfan @yf225 @bdhirsh @Chillee @drisspg @yanboliang @BoyuanFeng