DISABLED test_transformer_training_is_seq_parallel_True (__main__.DistTensorParallelExampleTest) #125991
Labels
- module: flaky-tests — Problem is a flaky test in CI
- module: rocm — AMD GPU support for PyTorch
- oncall: distributed — Add this issue/PR to distributed oncall triage queue
- skipped — Denotes a (flaky) test currently skipped in CI
Platforms: rocm
This test was disabled because it is failing on the main branch (recent examples).
Same as #125918
cc @mrshenli @pritamdamania87 @zhaojuanmao @satgera @gqchen @aazzolini @osalpekar @jiayisuse @H-Huang @kwen2501 @awgu @penguinwu @fegin @XilunWu @wanchaol @fduwjj @wz337 @tianyu-l @wconstab @yf225 @chauhang @d4l3k @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @clee2000