bring cuda export ci back by using A100 as target GPU (#19440)
Conversation
🔗 Helpful Links
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/19440
Note: Links to docs will display an error until the docs builds have been completed.
As of commit 0be438e with merge base 98da9d5: ❌ 1 New Failure, 121 Pending. The following job has failed:
This comment was automatically generated by Dr. CI and updates every 15 minutes.
@Gasoonjia has exported this pull request. If you are a Meta employee, you can view the originating Diff in D104504782.
Force-pushed: 93231cc to 62855e2
Summary: Currently CI keeps crashing with the error message `Not enough SMs to use max_autotune_gemm mode` due to GPU resource limitations. Make CI always run on an A100 to bring CI back.

Differential Revision: D104504782
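For context on the error: TorchInductor's max-autotune GEMM path refuses to run on GPUs with too few streaming multiprocessors, which is why small CI GPUs crash while an A100 (108 SMs) passes. The sketch below illustrates that guard in isolation; the threshold value of 68 SMs and the small-GPU SM count are assumptions for illustration, not values taken from this PR.

```python
# Illustrative sketch of inductor's "enough SMs" guard, not the actual
# torch._inductor implementation.
A100_SM_COUNT = 108            # NVIDIA A100 has 108 streaming multiprocessors
SMALL_GPU_SM_COUNT = 40        # e.g. a smaller CI GPU such as a T4
MIN_SMS_FOR_MAX_AUTOTUNE = 68  # assumed threshold for max_autotune_gemm

def can_use_max_autotune_gemm(sm_count: int) -> bool:
    """Return True if the GPU has enough SMs for max_autotune_gemm mode."""
    return sm_count >= MIN_SMS_FOR_MAX_AUTOTUNE

print(can_use_max_autotune_gemm(A100_SM_COUNT))      # True: A100 has enough SMs
print(can_use_max_autotune_gemm(SMALL_GPU_SM_COUNT)) # False: trips the CI error
```

Under this assumption, pinning the CI runner to an A100 sidesteps the failure without changing the compile mode.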
Force-pushed: 62855e2 to 0be438e