skip cuda operations when running qwen 3.5 moe on other backend #19095
Conversation
🔗 Helpful Links
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/19095
Note: Links to docs will display an error until the docs builds have been completed.
❗ 1 Active SEV: there is 1 currently active SEV. If your PR is affected, please view it below.
❌ 2 New Failures, 4 Cancelled Jobs, 1 Unrelated Failure as of commit d118b2b with merge base c48ea12.
NEW FAILURES: the following jobs have failed.
CANCELLED JOBS: the following jobs were cancelled. Please retry.
FLAKY: the following job failed but was likely due to flakiness present on trunk.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
JacobSzwejbka left a comment:
Ifdefs are OK for now, but can we find a better way to do this?
This PR makes the GPU-related operators CUDA-backend specific, to bring the Metal Qwen 3.5 MoE CI back.