Allow cpu scalar to be moved to HPU in masked_fill_decomposition #127871
Conversation
🔗 Helpful Links
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/127871
Note: Links to docs will display an error until the docs builds have been completed.
✅ No Failures
As of commit 204568c with merge base 8f70bf7.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
@pytorchbot label "topic: not user facing"
@cpuhrsch could you please review the change?
Hi @cpuhrsch: any comments on this PR? Is it good for landing?
62a4c37 to 5b1a2d6 (Compare)
Rebased to the latest main branch.
@cpuhrsch any comments? Is it possible to merge this patch?
@pytorchbot rebase
You don't have permissions to rebase this PR since you are a first-time contributor. If you think this is a mistake, please contact PyTorch Dev Infra.
@pytorchbot rebase
You don't have permissions to rebase this PR since you are a first-time contributor. If you think this is a mistake, please contact PyTorch Dev Infra.
Extension of the condition allowing the cpu scalar to be moved to specific devices.
5b1a2d6 to 204568c (Compare)
Hi @cpuhrsch
@pytorchbot merge
Merge started. Your change will be merged once all checks pass (ETA 0-4 hours). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
The merge job was canceled or timed out. This most often happens if two merge requests were issued for the same PR, or if the merge job was waiting for more than 6 hours for tests to finish. In the latter case, please do not hesitate to reissue the merge command.
@pytorchbot merge -f "bypass rocm queue"
Merge started. Your change will be merged immediately since you used the force (-f) flag, bypassing any CI checks (ETA: 1-5 minutes). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
Extension of the condition allowing the cpu scalar to be moved to specific devices.
This fixes an HPU-specific error:

torch._dynamo.exc.BackendCompilerFailed: backend='aot_hpu_training_backend' raised:
RuntimeError: Expected `value` to be on same device as `a`

While executing %masked_fill : [num_users=1] = call_method[target=masked_fill](args = (%matmul, %expand_as, %tensor), kwargs = {})

On HPU in eager mode the problem does not occur, because PyTorch's implementation is not used there.
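For illustration, a minimal sketch of the kind of condition extension the description refers to. The function name, the allow-list constant, and the exact checks below are assumptions made for this sketch, not the actual PyTorch diff; the idea is that a zero-dimensional CPU `value` tensor is moved to the device of the input instead of tripping the device-mismatch check, and HPU is added to the set of devices for which that move is permitted.

```python
import torch

# Illustrative sketch only -- not the exact PyTorch change.
# Device types for which a CPU scalar `value` may be moved onto the input's
# device inside a masked_fill-style decomposition; "hpu" is the kind of
# extension this PR describes.
CPU_SCALAR_MOVE_TARGETS = ("cuda", "xpu", "hpu")


def masked_fill_sketch(a: torch.Tensor, mask: torch.Tensor, value: torch.Tensor) -> torch.Tensor:
    if (
        value.device != a.device
        and value.device.type == "cpu"
        and value.ndim == 0
        and a.device.type in CPU_SCALAR_MOVE_TARGETS
    ):
        # Move the CPU scalar onto `a`'s device instead of raising
        # "Expected `value` to be on same device as `a`".
        value = value.to(a.device)
    # masked_fill(a, mask, value) selects `value` where `mask` is True
    # and keeps `a` elsewhere.
    return torch.where(mask, value, a)
```

With a condition along these lines, the compiled (decomposed) path tolerates a CPU scalar `value` on HPU the same way the eager path does, which is what removes the `BackendCompilerFailed` error quoted above.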