86990 range mps support #91075
Conversation
🔗 Helpful Links: 🧪 See artifacts and rendered test results at hud.pytorch.org/pr/91075.
Note: links to docs will display an error until the docs builds have been completed. ✅ No failures as of commit 044efc5. This comment was automatically generated by Dr. CI and updates every 15 minutes.
Hi @kulinseth, first-time contributor here. I have signed the CLA and added my email for future commits; do I need to reopen this PR to pass that check, or is this one good as it is?
/easycla
@OwenPendrighElliott can you please sign the CLA?
Hi @kulinseth, I have signed the CLA.
/easycla
Force-pushed from 73d9dcc to addb239.
Looks good
@OwenPendrighElliott the changes look good, but it seems the CLA is still not signed. Can you please make sure you signed it?
@DenisVieriu97 I think my email and username are missing on the commit, which is causing the check to fail. Would you like me to close this PR, commit again, and open a new PR? Or is there a better fix?
/easycla
Force-pushed from addb239 to 1122588.
Force-pushed from 1122588 to c209664.
Force-pushed from c209664 to 3f8f727.
Thanks @OwenPendrighElliott
@pytorchbot merge -g
Merge started. Your change will be merged once all checks pass (ETA 0-4 hours). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
Merge failed. Reason: 1 mandatory check(s) failed. Dig deeper by viewing the failures on hud. Details for Dev Infra team: raised by workflow job.
@pytorchbot merge -g
Merge started. Your change will be merged once all checks pass (ETA 0-4 hours). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
Fixes #86990
I did observe that, despite the documentation for torch.range, the existing implementations do not adjust their return type based on the arguments passed to them. The MPS implementation provided here behaves the same way as the existing CPU and CUDA implementations in this regard, hence the conversion to float32 in the test cases.
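To illustrate the dtype behavior described above, here is a small sketch (run on CPU; the point of this PR is that the MPS path now matches it). It assumes the global default dtype has not been changed from float32: `torch.range` (which is deprecated in favor of `torch.arange`) returns the default floating dtype even when given integer arguments, whereas `torch.arange` infers an integer dtype from them.

```python
import torch

# torch.range uses the global default dtype (float32 by default),
# ignoring the fact that its arguments are integers.
# It also emits a deprecation warning in recent PyTorch versions.
r = torch.range(1, 5)

# torch.arange, by contrast, infers int64 from integer arguments.
# Note the exclusive upper bound, unlike torch.range's inclusive one.
a = torch.arange(1, 6)

print(r.dtype)  # torch.float32
print(a.dtype)  # torch.int64
```

This is why the test cases in the PR compare against float32 tensors: the MPS kernel mirrors the CPU and CUDA behavior rather than the dtype inference implied by the documentation.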