Add support for aten::remainder.Tensor_out for MPS backend #87582
Conversation
🔗 Helpful Links: 🧪 See artifacts and rendered test results at hud.pytorch.org/pr/87582.
Note: Links to docs will display an error until the docs builds have been completed. ❌ 2 Failures as of commit 5219485. This comment was automatically generated by Dr. CI and updates every 15 minutes.
Hi @yash-dani, thanks so much for working on this! I seem to be having this exact issue when running PyTorch on my M1, but the build you suggested is failing, same as above. Any chance you could take a second to look at it? Thank you so much for doing this :) Kind regards,
@@ -231,5 +231,24 @@ void unary_op(const Tensor& self, const Tensor& output, std::string op_name, Una
  });
}

TORCH_IMPL_FUNC(remainder_out_mps) (const Tensor& self, const Tensor& output) {
This function definition is not correct. In fact, it should be in BinaryOps.mm.
Please take a look at the formula to implement:
https://pytorch.org/docs/stable/generated/torch.remainder.html
I have attached the patch, which should work.
@yash-dani, can you rebase and apply the patch?
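For reference, the formula on that docs page defines the result so that it takes the sign of the divisor. A minimal pure-Python sketch of that semantics (not the MPS kernel itself), contrasting it with C-style `fmod`, which follows the dividend:

```python
import math

def remainder(a, b):
    # torch.remainder semantics per the docs: a - b * floor(a / b);
    # the result has the same sign as the divisor b.
    return a - b * math.floor(a / b)

print(remainder(-3.0, 2.0))   # 1.0  (sign of the divisor)
print(math.fmod(-3.0, 2.0))   # -1.0 (sign of the dividend)
```

The same decomposition is what a backend kernel ultimately has to compute elementwise.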
Hi @kulinseth, not sure if it would help, but maybe I can take this over? I tried to apply the patch to a git copy of the current PyTorch repository but was not able to compile on my M1. Happy to fork my own PyTorch and apply this patch? Kind regards,
I've also sent a pull request to @yash-dani where I rebased his repo and added the patch.
I can rebase and merge! |
Hey @kulinseth @gpomeranz, your patch gives me the following compile error:
What would be the best way to resolve it? Thanks!
@yash-dani, it seems like the patch and your branch have diverged since I created that patch.
Basically, it needs to be added to BinaryOps.mm.
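As a rough illustration of why this op belongs with the other binary ops: a graph-based backend typically composes remainder from primitive elementwise nodes (divide, floor, multiply, subtract) rather than a single kernel. The sketch below mimics that chain in plain Python over lists; the step names are illustrative, not the actual MPSGraph API.

```python
import math

def graph_remainder(self_vals, other_vals):
    # Hypothetical decomposition into the elementwise steps a graph
    # backend would chain: divide -> floor -> multiply -> subtract.
    quot  = [a / b for a, b in zip(self_vals, other_vals)]   # divide node
    fquot = [math.floor(q) for q in quot]                    # floor node
    prod  = [f * b for f, b in zip(fquot, other_vals)]       # multiply node
    return [a - p for a, p in zip(self_vals, prod)]          # subtract node

print(graph_remainder([-3.0, 7.0], [2.0, 3.0]))  # [1.0, 1.0]
```

Placing it in BinaryOps.mm lets it share the boilerplate the other two-input ops already use.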
Thanks @gpomeranz for giving it a try! What issue did you run into?
@kulinseth I tried to manually apply the patches to the latest branch of PyTorch, as I've never applied a patch to a GitHub repo before. Unfortunately I was not able to compile PyTorch, so it stopped for me there. I don't remember what the error was, but I believe it had something to do with my clang compiler (even though I use the latest brew one). I will try to redownload PyTorch, apply the patch again, and post my error here. I think if @yash-dani, who has more experience, manually adds the patch and tries to compile, it will succeed.
This is the error that I get when manually adding the patch and compiling PyTorch:
/opt/homebrew/opt/llvm/bin/../include/c++/v1/__algorithm/iterator_operations.h:131:12: error: calling a private constructor of class 'c10::impl::ListElementReference<at::Tensor, std::__wrap_iter<c10::IValue *>>'
You do need this conditional block to handle an MPS bug.
I applied the patch to the latest of master in PR #92139 |
I can confirm that the above change indeed fixes the build problems, and I was able to build PyTorch from source using this code addition to the latest release. Looking forward to having this incorporated so that I can run my GPU-accelerated code in R!
Hi, we have pulled in the remainder PR.
@yash-dani, please re-open if you think we missed something.
Fixes #86806