Check for boolean values as argument on pow function. #114133
Conversation
…the same as 1 and False should be the same as 0 as exponent argument
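The premise here can be checked directly in plain Python, without torch: `bool` is a subclass of `int`, so `True` and `False` already behave as `1` and `0` in arithmetic, including as an exponent.

```python
# bool subclasses int, so booleans participate in arithmetic as 1 and 0,
# including when used as the exponent argument of the built-in pow().
assert isinstance(True, int)     # True is an int at the type level
assert pow(2, True) == 2         # exponent True behaves like 1
assert pow(2, False) == 1        # exponent False behaves like 0
assert 3 ** True == 3            # same holds for the ** operator
print("bool-as-exponent matches int semantics")
```

The PR's argument is that `torch.pow` should follow this same convention on every backend.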
🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/114133

Note: links to docs will display an error until the docs builds have been completed.

✅ No failures as of commit 0d4a89c with merge base 7d5e8c1.

This comment was automatically generated by Dr. CI and updates every 15 minutes.
This PR needs tests. The tests you want to run are in

Also note that this PR is doing the opposite of what we proposed to do in the issue, which is to always fail on
Yeah, you're absolutely correct. I did try to do what you asked, but it was way more complicated, so I wanted to bring this to the table because it was an easy change that makes the behaviour consistent everywhere, following standard Python behaviour. But, of course, if you don't think this is a good idea, I won't mind closing this and moving forward with your suggestion. You just have to say the word :) [I know you might have said that already, but I'm not sure.]

If I understood the code correctly, to disable bool I would have to change

EDIT: Or I could test for types inside the CPP function, and for

I just found this:
Perhaps I could adapt this into
Noted! Regardless of the strategy, I think that's a good test, thank you! Thanks for such a fast response, I really appreciate your input! 😄
You just need to TORCH_CHECK that the input is not boolean, no? At any rate, I'm fine with making the behaviour equal to the current behaviour of CPU or CUDA, whatever works as long as it's tested.
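For illustration, the rejection being proposed here can be sketched in pure Python. The real change would be a `TORCH_CHECK` inside the C++ `pow` kernel; the wrapper name and error message below are hypothetical stand-ins, not the actual PyTorch code.

```python
def checked_pow(base, exponent):
    """Hypothetical sketch of the proposed check: reject a bool exponent
    outright instead of silently treating it as 0 or 1 (the rough Python
    analogue of a TORCH_CHECK in the C++ kernel)."""
    # bool is a subclass of int, so it must be tested explicitly and first;
    # isinstance(exponent, int) alone would let True/False slip through.
    if isinstance(exponent, bool):
        raise TypeError("pow: boolean exponent is not supported")
    return base ** exponent

print(checked_pow(2, 3))  # 8
try:
    checked_pow(2, True)
except TypeError as e:
    print("rejected:", e)
```

The explicit `isinstance(exponent, bool)` test mirrors why the kernel needs a dedicated check: at the integer level, a boolean is indistinguishable from 0 or 1 once it has been promoted.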
I tested both my branch and

Three questions:
Thanks a lot, see you around! (It took me a while because I didn't have scipy and more tests were failing because of that [and also because my CPU is slow 😅].)
You need to add a sample to the OpInfo that tests this case: one that fails on main and passes with this PR.
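As a rough illustration of the kind of sample being asked for, here is a plain-Python sketch with no torch dependency: `pow_under_test` is a hypothetical stand-in for `torch.pow` on scalars, and in the real OpInfo sample the inputs would be tensors exercising the actual operator.

```python
def pow_under_test(base, exponent):
    # Hypothetical stand-in for torch.pow; the real OpInfo sample would
    # pass tensors and a bool exponent through torch.pow itself.
    return base ** exponent

def test_bool_exponent_matches_int():
    for base in (1, 2, 5):
        # A bool exponent should behave exactly like the equivalent int:
        # True like 1, False like 0. On main, the CUDA path diverged here.
        assert pow_under_test(base, True) == pow_under_test(base, 1)
        assert pow_under_test(base, False) == pow_under_test(base, 0)

test_bool_exponent_matches_int()
print("bool-exponent sample passes")
```

The point of such a sample is precisely the reviewer's criterion: it encodes behaviour that fails on main (where the CUDA kernel mishandled bool exponents) and passes with the fix.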
Co-authored-by: Gary Yao <garyyaoresearch@gmail.com>
Duh! That was obvious, sorry about that 😅 Yes, about OpInfo: I went ahead and read the comments/instructions about it to understand how they work. But I knew there should be something written already, and then I found it 😄 It was good advice to look into OpInfo; only after that could I find out about

I just changed

Then, when I run the main branch I get:
But for this branch it runs fine:
I added @garyyaoresearch as co-author, as he was courteous enough to mention that he was working on this too. We've been exchanging emails since then :)
fair enough
@pytorchbot merge
Merge started. Your change will be merged once all checks pass (ETA 0-4 hours). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
Hello everyone! 😄 Also @lezcano, nice to meet you! :) Sorry if I miss anything, this is my first time around here. 🙃

This PR makes the behaviour the same for CUDA when using `torch.pow`. Python considers True as 1 and False as 0, and I just added this check into the `pow` function. From what I understood, when I do `.equal` on a `Scalar` that is boolean, I'm sure the types match, so that won't cause more trouble.

I know that the issue suggests disabling this case, but that could be a little more complicated, in my humble opinion. And it could create some compatibility problems too, I guess. My argument is that the code below is correct in the native language, so I guess it does make sense to send booleans as `Scalar`:

```
>>> x = True
>>> x + x
2
```

This was my first test:

```
Python 3.12.0 | packaged by Anaconda, Inc. | (main, Oct 2 2023, 17:29:18) [GCC 11.2.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
>>> torch.pow(torch.tensor([1, 2], device='cuda'), True)
tensor([1, 2], device='cuda:0')
>>> torch.pow(torch.tensor([1, 2]), True)
tensor([1, 2])
>>> torch.pow(torch.tensor([1, 2]), False)
tensor([1, 1])
>>> torch.pow(torch.tensor([1, 2], device='cuda'), False)
tensor([1, 1], device='cuda:0')
```

I've run `test_torch.py` and got the following results, so my guess is that I didn't break anything. I was just looking for a test that uses linear regression, as suggested.

```
Ran 1619 tests in 52.363s

OK (skipped=111)
[TORCH_VITAL] Dataloader.enabled True
[TORCH_VITAL] Dataloader.basic_unit_test TEST_VALUE_STRING
[TORCH_VITAL] CUDA.used true
```

(I can paste the whole log, if necessary.)

If this is a bad idea overall, don't worry about it. It's not a big deal, it's actually a two-line change 😅 so we can talk about how to do things with a different strategy.

For the record, I've signed the agreement already. And I didn't run the linter because it's not working 😞. Looks like PyYAML 6.0 is broken and there's a 6.0.1 fix already, but I have no idea how to update that 😅

Fixes pytorch#113198

Pull Request resolved: pytorch#114133

Approved by: https://github.com/lezcano