Fix mvlgamma_ FPE crash on x86 with integer input #164230
Conversation
🔗 Helpful Links
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/164230
Note: Links to docs will display an error until the docs builds have been completed.
✅ You can merge normally! (1 Unrelated Failure)
As of commit 7aec5bc with merge base 5623628.
UNSTABLE - The following job is marked as unstable, possibly due to flakiness on trunk:
This comment was automatically generated by Dr. CI and updates every 15 minutes.
@pytorchbot label "topic: not user facing"
Hi @malfet, please review this PR once you are available. Thanks!
Hi @malfet, please review this PR, thanks!
args = args.add(self.unsqueeze(-1));
const auto p2_sub_p = static_cast<double>(p * (p - 1));
return self.copy_(args.lgamma_().sum(-1).add_(p2_sub_p * std::log(c10::pi<double>) * QUARTER));
return at::mvlgamma_out(self, self, p);
Do you know why the argument order is flipped compared to the function just below?
The argument order is flipped because line 911 calls the public API at::mvlgamma_out(out, self, p), while line 914 defines the native implementation, whose signature is (self, p, result). These are two different functions: the public API wrapper vs. the native implementation. According to the generated header, the public API is at::Tensor & mvlgamma_out(at::Tensor & out, const at::Tensor & self, int64_t p), so the call at::mvlgamma_out(self, self, p) correctly maps to (out=self, self=self, p=p).
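To make that relationship concrete, here is a minimal standalone sketch of the calling pattern. It uses hypothetical toy types rather than the real ATen Tensor and dispatcher machinery; only the argument ordering mirrors the signatures quoted above.

```cpp
#include <cstdint>
#include <iostream>

// Toy stand-in for at::Tensor; real ATen types and dispatch are omitted.
struct Tensor { double value = 0.0; };

// Native-implementation style: the output parameter comes last (self, p, result).
Tensor& toy_mvlgamma_out_native(const Tensor& self, int64_t p, Tensor& result) {
    result.value = self.value + static_cast<double>(p);  // placeholder arithmetic, not the real formula
    return result;
}

// Public-API-wrapper style: the output parameter comes first (out, self, p),
// and the wrapper forwards to the native implementation with the arguments reordered.
Tensor& toy_mvlgamma_out(Tensor& out, const Tensor& self, int64_t p) {
    return toy_mvlgamma_out_native(self, p, out);
}

// In-place variant: pass the same tensor as both output and input,
// mirroring `return at::mvlgamma_out(self, self, p);` from the diff above.
Tensor& toy_mvlgamma_(Tensor& self, int64_t p) {
    return toy_mvlgamma_out(self, self, p);
}

int main() {
    Tensor t{1.5};
    toy_mvlgamma_(t, 3);
    std::cout << t.value << "\n";  // prints 4.5
    return 0;
}
```

The toy wrapper reorders its arguments the same way the generated at::mvlgamma_out wrapper does, which is why the call site and the definition below it look flipped.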
116536d to def3b05 (compare)
@pytorchbot rebase
@pytorchbot started a rebase job onto refs/remotes/origin/viable/strict. Check the current status here
Successfully rebased
def3b05 to 2a562e7 (compare)
albanD left a comment:
Thanks for the details. That sounds good!
Lint needs fixing but then you can merge!
Hi @albanD, can you please help me understand how to resolve this type of linting failure? https://github.com/pytorch/pytorch/actions/runs/19327144562/job/55280965920?pr=164230
@pytorchbot rebase
@pytorchbot started a rebase job onto refs/remotes/origin/viable/strict. Check the current status here
Successfully rebased
bbc4e69 to 7aec5bc (compare)
@pytorchbot merge
Merge started. Your change will be merged once all checks pass (ETA 0-4 Hours). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
Fixes #161871.
Behaviour on arm:
and on x86:
cc: @malfet
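For context, here is a hypothetical reproducer sketch for the crash this PR addresses, written against the libtorch C++ frontend. It is an assumption based on the PR title (an FPE when mvlgamma_ is called on an integer tensor on x86), not a snippet taken from issue #161871; the dtype, values, and exact error text may differ.

```cpp
#include <torch/torch.h>
#include <iostream>

int main() {
    // mvlgamma_ is an in-place floating-point op, so an integer tensor
    // cannot hold its result.
    auto t = torch::tensor({2, 3, 4}, torch::kInt64);
    try {
        // Before this fix, this call could terminate the process with a
        // floating point exception (SIGFPE) on x86 instead of reporting
        // a usable error.
        t.mvlgamma_(2);
    } catch (const c10::Error& e) {
        // After the fix, invalid input is expected to surface as a
        // catchable error rather than a fatal signal.
        std::cout << "Caught: " << e.what() << "\n";
    }
    return 0;
}
```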