
Conversation

RohitRathore1 (Collaborator) commented Sep 30, 2025

Fixes #161871.

Behaviour on arm:

PyTorch version: 2.10.0a0+gitdef3b05
Architecture: arm64
Platform: Darwin
Processor: arm

Testing mvlgamma_ with integer tensor on arm64...
 Got expected error: mvlgamma: result type Long can't be cast to the desired output type Float

and on x86:

PyTorch version: 2.10.0a0+git1310d6a
Architecture: x86_64
Platform: Linux
Processor: x86_64

Testing mvlgamma_ with integer tensor on x86_64...
 Got expected error: mvlgamma: result type Long can't be cast to the desired output type Float
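For reference, a minimal sketch of the intended behaviour: the in-place `mvlgamma_` writes a floating-point result back into `self`, so an integer (Long) tensor cannot hold it and the op raises the error shown above instead of crashing. To compute mvlgamma for integer data, convert to a floating dtype first (the identity `mvlgamma(x, 1) == lgamma(x)` makes the result easy to check):

```python
import torch

# In-place mvlgamma_ produces a floating-point result, so it cannot be
# written back into a Long tensor; with this fix the op raises the
# "result type Long can't be cast" error rather than hitting a floating
# point exception. Convert integer data before the in-place op:
t = torch.arange(2, 6)             # Long tensor
x = t.to(torch.float64)            # convert first
x.mvlgamma_(1)                     # p=1: mvlgamma reduces to lgamma
assert torch.allclose(x, torch.lgamma(torch.arange(2, 6, dtype=torch.float64)))
```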

cc: @malfet

pytorch-bot (bot) commented Sep 30, 2025

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/164230

Note: Links to docs will display an error until the docs builds have been completed.

✅ You can merge normally! (1 Unrelated Failure)

As of commit 7aec5bc with merge base 5623628:

UNSTABLE - The following job is marked as unstable, possibly due to flakiness on trunk:

This comment was automatically generated by Dr. CI and updates every 15 minutes.

RohitRathore1 (Collaborator, Author):

@pytorchbot label "topic: not user facing"

pytorch-bot added the "topic: not user facing" (topic category) label on Sep 30, 2025.
mikaylagawarecki added the "triaged" (this issue has been looked at by a team member, and triaged and prioritized into an appropriate module) label on Oct 2, 2025.
RohitRathore1 (Collaborator, Author):

Hi @malfet, please review this PR once you are available. Thanks!

RohitRathore1 (Collaborator, Author):

Hi @malfet, please review this PR. Thanks!

args = args.add(self.unsqueeze(-1));
const auto p2_sub_p = static_cast<double>(p * (p - 1));
return self.copy_(args.lgamma_().sum(-1).add_(p2_sub_p * std::log(c10::pi<double>) * QUARTER));
return at::mvlgamma_out(self, self, p);
A collaborator commented:

Do you know why the argument order is flipped compared to the function just below?

RohitRathore1 (Collaborator, Author) commented Oct 31, 2025:

The argument order is flipped because line 911 calls the public API at::mvlgamma_out(out, self, p), while line 914 defines the native implementation, whose signature is (self, p, result). These are two different functions: the public API wrapper vs. the native implementation. According to the generated header, the public API is at::Tensor & mvlgamma_out(at::Tensor & out, const at::Tensor & self, int64_t p), so the call at::mvlgamma_out(self, self, p) correctly maps to (out=self, self=self, p=p).
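The two conventions can be sketched with a toy example (hypothetical names, not actual ATen code): the public out-variant takes the output buffer first, while the native implementation takes it last, and the in-place call simply passes the same object as both output and input.

```python
import math

def mvlgamma_out(out, self_, p):
    # Public-API style: output parameter comes FIRST,
    # mirroring at::mvlgamma_out(out, self, p).
    out[:] = [math.lgamma(v) * p for v in self_]  # placeholder math
    return out

def mvlgamma_impl(self_, p, result):
    # Native-implementation style: result parameter comes LAST,
    # mirroring the (self, p, result) signature mentioned above.
    return mvlgamma_out(result, self_, p)

vals = [2.0, 3.0]
a, b = [0.0, 0.0], [0.0, 0.0]
mvlgamma_out(a, vals, 2)      # out first
mvlgamma_impl(vals, 2, b)     # result last
assert a == b

# In-place use passes the same buffer as both `out` and `self`,
# mirroring at::mvlgamma_out(self, self, p):
c = [2.0, 3.0]
mvlgamma_out(c, c, 2)
assert c == a
```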

RohitRathore1 (Collaborator, Author):

@pytorchbot rebase

pytorchmergebot (Collaborator):

@pytorchbot started a rebase job onto refs/remotes/origin/viable/strict. Check the current status here

pytorchmergebot (Collaborator):

Successfully rebased fix-mvlgamma-int-fpe onto refs/remotes/origin/viable/strict, please pull locally before adding more changes (for example, via git checkout fix-mvlgamma-int-fpe && git pull --rebase)

albanD (Collaborator) left a comment:


Thanks for the details. That sounds good!
Lint needs fixing but then you can merge!

RohitRathore1 (Collaborator, Author):

Hi @albanD, could you help me understand how to resolve this type of linting failure? https://github.com/pytorch/pytorch/actions/runs/19327144562/job/55280965920?pr=164230

RohitRathore1 (Collaborator, Author):

@pytorchbot rebase

pytorchmergebot (Collaborator):

@pytorchbot started a rebase job onto refs/remotes/origin/viable/strict. Check the current status here

pytorchmergebot (Collaborator):

Successfully rebased fix-mvlgamma-int-fpe onto refs/remotes/origin/viable/strict, please pull locally before adding more changes (for example, via git checkout fix-mvlgamma-int-fpe && git pull --rebase)

RohitRathore1 (Collaborator, Author):

@pytorchbot merge

@pytorch-bot pytorch-bot bot added the ciflow/trunk Trigger trunk jobs on your pull request label Nov 14, 2025
pytorchmergebot (Collaborator):

Merge started

Your change will be merged once all checks pass (ETA 0-4 Hours).

Learn more about merging in the wiki.

Questions? Feedback? Please reach out to the PyTorch DevX Team

Advanced Debugging: check the merge workflow status here.


Labels: ciflow/trunk (Trigger trunk jobs on your pull request), Merged, open source, topic: not user facing (topic category), triaged (this issue has been looked at by a team member, and triaged and prioritized into an appropriate module)

Projects: none yet

Development

Successfully merging this pull request may close these issues.

Floating point exception in torch.Tensor.mvlgamma_

5 participants