
Conversation

@3l1 (Contributor) commented Oct 1, 2025

Summary:
Adjust op_bmm to allow int16 input types with an int48 output buffer.

Note: I rescale back to the original int16 output dtype. Doing this without a properly calibrated quantization parameter is obviously dangerous, but a calibrated parameter is our base assumption.
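
For context, here is a minimal NumPy sketch of the flow this enables: int16 operands, a wide (int48-range) accumulator, and a rescale back to int16. This is not the actual op_bmm code; it assumes symmetric quantization (zero points of 0) and made-up scale values.

```python
import numpy as np

def rescale_to_int16(acc, combined_scale):
    # combined_scale = a_scale * b_scale / out_scale. With an uncalibrated
    # scale the result simply saturates at the int16 bounds (the danger
    # noted above).
    out = np.round(acc.astype(np.float64) * combined_scale)
    return np.clip(out, -32768, 32767).astype(np.int16)

rng = np.random.default_rng(0)
batch, m, k, n = 2, 4, 8, 4
a = rng.integers(-(2**15), 2**15, size=(batch, m, k), dtype=np.int16)
b = rng.integers(-(2**15), 2**15, size=(batch, k, n), dtype=np.int16)

# Accumulate in int64: each int16*int16 product fits in 31 bits, and summing
# over k adds ~log2(k) bits, so the batched matmul stays within int48 range.
acc = np.matmul(a.astype(np.int64), b.astype(np.int64))

# Illustrative (hypothetical) quantization scales; real ones come from
# calibration.
a_scale, b_scale, out_scale = 2**-12, 2**-12, 2**-8
out = rescale_to_int16(acc, a_scale * b_scale / out_scale)
```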

Differential Revision: D83627934

3l1 requested a review from digantdesai as a code owner October 1, 2025 05:10

pytorch-bot bot commented Oct 1, 2025

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/14714

Note: Links to docs will display an error until the docs builds have been completed.

✅ No Failures

As of commit 6e5764c with merge base 53ccfd0:
💚 Looks good so far! There are no failures yet. 💚

This comment was automatically generated by Dr. CI and updates every 15 minutes.

meta-cla bot added the CLA Signed label Oct 1, 2025
facebook-github-bot (Contributor) commented:

@3l1 has exported this pull request. If you are a Meta employee, you can view the originating Diff in D83627934.

@3l1 (Contributor, Author) commented Oct 1, 2025

@pytorchbot label "release notes: none"

pytorch-bot bot added the release notes: none label Oct 1, 2025
3l1 requested a review from gggekov October 1, 2025 05:16
3l1 added a commit that referenced this pull request Oct 1, 2025 (same summary as above; Differential Revision: D83627934)
3l1 force-pushed the export-D83627934 branch from eed7fe8 to 3691a40 on October 1, 2025 19:45
facebook-github-bot pushed a commit that referenced this pull request Oct 1, 2025 (same summary as above)
facebook-github-bot pushed a commit that referenced this pull request Oct 2, 2025 (same summary; Reviewed By: digantdesai)
facebook-github-bot pushed a commit that referenced this pull request Oct 3, 2025 (same summary; Reviewed By: digantdesai)

The merged commit carried the same summary, plus:
bypass-github-export-checks
bypass-github-pytorch-ci-checks
bypass-github-executorch-ci-checks

Reviewed By: digantdesai

Differential Revision: D83627934
facebook-github-bot merged commit 822a711 into main Oct 3, 2025
133 checks passed
facebook-github-bot deleted the export-D83627934 branch October 3, 2025 03:59

Labels: CLA Signed, fb-exported, meta-exported, release notes: none
3 participants