
Conversation

DrJessop
Contributor

@DrJessop DrJessop commented Sep 8, 2025

Summary: As discussed offline, we don't need a non-per-tensor variant of quantized_add, so we are removing it from the reference implementations.

Differential Revision: D81950579
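
For context, "per-tensor" here means a single scale and zero point for the whole tensor, as opposed to per-channel (or otherwise non-per-tensor) quantization parameters. Below is a minimal sketch of what a per-tensor quantized add reference implementation typically looks like; the function name, signature, and argument order are illustrative assumptions, not the actual executorch operator API.

```python
# Minimal sketch of a per-tensor quantized add reference implementation.
# All names here (quantized_add_per_tensor, parameter order) are illustrative
# assumptions for context; they are not the actual executorch operator API.
import torch


def quantized_add_per_tensor(
    x: torch.Tensor, x_scale: float, x_zero_point: int,
    y: torch.Tensor, y_scale: float, y_zero_point: int,
    out_scale: float, out_zero_point: int,
    dtype: torch.dtype = torch.int8,
) -> torch.Tensor:
    # Per-tensor: one scalar scale/zero-point per input, not one per channel.
    # Dequantize each input to float.
    x_fp = (x.to(torch.float32) - x_zero_point) * x_scale
    y_fp = (y.to(torch.float32) - y_zero_point) * y_scale
    # Add in float, then requantize with the output's scale/zero-point.
    out_q = torch.round((x_fp + y_fp) / out_scale) + out_zero_point
    info = torch.iinfo(dtype)
    return out_q.clamp(info.min, info.max).to(dtype)
```

A non-per-tensor variant would instead take scale/zero-point tensors broadcast along a channel axis; that is the kind of variant this PR drops from the reference implementations.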


pytorch-bot bot commented Sep 8, 2025

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/14093

Note: Links to docs will display an error until the docs builds have been completed.

⏳ No Failures, 92 Pending

As of commit a526b61 with merge base 0b4fe31:
💚 Looks good so far! There are no failures yet. 💚

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@meta-cla bot added the CLA Signed label Sep 8, 2025
@facebook-github-bot
Contributor

This pull request was exported from Phabricator. Differential Revision: D81950579


github-actions bot commented Sep 8, 2025

This PR needs a release notes: label

If your change should be included in the release notes (i.e. would users of this library care about this change?), please use a label starting with release notes:. This helps us keep track and include your important work in the next release notes.

To add a label, comment on the PR with a pytorchbot command, for example
@pytorchbot label "release notes: none"

For more information, see
https://github.com/pytorch/pytorch/wiki/PyTorch-AutoLabel-Bot#why-categorize-for-release-notes-and-how-does-it-work.

DrJessop pushed a commit to DrJessop/executorch that referenced this pull request Sep 8, 2025
…nt (pytorch#14093)

Summary:

As discussed offline, we don't need a non-per-tensor variant of quantized_add, so removing from ref implementations.

Differential Revision: D81950579
DrJessop pushed a commit to DrJessop/executorch that referenced this pull request Sep 9, 2025
…nt (pytorch#14093)

Summary:
Pull Request resolved: pytorch#14093

As discussed offline, we don't need a non-per-tensor variant of quantized_add, so removing from ref implementations.

Differential Revision: D81950579
DrJessop pushed a commit to DrJessop/executorch that referenced this pull request Sep 11, 2025
…nt (pytorch#14093)

Summary:

As discussed offline, we don't need a non-per-tensor variant of quantized_add, so removing from ref implementations.

Reviewed By: hsharma35

Differential Revision: D81950579
@DrJessop force-pushed the export-D81950579 branch 2 times, most recently from 57b1705 to be6c7e8 (September 11, 2025 16:57)
@DrJessop force-pushed the export-D81950579 branch 2 times, most recently from 679d685 to d9c1ad4 (September 13, 2025 23:46)
@facebook-github-bot
Contributor

@DrJessop has exported this pull request. If you are a Meta employee, you can view the originating diff in D81950579.


@facebook-github-bot merged commit 79c8e49 into pytorch:main Sep 14, 2025
124 of 127 checks passed
StrycekSimon pushed a commit to nxp-upstream/executorch that referenced this pull request Sep 23, 2025
Differential Revision: D81950579

Pull Request resolved: pytorch#14093
Labels: CLA Signed, fb-exported, meta-exported