Allow non-tensor kwargs in prepare_pt2e #3642
Merged
jerryzh168 merged 5 commits into pytorch:main, Jan 23, 2026
Conversation
See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/ao/3642
❌ 1 New Failure as of commit 6075f90 with merge base 23143f5. The following job has failed:
jerryzh168 reviewed Jan 15, 2026
Some ops were already permitted, and what the assert was actually trying to guard against was silently skipping quantization of kwargs, which is only relevant for Tensor kwargs. There have been at least two discussions on this issue (pytorch#2146, pytorch/pytorch#146621), and it has caused a follow-up bug in ExecuTorch: pytorch/executorch#16541

Signed-off-by: Erik Lundell <erik.lundell@arm.com>
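The distinction the commit message draws — only Tensor kwargs can be silently left unquantized — can be sketched in plain Python. This is a hedged illustration with a stand-in `FakeTensor` class and a hypothetical helper name; torchao's real pass operates on FX graph nodes, not raw values.

```python
# Stand-in for torch.Tensor in this sketch.
class FakeTensor:
    pass

def tensor_kwargs_needing_observers(kwargs):
    """Collect Tensor kwargs, which must not be silently skipped by
    observer insertion. Non-tensor kwargs (device, pin_memory,
    memory_format, ...) neither affect nor are affected by
    quantization, so they simply pass through."""
    needs_observer = {}
    for name, value in kwargs.items():
        if isinstance(value, FakeTensor):
            needs_observer[name] = value
    return needs_observer
```

The point is that the assert's real job is done by the `isinstance` check: once tensor kwargs are guaranteed to be collected, rejecting every non-tensor kwarg buys nothing.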
609bced to 2b2571c
jerryzh168 reviewed Jan 16, 2026
Signed-off-by: Erik Lundell <erik.lundell@arm.com>
jerryzh168 reviewed Jan 20, 2026
- Sequence triggered for strings, causing infinite recursion, since each element of a length-1 string is itself a length-1 string.
- Add a case for dict kwargs.

Signed-off-by: Erik Lundell <erik.lundell@arm.com>
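Both fixes in this commit can be reproduced in isolation. Below is a minimal sketch (function names are hypothetical, not the ones in the patch): `str` is itself a `collections.abc.Sequence`, so a naive recursive flatten never bottoms out on strings, while checking `str` first and adding a `dict` branch terminates correctly.

```python
from collections.abc import Sequence

def flatten_buggy(value):
    # BUG: str is a Sequence, and each element of a length-1 string is
    # itself a length-1 string, so this recurses forever on strings.
    if isinstance(value, Sequence):
        return [v for item in value for v in flatten_buggy(item)]
    return [value]

def flatten_fixed(value):
    # Fix: treat strings as atoms, and handle dict kwargs explicitly.
    if isinstance(value, dict):
        return [v for item in value.values() for v in flatten_fixed(item)]
    if isinstance(value, Sequence) and not isinstance(value, str):
        return [v for item in value for v in flatten_fixed(item)]
    return [value]
```

Calling `flatten_buggy("ab")` raises `RecursionError`; `flatten_fixed` handles the same input, plus nested dicts, without recursing into string characters.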
Contributor
@JacobSzwejbka has imported this pull request. If you are a Meta employee, you can view this in D91152284.
Remove unused import of Sequence from collections.abc
Attempting to quantize empty_like caused flaky failures due to calculating qparams over data containing only zeros.
Contributor (Author)
That's an annoying thing to miss, fixed it. Also addressed an issue causing flakiness. Slowly getting there :)
JacobSzwejbka pushed a commit to pytorch/executorch that referenced this pull request on Jan 26, 2026
To include pytorch/ao#3642. cc @freddan80 @per @zingo @oscarandersson8218 @digantdesai

Signed-off-by: Erik Lundell <erik.lundell@arm.com>
Add a list of permitted kwargs in _maybe_insert_input_observers_for_node: device, pin_memory, and memory_format. Some ops are already permitted, showing that the assert is not a hard limit. The kwargs in question should not affect or be affected by quantization, and there is no clear reason why all kwargs should be disallowed.

There have been at least two discussions on this issue (#2146, pytorch/pytorch#146621), and it has caused a follow-up bug in ExecuTorch: pytorch/executorch#16541
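The allowlist check the description proposes can be sketched as follows; the set contents come from the PR description, while the function name and error text are illustrative, not the exact torchao code.

```python
# Non-tensor kwargs that neither affect nor are affected by quantization.
PERMITTED_NON_TENSOR_KWARGS = {"device", "pin_memory", "memory_format"}

def check_non_tensor_kwargs(kwarg_names):
    """Raise if a non-tensor kwarg outside the allowlist shows up,
    mirroring the softened assert discussed above. Tensor kwargs are
    handled separately by observer insertion and never reach here."""
    unexpected = set(kwarg_names) - PERMITTED_NON_TENSOR_KWARGS
    if unexpected:
        raise AssertionError(
            f"Unsupported non-tensor kwargs: {sorted(unexpected)}"
        )
```

An explicit allowlist keeps the original intent of the assert (no kwarg slips through unreviewed) while unblocking ops like empty_like that carry layout-only kwargs.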