Align `_choose_qparams_affine` with `_choose_scale_float8` behavior (#3447)
Conversation
🔗 Helpful Links: 🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/ao/3447

Note: Links to docs will display an error until the docs builds have been completed.

✅ No Failures as of commit 8a7b789 with merge base aa25287. This comment was automatically generated by Dr. CI and updates every 15 minutes.
Thanks, I think it's a good start, we can remove …
I see 25 integration tests failed due to backward compatibility issues with the …
It's expected. I think maybe just don't change the default for now, but turn keepdim to True in these tests one by one to make sure these tests are fixed; making sure all the callsites are fixed before making the switch would be better.
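For context, the `keepdim` flag being discussed behaves like the standard reduction keyword in PyTorch: with `keepdim=True` the reduced dimensions are retained as size 1, so the result keeps the same rank as the input. A minimal illustration using plain `torch` reductions (not the torchao helpers themselves):

```python
import torch

x = torch.randn(4, 8)

# keepdim=False (the current default in _choose_qparams_affine):
# the reduced dim is dropped, so the rank shrinks.
mins_drop = torch.amin(x, dim=1, keepdim=False)  # shape: (4,)

# keepdim=True (the proposed default, matching _choose_scale_float8):
# the reduced dim is kept as size 1, so the rank is preserved.
mins_keep = torch.amin(x, dim=1, keepdim=True)   # shape: (4, 1)

print(mins_drop.shape, mins_keep.shape)
```

The rank-preserving form broadcasts directly against `x` without an explicit `unsqueeze`, which is the downstream consistency the PR description refers to.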
Really sorry, I had a super busy schedule.
@jerryzh168 could you please run the CI? I've made the required changes!
@jerryzh168 has imported this pull request. If you are a Meta employee, you can view this in D89579365. |
@jerryzh168 could you please let me know what further steps to take?
@AryanBagade https://github.com/pytorch/ao/actions/runs/20121850704/job/57745005031?pr=3447 is failing, but it's likely fixed in main already; can you rebase on the main branch? I have imported this and confirmed that the change doesn't break any of the internal tests, so we should be able to merge after you rebase.
> and `zero_point_domain`
>
> Note:
> keepdim defaults to True to align with _choose_scale_float8 behavior. This ensures …
right now it defaults to False, so we should change the comment here
Changes keepdim default from False to True in _choose_qparams_affine to match _choose_scale_float8 behavior. This ensures scale/zero_point maintain the same rank as input tensor, making downstream handling more consistent. Fixes pytorch#3324
Signed-off-by: Aryan Bagade <aryan@aryanbagade.com>
Force-pushed a56f742 to 8a7b789
@jerryzh168 Rebased on main and fixed the docstring comment as requested. Ready for re-import and merge!
> and `zero_point_domain`
>
> Note:
> Set keepdim=True to align with _choose_scale_float8 behavior. This ensures …
please change this to False, thanks
Oh sorry, this seems OK; it's not talking about the default.
Changes keepdim default from False to True in _choose_qparams_affine to match _choose_scale_float8 behavior. This ensures scale/zero_point maintain the same rank as input tensor, making downstream handling more consistent.
Part 1 of fixing #3324
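A rough sketch of the shape effect described above, assuming a simplified symmetric scale computation over a `block_size` (this is illustrative only, not the actual torchao implementation; `scale_from` is a hypothetical helper name):

```python
import torch

def scale_from(input: torch.Tensor, block_size, keepdim: bool = True) -> torch.Tensor:
    # Hypothetical simplified symmetric scale computation: reduce over the
    # dims fully covered by block_size; keepdim controls whether the scale
    # keeps the same rank as the input tensor.
    reduction_dims = [i for i, b in enumerate(block_size) if b == input.shape[i]]
    amax = torch.amax(input.abs(), dim=reduction_dims, keepdim=keepdim)
    return amax / 127.0  # int8 symmetric range, for illustration only

x = torch.randn(4, 8)
print(scale_from(x, block_size=(1, 8), keepdim=True).shape)   # (4, 1)
print(scale_from(x, block_size=(1, 8), keepdim=False).shape)  # (4,)
```

With `keepdim=True` the returned scale has the same rank as `x`, so downstream code can divide by it directly via broadcasting, which is the consistency with `_choose_scale_float8` that this PR targets.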
Changes

**Core Changes (`torchao/quantization/quant_primitives.py`)**
- `keepdim: bool = False` → `keepdim: bool = True` in both `choose_qparams_affine` (line 1220) and `_choose_qparams_affine` (line 1526), aligning with `_choose_scale_float8` behavior
- Save `original_input_size` before reshaping to compute the correct output shape, as in `_choose_scale_float8`

**Workflow Simplification (`torchao/quantization/quantize_/workflows/intx/intx_unpacked_to_int8_tensor.py`)**

**Test Updates (`test/quantization/test_quant_primitives.py`)**
- `test_choose_qparams` tests now pass