[Quant] [PT2] Enable Decomposed quant per tensor/channel to accept bfloat16 input (pytorch#112225)

**Summary**
- PR 4 for enabling Int8-Mixed-BF16 PT2E PTQ Quantization with Inductor, pytorch#111640.
- Enable decomposed `quant_per_tensor` and `quant_per_channel` to accept bfloat16 input.

**Test Plan**
```
python -m pytest test_quantized_tensor.py -k test_decomposed_quantize_per_tensor_bfloat16_input
python -m pytest test_quantized_tensor.py -k test_decomposed_quantize_per_channel_bfloat16_input
```

Pull Request resolved: pytorch#112225
Approved by: https://github.com/jgong5, https://github.com/jerryzh168
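Conceptually, letting the decomposed quantize ops accept bfloat16 means the bf16 activation is upcast before the usual affine quantization math runs. The sketch below is a hypothetical reference in plain PyTorch, not the actual decomposed op from this commit; the helper name `quantize_per_tensor_ref` and the explicit upcast-to-fp32 step are assumptions made for illustration.

```python
# Hypothetical reference sketch (not the PyTorch implementation from this PR):
# how a per-tensor affine quantize could handle a bfloat16 input by upcasting
# to float32 before scaling, rounding, and clamping.
import torch

def quantize_per_tensor_ref(x: torch.Tensor, scale: float, zero_point: int,
                            quant_min: int, quant_max: int,
                            dtype: torch.dtype = torch.int8) -> torch.Tensor:
    # Assumed behavior: upcast reduced-precision floats so the math runs in fp32.
    if x.dtype in (torch.bfloat16, torch.float16):
        x = x.to(torch.float32)
    # Affine quantization: q = clamp(round(x / scale) + zero_point, qmin, qmax)
    q = torch.round(x / scale) + zero_point
    q = torch.clamp(q, quant_min, quant_max)
    return q.to(dtype)

# Example: quantize a bfloat16 activation tensor to int8.
x_bf16 = torch.randn(4, 8, dtype=torch.bfloat16)
q = quantize_per_tensor_ref(x_bf16, scale=0.05, zero_point=0,
                            quant_min=-128, quant_max=127)
print(q.dtype)  # torch.int8
```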
1 parent 2f47b8f · commit 270c356
Showing 3 changed files with 42 additions and 2 deletions.