Remove reduce_range as it is not relevant for HTP #14559
Conversation
Summary: To save GPU memory, the `bfloat16` dtype is commonly used for training LLMs. Currently, the quantizer skips nodes whose dtype is not float32. This change enables quantization of bf16 nodes as well. Differential Revision: D82866443
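As a rough illustration of the dtype gate described above (a minimal sketch only; the helper name `_is_quantizable_dtype` is hypothetical and the real quantizer code in ExecuTorch may be structured differently):

```python
import torch

# Hypothetical sketch of the dtype check described in the summary.
# Before the change, only float32 nodes were considered for quantization;
# after it, bf16 nodes are annotated as well.
def _is_quantizable_dtype(dtype: torch.dtype) -> bool:
    return dtype in (torch.float32, torch.bfloat16)
```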
Summary: `reduce_range=True` reduces the available bit width by 1 when quant_min and quant_max are not provided. It was originally intended for Intel `fbgemm` kernels, but I don't think this quantization setting is relevant for HTP. The PTQ quantization config also doesn't use it, so it is removed from all the QAT configs. This helped improve QAT model quality. Differential Revision: D82867843
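For reference, the bit-width effect of `reduce_range` can be reproduced with a stock `torch.ao.quantization` observer (an illustrative sketch, not this PR's actual QAT config):

```python
import torch
from torch.ao.quantization.observer import MinMaxObserver

# With reduce_range=True the observer halves the representable range,
# i.e. it drops one bit: [-128, 127] -> [-64, 63] for qint8.
full = MinMaxObserver(dtype=torch.qint8, reduce_range=False)
reduced = MinMaxObserver(dtype=torch.qint8, reduce_range=True)

print(full.quant_min, full.quant_max)        # -128 127
print(reduced.quant_min, reduced.quant_max)  # -64 63
```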
🔗 Helpful Links
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/14559
Note: Links to docs will display an error until the docs builds have been completed.
❌ 5 New Failures, 1 Pending, 5 Unrelated Failures as of commit 07e6a73 with merge base b3f3111.
NEW FAILURES - The following jobs have failed:
FLAKY - The following jobs failed but were likely due to flakiness present on trunk:
BROKEN TRUNK - The following jobs failed but were present on the merge base:
👉 Rebase onto the `viable/strict` branch to avoid these failures.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
@haowhsu-quic can you help review this PR?
Thank you.
Stamp on behalf of QCOM's team review.