[Observers] Small observers cleanup, add e2e quantization tests #1830
Conversation
Signed-off-by: Kyle Sayers <kylesayrs@gmail.com>
👋 Hi! Thank you for contributing to llm-compressor. Please add the ready label when the PR is ready for review. Note: this is required to complete the testing suite; please only add the label once the PR is code complete and local testing has been performed.
Summary of Changes
Hello @kylesayrs, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!
This pull request primarily focuses on strengthening the quantization observer mechanisms and expanding test coverage. It refines the logic for calculating min-max values and global parameters within observers, ensuring more robust and predictable behavior. Crucially, it adds extensive end-to-end tests for different quantization strategies, which will serve as a vital safeguard for future modifications and improvements to the quantization framework.
Highlights

- Observer Logic Refinements: Enhanced `calculate_updated_min_max` to consistently store running values and updated `calculate_gparam` to use `patch_attr` for safe, temporary state modification, preventing unintended side effects on running means.
- Dimension Handling in Observers: Improved `get_qparams_along_dim` to support multiple and negative dimensions, increasing flexibility for quantization parameter calculation.
- Comprehensive Quantization E2E Tests: Introduced new end-to-end tests for various quantization strategies (tensor, channel, group, block, token) covering both weight and activation quantization, significantly boosting confidence in the quantization pipeline (see the illustrative sketch below).
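As an illustration only, not the PR's actual test code: a minimal, strategy-parametrized round-trip test might look like the sketch below. The `fake_quant` helper, the two strategies shown, and the error bound are all assumptions made for this example.

```python
import pytest
import torch

def fake_quant(weight: torch.Tensor, strategy: str, num_bits: int = 8) -> torch.Tensor:
    # symmetric fake-quantization with a per-tensor or per-channel scale
    qmax = 2 ** (num_bits - 1) - 1
    if strategy == "tensor":
        scale = weight.abs().max() / qmax
    elif strategy == "channel":
        scale = weight.abs().amax(dim=1, keepdim=True) / qmax
    else:
        raise ValueError(f"unsupported strategy: {strategy}")
    qweight = torch.clamp(torch.round(weight / scale), -qmax - 1, qmax)
    return qweight * scale

@pytest.mark.parametrize("strategy", ["tensor", "channel"])
def test_fake_quant_roundtrip(strategy):
    weight = torch.randn(64, 64)
    dequant = fake_quant(weight, strategy)
    # round-trip error is bounded by half of the coarsest quantization step
    step = weight.abs().max() / (2 ** 7 - 1)
    assert (weight - dequant).abs().max() <= step / 2 + 1e-6
```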
Code Review

This pull request introduces valuable cleanups to the quantization observers and adds comprehensive end-to-end tests. The observer changes, such as making `calculate_gparam` side-effect-free and enhancing `get_qparams_along_dim`, are solid improvements. The new tests significantly increase confidence in the quantization logic for future refactoring. I've included a few suggestions to enhance the new test files by removing debugging print statements and commented-out code, which will improve overall maintainability. Great work on strengthening both the implementation and test coverage.
Signed-off-by: Kyle Sayers <kylesayrs@gmail.com>
Signed-off-by: Kyle Sayers <kylesayrs@gmail.com>
Do you think we should update the MSE observer as well, since reset was removed from calibration?
@shanjiaz global parameters are not supported by the MSE observer anyway, so it's not relevant without that support.
Looks good to me!
Signed-off-by: Kyle Sayers <kylesayrs@gmail.com>
```python
dim = set(dim)

# convert negative dims
dim = [d if d >= 0 else observed.ndim + d for d in dim]
```
Shouldn't the cast to set happen after this line?
Technically either is fine, since the argument type just needs to be an iterable. I'm purely matching the implementation on the base class for now.

Update `get_qparams_along_dim` to support multiple dims and negative dims

This actually results in a silent typing bug with token quantization, and is fixed in the base class implementation.

This change essentially duplicates the base class implementation. Future work could involve cleaning up the inheritance structure here.
I mean more that you might end up with duplicates in `dim` if you create this list and don't cast back to a set. E.g. if there are 3 dims and `dim = {1, 2, -1}`, then `dim = [1, 2, 2]` after this line.
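For illustration, a tiny runnable sketch of the pitfall, plus the fix of casting back to a set after converting negative dims:

```python
ndim = 3
dim = {1, 2, -1}  # -1 is an alias for dim 2

# converting negative dims after the set() cast can reintroduce duplicates
converted = [d if d >= 0 else ndim + d for d in dim]
print(sorted(converted))  # [1, 2, 2]

# casting back to a set after the conversion restores uniqueness
deduped = {d if d >= 0 else ndim + d for d in dim}
print(sorted(deduped))  # [1, 2]
```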
```python
updated_min_val, updated_max_val = self.calculate_updated_min_max(
    observed=observed
)
# patch to avoid affecting running means
```
This is because we are calculating the global scale, right? We don't want the `calculate_qparams` result to change based on this calculation?
Update `calculate_gparam` to restore original running values, rather than relying on resetting after calculation
Yes
Why is this preferable? If anything, this now seems more confusing
From a programming standpoint, this decouples `calculate_gparam` and `Observer.reset` (there's no way to footgun yourself by calling `calculate_gparam` and forgetting to call `Observer.reset`).

From a functionality standpoint, I think this fixes a bug where metrics would be updated twice (which has implications for running values), specifically when called from `calibrate_activations`. In the case of activations, we don't want to reset after each gparam calculation, since we still need those metrics to compute running values.
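For reference, a minimal sketch of the patch-and-restore idea. The `patch_attr` below is a stand-in written for this example (the project's own utility may differ), and the observer attribute names are illustrative:

```python
from contextlib import contextmanager

@contextmanager
def patch_attr(obj, name, value):
    # temporarily set obj.<name>, restoring the original value on exit
    original = getattr(obj, name)
    setattr(obj, name, value)
    try:
        yield
    finally:
        setattr(obj, name, original)

# inside the observer, the gparam path can then update min/max freely
# without mutating the running statistics seen by calculate_qparams:
#
#     with patch_attr(self, "min_val", self.min_val), \
#          patch_attr(self, "max_val", self.max_val):
#         updated_min, updated_max = self.calculate_updated_min_max(observed=observed)
```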
I think you're right about the 2nd point.
I don't know if I agree with the first point. This feels like a hack.
Case 1

Consider the case of `strategy="tensor_group"`, `dynamic=False`, and `averaging_constant != 1`.

On activation hook, `calibrate_activations` calls `call_observer` with `should_calculate_gparam=True`*. This causes `calculate_updated_min_max` to be called twice, which causes the running min/max to move faster than if no global param was calculated.

Case 2

Consider the case of `strategy="tensor_group"`, `dynamic="local"`, and `averaging_constant != 1`.

Originally, `calculate_gparam` would call `calculate_updated_min_max` and the running values would update (twice*). Now, the running values will not update.

\* Note that running values are updated even if `should_calculate_qparams=False`.

TLDR

So it seems that this change fixes a bug where running values are updated twice, but changes the behavior of `dynamic="local"` to calculate global parameters based on true values, not running values. I assumed that global parameters should be the true min/max of all values, not running values, but maybe @dsikka you think this shouldn't be the case?

I've reverted the change since it's not necessary for group quant, but we should definitely look into exactly the behavior we want for global scales (and scales in general; running means are slightly strange anyway and seem to be a vestige of QAT).
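For reference, a tiny sketch of why the double update matters, assuming a standard EMA-style running update with an `averaging_constant` in (0, 1):

```python
import torch

def ema_update(running, batch_value, averaging_constant):
    # standard exponential-moving-average step used for running min/max
    return running + averaging_constant * (batch_value - running)

running_max, batch_max = torch.tensor(1.0), torch.tensor(3.0)

once = ema_update(running_max, batch_max, 0.1)   # tensor(1.2000)
twice = ema_update(once, batch_max, 0.1)         # tensor(1.3800)
# updating twice per batch (once for the gparam, once for the qparams)
# pulls the running max toward the batch max roughly twice as fast
```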
```python
    should_calculate_gparam=True,
    should_calculate_qparams=False,
)
module.weight_observer.reset()
```
Because we only attach one observer, I’m fairly sure we’re resetting to prevent global scale metrics from impacting quant scale metrics
Update `calculate_gparam` to restore original running values, rather than relying on resetting after calculation

Reset is replaced by patching and restoring the metrics.
Co-authored-by: Brian Dellabetta <brian-dellabetta@users.noreply.github.com>
Signed-off-by: Kyle Sayers <kylesayrs@gmail.com>
Signed-off-by: Kyle Sayers <kylesayrs@gmail.com>
Purpose

Changes

- Update `calculate_updated_min_max` to store running values, even if the function computation shortcuts (helpful for testing)
- Update `get_qparams_along_dim` to support multiple dims and negative dims

Testing