
Conversation

@kylesayrs (Collaborator) commented Sep 17, 2025

Purpose

  • Add tests for quantization in order to increase confidence in future quantization refactors
    • Each of the tested values has been checked and looks correct

Changes

  • Observer changes
    • Update calculate_updated_min_max to store running values even if the function's computation short-circuits (helpful for testing)
    • Update get_qparams_along_dim to support multiple dims and negative dims (see the sketch after this list)
      • Doing so actually fixes a silent typing bug with token quantization; the fix matches the base class implementation
      • This change essentially duplicates the base class implementation; future work could involve cleaning up the inheritance structure here
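
A minimal sketch of the intended dim handling, for illustration only (normalize_dims and the amin/amax reduction are assumptions based on this description, not the actual observer code):

import torch

def normalize_dims(dim, ndim):
    # accept a single int or any iterable of ints
    if isinstance(dim, int):
        dim = [dim]
    # convert negative dims (e.g. -1 -> ndim - 1) and deduplicate
    return set(d if d >= 0 else ndim + d for d in dim)

# qparams are computed along `dim` by reducing over all other dims
observed = torch.randn(2, 4, 8)
keep = normalize_dims({0, -1}, observed.ndim)  # {0, 2}
reduce_dims = tuple(i for i in range(observed.ndim) if i not in keep)
min_vals = observed.amin(dim=reduce_dims, keepdim=True)
max_vals = observed.amax(dim=reduce_dims, keepdim=True)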

Testing

  • Added tests pass

Signed-off-by: Kyle Sayers <kylesayrs@gmail.com>

👋 Hi! Thank you for contributing to llm-compressor. Please add the ready label when the PR is ready for review.

Note: this is required to complete the testing suite; please only add the label once the PR is code complete and local testing has been performed.

@gemini-code-assist bot left a comment (Contributor)

Summary of Changes

Hello @kylesayrs, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request primarily focuses on strengthening the quantization observer mechanisms and expanding test coverage. It refines the logic for calculating min-max values and global parameters within observers, ensuring more robust and predictable behavior. Crucially, it adds extensive end-to-end tests for different quantization strategies, which will serve as a vital safeguard for future modifications and improvements to the quantization framework.

Highlights

  • Observer Logic Refinements: Enhanced calculate_updated_min_max to consistently store running values and updated calculate_gparam to use patch_attr for safe, temporary state modification, preventing unintended side effects on running means.
  • Dimension Handling in Observers: Improved get_qparams_along_dim to support multiple and negative dimensions, increasing flexibility for quantization parameter calculation.
  • Comprehensive Quantization E2E Tests: Introduced new end-to-end tests for various quantization strategies (tensor, channel, group, block, token) covering both weight and activation quantization, significantly boosting confidence in the quantization pipeline.

@gemini-code-assist bot left a comment (Contributor)

Code Review

This pull request introduces valuable cleanups to the quantization observers and adds comprehensive end-to-end tests. The observer changes, such as making calculate_gparam side-effect-free and enhancing get_qparams_along_dim, are solid improvements. The new tests significantly increase confidence in the quantization logic for future refactoring. I've included a few suggestions to enhance the new test files by removing debugging print statements and commented-out code, which will improve overall maintainability. Great work on strengthening both the implementation and test coverage.

Signed-off-by: Kyle Sayers <kylesayrs@gmail.com>
@shanjiaz left a comment (Collaborator)

Do you think we should update the MSE observer as well, since reset was removed from calibration?

@kylesayrs (Collaborator, Author)

@shanjiaz global parameters are not supported by the MSE observer anyway, so it's not relevant without that support.

shanjiaz previously approved these changes Sep 17, 2025
@shanjiaz left a comment (Collaborator)

Looks good to me!

Signed-off-by: Kyle Sayers <kylesayrs@gmail.com>
@kylesayrs changed the title [Observers] Small observers cleanup, add e2e quantization tests → [Observers] Small observers cleanup, add e2e quantization tests, increase test file limit on Sep 17, 2025
dim = set(dim)

# convert negative dims
dim = [d if d >= 0 else observed.ndim + d for d in dim]
Collaborator
Shouldn't the cast to set happen after this line?

Collaborator Author
Technically either is fine, since the argument type just needs to be an iterable. I'm purely matching the base class implementation for now.

Update get_qparams_along_dim to support multiple dims and negative dims
Doing so actually fixes a silent typing bug with token quantization; the fix matches the base class implementation
This change essentially duplicates the base class implementation; future work could involve cleaning up the inheritance structure here

Collaborator
I mean more that you might end up with duplicates in dim if you create this list and don't cast back to a set.

e.g. if there are 3 dims and dim={1,2,-1}, then dim=[1,2,2] after this line.
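
A tiny illustration of that concern (hypothetical values):

# for a 3-D tensor (ndim == 3), dim={1, 2, -1} normalizes to a duplicate
dim = {1, 2, -1}
dim = [d if d >= 0 else 3 + d for d in dim]  # contains 2 twice, e.g. [1, 2, 2]
# casting back to a set after the conversion would deduplicate
dim = set(dim)                               # {1, 2}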

updated_min_val, updated_max_val = self.calculate_updated_min_max(
    observed=observed
)
# patch to avoid affecting running means
Collaborator
This is because we are calculating the global scale, right? We don't want the calculate_qparams result to change based on this calculation?

Collaborator Author
Update calculate_gparam to restore original running values, rather than relying on resetting after calculation

Yes

Collaborator
Why is this preferable? If anything, this now seems more confusing

Collaborator Author
From a programming standpoint, this decouples calculate_gparam and Observer.reset (there's no way to footgun yourself by calling calculate_gparam and forgetting to call Observer.reset).

From a functionality standpoint, I think this fixes a bug where metrics would be updated twice (which has implications for running values), specifically when called from calibrate_activations. In the case of activations, we don't want to reset after each gparam calculation, since we still need those metrics to compute running values.
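
A minimal sketch of the patch-and-restore pattern described here, assuming a generic patch_attr context manager (the project's actual helper may differ):

from contextlib import contextmanager

@contextmanager
def patch_attr(obj, attr, value):
    # temporarily set obj.<attr>, always restoring the original on exit
    original = getattr(obj, attr)
    setattr(obj, attr, value)
    try:
        yield
    finally:
        setattr(obj, attr, original)

# illustrative use inside calculate_gparam: compute a one-off min/max
# without perturbing the observer's running statistics
# with patch_attr(self, "min_val", {}), patch_attr(self, "max_val", {}):
#     updated_min_val, updated_max_val = self.calculate_updated_min_max(observed=observed)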

Collaborator
I think you're right about the 2nd point.

I don't know if I agree with the first point. This feels like a hack.

Collaborator Author
Case 1

Consider the case of strategy="tensor_group", dynamic=False and averaging_constant != 1.

On activation hook, calibrate_activations calls call_observer with should_calculate_gparam=True*. This causes calculate_updated_min_max to be called twice, which causes the running min/max to move faster than if no global param was calculated.

Case 2

Consider the case of strategy="tensor_group", dynamic="local" and averaging_constant != 1.

Originally, calculate_gparam would call calculate_updated_min_max and the running values would update (twice*). Now, the running values will not update.

* Note that running values are updated, even if should_calculate_qparams=False

TLDR

So it seems that this change fixes a bug where running values are updated twice, but changes the behavior of dynamic="local" to calculate global parameters based on true values, not running values. I assumed that global parameters should be the true min/max of all values, not running values, but @dsikka, maybe you think this shouldn't be the case?

I've reverted the change since it's not necessary for group quant, but we should definitely look into exactly the behavior we want for global scales (and scales in general; running means are slightly strange anyway and seem to be a vestige of QAT).
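
To make the double-update concern concrete, here is a toy sketch of a running (exponential moving average) max update with averaging_constant != 1 (names and values are illustrative):

def update_running(running, observed, averaging_constant):
    # standard EMA step: move `running` toward `observed`
    return running + averaging_constant * (observed - running)

running_max = 0.0
# if one calibration step updates the metrics twice (once for the global
# scale, once for qparams), the running value moves twice as fast:
running_max = update_running(running_max, 10.0, 0.1)  # 1.0
running_max = update_running(running_max, 10.0, 0.1)  # 1.9, not 1.0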

should_calculate_gparam=True,
should_calculate_qparams=False,
)
module.weight_observer.reset()
Collaborator
Because we only attach one observer, I’m fairly sure we’re resetting to prevent global scale metrics from impacting quant scale metrics

Collaborator Author
Update calculate_gparam to restore original running values, rather than relying on resetting after calculation

Reset is replaced by patching and restoring the metrics

kylesayrs and others added 2 commits September 18, 2025 10:45
Co-authored-by: Brian Dellabetta <brian-dellabetta@users.noreply.github.com>
Signed-off-by: Kyle Sayers <kylesayrs@gmail.com>
@kylesayrs changed the title [Observers] Small observers cleanup, add e2e quantization tests, increase test file limit → [Observers] Small observers cleanup, add e2e quantization tests on Sep 18, 2025
Signed-off-by: Kyle Sayers <kylesayrs@gmail.com>