
Conversation

@lisjin (Contributor) commented Sep 16, 2025

Follow-up to QuantOptimizer.torchao_convert added in #2947:

  1. Use the "_default" key for the dict passed to ModuleFqnToConfig instead of listing all linear weight names.
  2. Fix unwanted quantization of tied weights when using "_default". Some HF models define a _tied_weights_keys attribute (e.g., in Qwen3Model).
    i. Suppose lm_head.weight is tied to model.embed_tokens.weight. Even though they are tied, both names show up in model.named_parameters(), so HF's quantization_config will apply the "_default" config to the tied LM head, which may differ from the embedding's config. @metascroy This is a major issue for our mixed quantization of embeddings and the LM head (see the sketch after this list).
  3. @jerryzh168 Remove StretchedAffineQuantizedTensor to align with deprecated version=1 in torchao configs.
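
A rough sketch of the fix in item 2, assuming torchao's ModuleFqnToConfig accepts a "_default" entry and treats a None value as "skip this module" (names and behavior may vary across torchao versions; build_module_fqn_to_config is a hypothetical helper, not the actual QuantOptimizer.torchao_convert code):

```python
from torchao.quantization import Int8WeightOnlyConfig, ModuleFqnToConfig


def build_module_fqn_to_config(model, default_config=None):
    """Apply one "_default" config to linear weights while skipping tied modules."""
    default_config = default_config or Int8WeightOnlyConfig()
    cfg = {"_default": default_config}
    # HF models list tied parameter names in `_tied_weights_keys`,
    # e.g. ["lm_head.weight"] when lm_head.weight is tied to model.embed_tokens.weight.
    for weight_name in getattr(model, "_tied_weights_keys", None) or []:
        module_fqn = weight_name.rsplit(".", 1)[0]  # "lm_head.weight" -> "lm_head"
        cfg[module_fqn] = None  # assumption: None means "do not quantize this module"
    return ModuleFqnToConfig(cfg)
```

With an override like this, the tied lm_head keeps whatever treatment the embedding path applies instead of being re-quantized by the "_default" linear config.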

@lisjin lisjin added the topic: bug fix label Sep 16, 2025

pytorch-bot bot commented Sep 16, 2025

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/ao/3015

Note: Links to docs will display an error until the docs builds have been completed.

✅ No Failures

As of commit 3516435 with merge base 9a770a5:
💚 Looks good so far! There are no failures yet. 💚

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@meta-cla meta-cla bot added the CLA Signed label Sep 16, 2025
@metascroy (Contributor)

> This is a problem since HF's quantization_config will try to apply the "_default" config to the tied LM head, which may differ from the embedding's

Can you say more here with an example code snippet?

@lisjin (Contributor, Author) commented Sep 17, 2025

> This is a problem since HF's quantization_config will try to apply the "_default" config to the tied LM head, which may differ from the embedding's
>
> Can you say more here with an example code snippet?

@metascroy Sure, here is where HF applies quantize_ given a "_default" key. Their filter_fn doesn't check for tied parameters, so it would apply the default config to an LM head tied to the embeddings.
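
For illustration, a tie-aware filter could look roughly like the sketch below (make_tie_aware_filter is a made-up name; torchao's quantize_ calls filter_fn with the module and its FQN):

```python
import torch.nn as nn


def make_tie_aware_filter(model):
    # FQNs of modules whose weights are tied to another parameter,
    # e.g. {"lm_head"} when lm_head.weight is tied to the embedding weight.
    tied_fqns = {
        name.rsplit(".", 1)[0]
        for name in (getattr(model, "_tied_weights_keys", None) or [])
    }

    def filter_fn(module: nn.Module, fqn: str) -> bool:
        # Quantize only Linear modules whose weights are not tied.
        return isinstance(module, nn.Linear) and fqn not in tied_fqns

    return filter_fn


# quantize_(model, default_config, filter_fn=make_tie_aware_filter(model))
```

Without a tie check like this, the "_default" config also lands on the tied LM head.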

@lisjin lisjin merged commit 122b307 into main Sep 18, 2025
20 checks passed
@lisjin lisjin deleted the lvj/fix-parq-convert branch September 18, 2025 00:01