
Conversation

@metascroy (Contributor) commented Sep 24, 2025

This adds a new "torchao" backend for pre-quantized checkpoints.

Pre-quantized checkpoints can be lowered to a backend (e.g., XNNPACK) by specifying "-X" in etLLM.

With this PR, we can now lower pre-quantized checkpoints to torchao lowbit kernels by specifying "--torchao_kernels" in the export script instead of "-X". Note that this runs both linear and tied-embedding ops with torchao kernels.

If you want to run linear with XNNPACK but only tied embedding with torchao, use "--torchao_kernels_tied_embedding" together with "-X".

New CI tests are added for the flow.
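
For illustration, here is a minimal sketch of the two setups described above, using the llm_config.backend.torchao fields that appear later in this PR; the XNNPACK field name is an assumption and may not match the actual export script.

    # Sketch only; field names outside backend.torchao are assumptions.

    # Case 1: --torchao_kernels
    # Both quantized linear and tied embedding are lowered to torchao lowbit kernels.
    llm_config.backend.torchao.convert_linear = True
    llm_config.backend.torchao.convert_tied_embedding = True

    # Case 2: -X together with --torchao_kernels_tied_embedding
    # Linear is delegated to XNNPACK; only tied embedding goes to torchao.
    llm_config.backend.xnnpack.enabled = True  # assumed field name for the -X flag
    llm_config.backend.torchao.convert_linear = False
    llm_config.backend.torchao.convert_tied_embedding = True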


pytorch-bot bot commented Sep 24, 2025

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/14545

Note: Links to docs will display an error until the docs builds have been completed.

❌ 3 New Failures, 2 Unrelated Failures

As of commit 96dc88e with merge base 9283b4e:

NEW FAILURES - The following jobs have failed:

BROKEN TRUNK - The following jobs failed but were present on the merge base:

👉 Rebase onto the `viable/strict` branch to avoid these failures

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@meta-cla bot added the CLA Signed label Sep 24, 2025

This PR needs a release notes: label

If your change should be included in the release notes (i.e. would users of this library care about this change?), please use a label starting with release notes:. This helps us keep track and include your important work in the next release notes.

To add a label, you can comment to pytorchbot, for example
@pytorchbot label "release notes: none"

For more information, see
https://github.com/pytorch/pytorch/wiki/PyTorch-AutoLabel-Bot#why-categorize-for-release-notes-and-how-does-it-work.

Comment on lines 461 to 462
convert_linear: bool = False
convert_tied_embedding: bool = False
@jerryzh168 (Contributor) commented Sep 29, 2025

nit: these feel like convert functions; maybe just use use_torchao_kernels_linear and use_torchao_kernels_tied_embedding?

@metascroy (Contributor, Author) replied

Changed
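
For reference, a minimal sketch of what the renamed fields could look like after this change; the dataclass name and surrounding structure are assumed for illustration, not taken from the merged code.

    from dataclasses import dataclass

    @dataclass
    class TorchAoKernelsConfig:  # class name assumed for illustration
        use_torchao_kernels_linear: bool = False
        use_torchao_kernels_tied_embedding: bool = False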

    else:
        # Otherwise, only enable the conversions that are specified
        llm_config.backend.torchao.convert_linear = getattr(
            args, "torchao_kernels_linear", False
Contributor

nit: can match the name here as well: use_torchao_kernels_linear

            args, "torchao_kernels_linear", False
        )
        llm_config.backend.torchao.convert_tied_embedding = getattr(
            args, "torchao_kernels_tied_embedding", False
Contributor

and here
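
A sketch of how the mapping might read once the arg names match the config fields, per the suggestion above; the names here are assumed, not taken from the merged code.

    llm_config.backend.torchao.use_torchao_kernels_linear = getattr(
        args, "use_torchao_kernels_linear", False
    )
    llm_config.backend.torchao.use_torchao_kernels_tied_embedding = getattr(
        args, "use_torchao_kernels_tied_embedding", False
    )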

@metascroy merged commit 5d29a7d into main Sep 30, 2025
417 of 423 checks passed
@metascroy deleted the add-torchao-convert branch September 30, 2025 17:09
Comment on lines +420 to +424
    parser.add_argument(
        "--use-torchao-kernels",
        action="store_true",
        help="Delegate tied-embedding and quantized linear ops to torchao kernels",
    )
Contributor

why do we need this when it's combining the two args below?
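
For context, the PR description says the umbrella flag enables both ops. One possible expansion into the two fine-grained options, sketched under that assumption; only the else branch appears in the excerpt above, the if branch is illustrative.

    if args.use_torchao_kernels:  # umbrella flag: enable both conversions (assumed)
        llm_config.backend.torchao.convert_linear = True
        llm_config.backend.torchao.convert_tied_embedding = True
    else:
        # Otherwise, only enable the conversions that are specified
        llm_config.backend.torchao.convert_linear = getattr(
            args, "torchao_kernels_linear", False
        )
        llm_config.backend.torchao.convert_tied_embedding = getattr(
            args, "torchao_kernels_tied_embedding", False
        )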

"""
Configures the torchao-kernels backend.
"""

Contributor

Can we follow the other backend config examples and use enabled?
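
One possible reading of this suggestion, sketched under the assumption that other backend configs expose an enabled attribute; this is illustrative, not the merged implementation.

    from dataclasses import dataclass

    @dataclass
    class TorchAoKernelsConfig:
        """
        Configures the torchao-kernels backend.
        """

        use_torchao_kernels_linear: bool = False
        use_torchao_kernels_tied_embedding: bool = False

        @property
        def enabled(self) -> bool:
            # The backend counts as enabled if either conversion is requested.
            return self.use_torchao_kernels_linear or self.use_torchao_kernels_tied_embedding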

Labels: ciflow/trunk, CLA Signed