
Precompute T1 offset for quantized conv2d NHWC in TIE kernel (#18960)#18960

Merged
meta-codesync[bot] merged 1 commit into pytorch:main from abeakkas:export-D100690813
Apr 22, 2026

Conversation

@abeakkas (Contributor) commented Apr 16, 2026

Summary:

Move the zero-point correction term `t1[oc] = -input_zero_point * sum(weight[oc])` from runtime (a malloc, a `compute_t1_..._DWH` call, and a free on every inference) to compile time via a new `PrecomputeForQuantizedConvPass`, mirroring the existing linear pass. The precomputed offset is threaded through a new optional `offset` parameter on `cadence::quantized_conv2d_nhwc.per_tensor` (defaulting to None for backward compatibility). The now-dead `compute_t1_..._DWH` functions are removed.

The TIE kernels assume the offset parameter is present, as in the quantized_linear case.
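A minimal NumPy sketch of why this fold is valid (this is illustrative only, not the actual ExecuTorch pass or kernel; names like `input_zp` and the 1x1-conv shapes are assumptions). Because the input zero point is constant, the per-output-channel term `-input_zp * sum(weight[oc])` can be computed once at compile time and added to the raw accumulator, instead of subtracting the zero point from every input element at runtime:

```python
import numpy as np

# Hypothetical shapes for a 1x1 NHWC conv: weight is (OC, KH, KW, IC).
rng = np.random.default_rng(0)
input_zp = 3
weight = rng.integers(-128, 127, size=(4, 1, 1, 8), dtype=np.int64)
x = rng.integers(-128, 127, size=(1, 1, 1, 8), dtype=np.int64)  # NHWC input

# Runtime formulation: subtract the zero point from every input element,
# then accumulate (what the per-inference compute_t1 path effectively did).
acc_runtime = np.einsum("nhwc,okjc->no", x - input_zp, weight)

# Compile-time formulation: raw dot product plus a precomputed
# per-output-channel offset t1[oc] = -input_zp * sum(weight[oc]).
t1 = -input_zp * weight.sum(axis=(1, 2, 3))  # shape (OC,)
acc_precomputed = np.einsum("nhwc,okjc->no", x, weight) + t1

# Both formulations produce identical integer accumulators.
assert np.array_equal(acc_runtime, acc_precomputed)
```

The equivalence follows from distributing the sum: `sum((x - zp) * w) = sum(x * w) - zp * sum(w)`, so only the raw product depends on the input and the rest is a constant per output channel.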

Differential Revision: D100690813

pytorch-bot Bot commented Apr 16, 2026

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/18960

Note: Links to docs will display an error until the docs builds have been completed.

This comment was automatically generated by Dr. CI and updates every 15 minutes.

meta-cla Bot added the CLA Signed label on Apr 16, 2026 (this label is managed by the Facebook bot; authors need to sign the CLA before a PR can be reviewed)

meta-codesync Bot commented Apr 16, 2026

@abeakkas has exported this pull request. If you are a Meta employee, you can view the originating Diff in D100690813.

@github-actions

This PR needs a release notes: label

If your change should be included in the release notes (i.e. would users of this library care about this change?), please use a label starting with release notes:. This helps us keep track and include your important work in the next release notes.

To add a label, you can comment to pytorchbot, for example
@pytorchbot label "release notes: none"

For more information, see
https://github.com/pytorch/pytorch/wiki/PyTorch-AutoLabel-Bot#why-categorize-for-release-notes-and-how-does-it-work.

meta-codesync Bot changed the title from "Precompute T1 offset for quantized conv2d NHWC in TIE kernel" to "Precompute T1 offset for quantized conv2d NHWC in TIE kernel (#18960)" on Apr 21, 2026
@abeakkas force-pushed the export-D100690813 branch from 90e7476 to a687c43 on April 21, 2026 at 20:51
@meta-codesync meta-codesync Bot merged commit 89600b3 into pytorch:main Apr 22, 2026
168 of 173 checks passed

Labels

CLA Signed · fb-exported · meta-exported


2 participants