
Conversation

@SS-JIA (Contributor) commented Sep 16, 2025

Stack from ghstack (oldest at bottom):

Context

Add shaders to quantize a floating-point conv2d input tensor to a packed int8 memory layout, and to dequantize an int8 conv2d output tensor back to a floating-point representation.

Hooking these shaders up to the export logic will be handled in a follow-up diff.

Differential Revision: [D82542335](https://our.internmc.facebook.com/intern/diff/D82542335/)
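
The shader code itself is not shown in this conversation view, but the operation described above is most likely the standard affine int8 quantize/dequantize. The snippet below is a minimal Python/PyTorch sketch of that math for reference only, assuming per-tensor quantization parameters; the `scale` and `zero_point` values are illustrative placeholders, and the texel packing used by the actual shaders is not reproduced here.

```python
import torch

def quantize_to_int8(x: torch.Tensor, scale: float, zero_point: int) -> torch.Tensor:
    """Affine quantization: q = clamp(round(x / scale) + zero_point, -128, 127)."""
    q = torch.round(x / scale) + zero_point
    return torch.clamp(q, -128, 127).to(torch.int8)

def dequantize_from_int8(q: torch.Tensor, scale: float, zero_point: int) -> torch.Tensor:
    """Affine dequantization: x ~= (q - zero_point) * scale."""
    return (q.to(torch.float32) - zero_point) * scale

# Illustrative conv2d input in NCHW layout with assumed quantization parameters.
x = torch.randn(1, 8, 16, 16)
scale, zero_point = 0.05, 0
q = quantize_to_int8(x, scale, zero_point)
x_hat = dequantize_from_int8(q, scale, zero_point)
# Round-trip error is bounded by scale / 2 for values within the int8 range.
print(torch.max(torch.abs(x - x_hat)))
```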

pytorch-bot (bot) commented Sep 16, 2025

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/14330

Note: Links to docs will display an error until the docs builds have been completed.

❌ 3 New Failures, 1 Cancelled Job, 3 Unrelated Failures

As of commit ffec699 with merge base c18abc8:

NEW FAILURES - The following jobs have failed:

CANCELLED JOB - The following job was cancelled. Please retry:

FLAKY - The following job failed, likely due to flakiness present on trunk:

BROKEN TRUNK - The following jobs failed, but the failures were already present on the merge base:

👉 Rebase onto the `viable/strict` branch to avoid these failures

This comment was automatically generated by Dr. CI and updates every 15 minutes.

SS-JIA pushed a commit that referenced this pull request Sep 16, 2025 (Pull Request resolved: #14330, ghstack-source-id: 309992179)
meta-cla bot added the CLA Signed label on Sep 16, 2025
facebook-github-bot (Contributor) commented:

@SS-JIA has exported this pull request. If you are a Meta employee, you can view the originating diff in D82542335.


This PR needs a release notes: label

If your change should be included in the release notes (i.e. would users of this library care about this change?), please use a label starting with release notes:. This helps us keep track and include your important work in the next release notes.

To add a label, you can comment to pytorchbot, for example
@pytorchbot label "release notes: none"

For more information, see
https://github.com/pytorch/pytorch/wiki/PyTorch-AutoLabel-Bot#why-categorize-for-release-notes-and-how-does-it-work.

SS-JIA pushed a commit that referenced this pull request Sep 16, 2025 (Pull Request resolved: #14330, ghstack-source-id: 310013406)

SS-JIA pushed a commit that referenced this pull request Sep 17, 2025 (Pull Request resolved: #14330, ghstack-source-id: 310277511)

SS-JIA pushed a commit that referenced this pull request Sep 17, 2025 (Pull Request resolved: #14330, ghstack-source-id: 310286204)

SS-JIA pushed a commit that referenced this pull request Sep 25, 2025 (Pull Request resolved: #14330, ghstack-source-id: 312106550)

facebook-github-bot merged commit 55b45d8 into gh/SS-JIA/330/base on Sep 25, 2025

123 of 132 checks passed

facebook-github-bot deleted the gh/SS-JIA/330/head branch on September 25, 2025 at 20:04
SS-JIA pushed a commit that referenced this pull request Sep 25, 2025 (Pull Request resolved: #14330, ghstack-source-id: 312106550)
SS-JIA pushed a commit that referenced this pull request Sep 25, 2025

This PR was created by the merge bot to help merge the original PR into the main branch.
ghstack PR number: #14330 by @SS-JIA
^ Please use this as the source of truth for the PR details, comments, and reviews
ghstack PR base: https://github.com/pytorch/executorch/tree/gh/SS-JIA/330/base
ghstack PR head: https://github.com/pytorch/executorch/tree/gh/SS-JIA/330/head
Merge bot PR base: https://github.com/pytorch/executorch/tree/gh/SS-JIA/331/orig
Merge bot PR head: https://github.com/pytorch/executorch/tree/gh/SS-JIA/330/orig
Differential Revision: [D82542335](https://our.internmc.facebook.com/intern/diff/D82542335/)
@diff-train-skip-merge

Co-authored-by: ssjia <ssjia@devvm1479.ncg0.facebook.com>
Labels: CLA Signed, fb-exported, meta-exported