Fused quant bmm kernel (#19489)
Conversation
🔗 Helpful Links
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/19489
Note: Links to docs will display an error until the docs builds have been completed.
❗ 1 Active SEV — There is 1 currently active SEV. If your PR is affected, please view it below.
❌ 4 New Failures, 3 Unrelated Failures as of commit 8392933 with merge base 8020fe0:
NEW FAILURES — The following jobs have failed.
FLAKY — The following jobs failed but were likely due to flakiness present on trunk.
BROKEN TRUNK — The following job failed but was present on the merge base. 👉 Rebase onto the `viable/strict` branch to avoid these failures.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
@DrJessop has exported this pull request. If you are a Meta employee, you can view the originating Diff in D103754815.
Summary: Fused quant batch matrix multiply kernel with optional dequantize/quantize. Binary op on 3D tensors [B,M,K] x [B,K,N] -> [B,M,N]. Supports per-tensor and per-channel quantization. Reviewed By: mvartani-meta Differential Revision: D103754815
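To make the summary concrete, here is a minimal NumPy sketch of the reference semantics the kernel fuses: dequantize the int8 inputs, batch-matmul [B,M,K] x [B,K,N] -> [B,M,N], then requantize the output. Per-tensor quantization is shown for brevity; the per-channel variant would replace the scalar scales/zero points with per-channel vectors. All names and signatures here are illustrative, not the kernel's actual API.

```python
import numpy as np

def quant_bmm(a_q, b_q, a_scale, a_zp, b_scale, b_zp, out_scale, out_zp):
    """Reference semantics for a fused quantized batch matmul.

    a_q: int8 tensor of shape [B, M, K]; b_q: int8 tensor of shape [B, K, N].
    Scalar scales/zero points model per-tensor quantization.
    """
    # Dequantize both operands to float32.
    a = (a_q.astype(np.float32) - a_zp) * a_scale
    b = (b_q.astype(np.float32) - b_zp) * b_scale
    # Batched matmul: [B, M, K] x [B, K, N] -> [B, M, N].
    out = np.matmul(a, b)
    # Requantize the result back to int8.
    q = np.round(out / out_scale) + out_zp
    return np.clip(q, -128, 127).astype(np.int8)
```

A fused kernel computes the same result without materializing the intermediate float tensors, which is the point of the fusion.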
Force-pushed from 762a9cb to e5f8f21.
Summary: Fused quant hardswish kernel with optional dequantize/quantize. Unary op that applies x * min(max(x+3, 0), 6) / 6. Supports per-tensor and per-channel quantization. Reviewed By: mvartani-meta Differential Revision: D103754780
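The hardswish commit in the same stack applies x * min(max(x+3, 0), 6) / 6 between a dequantize and a quantize. A minimal NumPy sketch of that reference behavior, again with illustrative names and per-tensor quantization:

```python
import numpy as np

def quant_hardswish(x_q, in_scale, in_zp, out_scale, out_zp):
    """Reference semantics for a fused quantized hardswish (per-tensor)."""
    # Dequantize the int8 input.
    x = (x_q.astype(np.float32) - in_zp) * in_scale
    # Hardswish: x * min(max(x + 3, 0), 6) / 6.
    y = x * np.minimum(np.maximum(x + 3.0, 0.0), 6.0) / 6.0
    # Requantize back to int8.
    q = np.round(y / out_scale) + out_zp
    return np.clip(q, -128, 127).astype(np.int8)
```

With identity quantization parameters (scale 1, zero point 0), inputs -3, 0, 3, 6 map to 0, 0, 3, 6, matching the float hardswish.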
Force-pushed from e5f8f21 to 8392933.