
[ET-VK][matmul] Re-implement fp32/fp16 matmul and linear with tiled compute and blocked weight packing #18174

Closed
SS-JIA wants to merge 1 commit into gh/SS-JIA/489/base from gh/SS-JIA/489/head

Conversation

Contributor

@SS-JIA SS-JIA commented Mar 13, 2026

Stack from ghstack (oldest at bottom):

Replace all existing matmul/linear operator implementations with new ones built
from the ground up using a tiled compute approach. Delete all legacy
implementations (MatMulLegacy.cpp, LinearLegacy.cpp, addmm_optimized.glsl,
addmm_naive_*.glsl).

New matmul (mm/bmm/addmm):

  • Single matmul.glsl shader handles mm, bmm, and addmm using FPInputTile,
    FPWeightTile, FPOutTile infrastructure from SDPA
  • Adaptive tile size selection (TILE_M=4/2/1) based on GPU occupancy
  • When mat2 is a constant tensor, automatically routes through the linear
    path for blocked weight packing
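The tiled compute approach above can be sketched in NumPy: each shader invocation accumulates a TILE_M×4 output tile over the K dimension in steps of 4. This is an illustrative model only; the names `tiled_mm` and `tile_m` are hypothetical, and the actual FPInputTile/FPWeightTile/FPOutTile shader infrastructure operates on vec4 registers rather than array slices.

```python
# Minimal NumPy sketch of a TILE_M x 4 tiled matmul (illustrative, not the
# actual matmul.glsl logic). One "invocation" owns one output tile and walks
# the K dimension 4 elements at a time, accumulating in fp32.
import numpy as np

def tiled_mm(mat1, mat2, tile_m=4):
    M, K = mat1.shape
    K2, N = mat2.shape
    assert K == K2 and K % 4 == 0 and N % 4 == 0  # sketch skips edge handling
    out = np.zeros((M, N), dtype=mat1.dtype)
    for m0 in range(0, M, tile_m):
        rows = min(tile_m, M - m0)
        for n0 in range(0, N, 4):
            acc = np.zeros((rows, 4), dtype=np.float32)   # FPOutTile analogue
            for k0 in range(0, K, 4):
                in_tile = mat1[m0:m0+rows, k0:k0+4]       # FPInputTile analogue
                w_tile = mat2[k0:k0+4, n0:n0+4]           # FPWeightTile analogue
                acc += in_tile.astype(np.float32) @ w_tile.astype(np.float32)
            out[m0:m0+rows, n0:n0+4] = acc.astype(mat1.dtype)
    return out
```

Shrinking `tile_m` from 4 to 2 or 1 trades per-invocation register pressure for more invocations, which is the occupancy trade-off the adaptive TILE_M selection targets.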

New linear:

  • Custom 4OC×4IC blocked weight prepacking via pack_fp_linear_weight.glsl
    for optimal cache line utilization during tiled matmul
  • Supports both transposed [N,K] and non-transposed [K,N] weights with
    batch dimension support
  • Separate texture2d weight storage with automatic buffer fallback for
    large dimensions
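The 4OC×4IC blocked packing can be modeled as follows, assuming a transposed [N, K] weight (N = output channels, K = input channels): the weight is rearranged so that each contiguous 16-element run holds one 4×4 (output channel × input channel) tile, letting the matmul fetch a cache-line-sized block per tile step. The function name and the exact block ordering here are hypothetical; the real layout is defined by pack_fp_linear_weight.glsl.

```python
# Illustrative NumPy sketch of 4OC x 4IC blocked weight packing (not the
# actual pack_fp_linear_weight.glsl layout). Blocks are stored row-major by
# (oc_block, ic_block), 16 contiguous values per block.
import numpy as np

def pack_4oc_4ic(weight):
    N, K = weight.shape
    # Pad both channel dims up to multiples of 4 so every block is full.
    Np, Kp = (N + 3) // 4 * 4, (K + 3) // 4 * 4
    padded = np.zeros((Np, Kp), dtype=weight.dtype)
    padded[:N, :K] = weight
    # View as (oc_block, oc_in_block, ic_block, ic_in_block), then group the
    # block indices in front so each 4x4 tile becomes contiguous.
    blocks = padded.reshape(Np // 4, 4, Kp // 4, 4).transpose(0, 2, 1, 3)
    return np.ascontiguousarray(blocks).reshape(-1)
```

Under this layout, the tile for block coordinates (oc_block, ic_block) starts at flat offset `(oc_block * (Kp // 4) + ic_block) * 16`, so a K-dimension walk for fixed output channels reads the packed buffer sequentially.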

Performance on Adreno 750 (fp16, vs legacy):

  • Linear [4096,1024]x[256,1024]: 1.33x faster (texture)
  • Linear [4096,64]x[128,64]: 2.67x faster (texture)
  • BMM [1,4096,256]x[1,256,1024]: 1.63x faster (texture)

Differential Revision: D96488384

@pytorch-bot

pytorch-bot Bot commented Mar 13, 2026

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/18174

Note: Links to docs will display an error until the docs builds have been completed.

This comment was automatically generated by Dr. CI and updates every 15 minutes.

SS-JIA pushed a commit that referenced this pull request Mar 13, 2026
ghstack-source-id: 351937790
Pull Request resolved: #18174
meta-cla Bot added the "CLA Signed" label (managed by the Facebook bot; authors must sign the CLA before a PR can be reviewed) Mar 13, 2026
@github-actions

This PR needs a release notes: label

If your change should be included in the release notes (i.e., would users of this library care about it?), please add a label starting with "release notes:". This helps us keep track of, and include, your work in the next release notes.

To add a label, you can comment to pytorchbot, for example
@pytorchbot label "release notes: none"

For more information, see
https://github.com/pytorch/pytorch/wiki/PyTorch-AutoLabel-Bot#why-categorize-for-release-notes-and-how-does-it-work.

@SS-JIA SS-JIA closed this Mar 13, 2026

Labels

CLA Signed, fb-exported, meta-exported
