
ggml webgpu: fix workgroup dispatch limit for large batch sizes #19965

Merged

reeselevine merged 5 commits into ggml-org:master from
abhijitramesh:abhijit/webgpu-matmul-workgroup-limit on Mar 3, 2026

Conversation

@abhijitramesh (Contributor)

WebGPU limits workgroup counts to 65535 per dimension. MUL_MAT operations with batch sizes exceeding this limit would fail or corrupt memory.

This PR implements 2D workgroup dispatch to handle arbitrary batch sizes:

  • Adds compute_2d_workgroups() helper to split workgroups across X/Y dimensions when exceeding the 65535 limit (sketched after this list)
  • Updates mul_mat shaders to reconstruct linear workgroup ID from 2D dispatch coordinates
  • Adds bounds checking in shaders to handle over-dispatched workgroups (inevitable with rectangular 2D grids)
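
For illustration, here is a minimal sketch of what such a helper can look like. The name compute_2d_workgroups() comes from this PR, but the return type, signature, and ceil-division details below are assumptions, not the merged code:

```cpp
#include <cstdint>

// Hypothetical return type for this sketch.
struct wg_dims { uint32_t x; uint32_t y; };

// Sketch only: split `total` workgroups into a 2D grid where each
// dimension stays within WebGPU's 65535 per-dimension dispatch limit.
static wg_dims compute_2d_workgroups(uint32_t total) {
    const uint32_t limit = 65535;
    if (total <= limit) {
        return { total, 1 };  // fits in one dimension, no over-dispatch
    }
    uint32_t y = (total + limit - 1) / limit;  // rows needed, rounded up
    uint32_t x = (total + y - 1) / y;          // smallest width covering total
    return { x, y };  // note: x * y may exceed total (over-dispatch)
}
```

Because x * y can exceed total, the shaders must discard the excess workgroups; that is what the bounds checks described below are for.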

* ggml-webgpu: fix workgroup dispatch limit for large batch sizes

WebGPU limits workgroup counts to 65535 per dispatch dimension. Large
MUL_MAT operations with batch sizes exceeding this limit would fail.

* add compute_2d_workgroups() helper to split the total workgroup count
  across X/Y dimensions

* update mul_mat_reg_tile.wgsl to reconstruct the linear workgroup ID
  from the 2D dispatch (index math sketched below)

* update mul_mat_subgroup_matrix.wgsl to reconstruct linear workgroup ID
  from 2D dispatch

* update mul_mat.wgsl to compute global index from 2D workgroup
  coordinates

* refactor all three mul_mat dispatch paths to use the shared helper
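
On the shader side, each workgroup rebuilds the linear index that a 1D dispatch would have given it. The index math, written here in C++ rather than WGSL (in the shaders this would use the workgroup_id builtin, and num_wg_x, the dispatch width, would have to be made available to the shader; both names are illustrative):

```cpp
#include <cstdint>

// Row-major flattening of the 2D dispatch grid back into the linear
// workgroup ID the original 1D dispatch would have produced.
uint32_t linear_workgroup_id(uint32_t wg_x, uint32_t wg_y, uint32_t num_wg_x) {
    return wg_y * num_wg_x + wg_x;
}
```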

* ggml-webgpu: add bounds checking for over-dispatched workgroups

2D workgroup dispatch can over-dispatch when the total workgroup count
does not divide evenly into the 65535 per-dimension limit. The extra
workgroups would compute invalid batch indices, causing memory corruption.

* add a batch_idx bounds check to mul_mat_reg_tile.wgsl and
  mul_mat_subgroup_matrix.wgsl to prevent over-dispatched workgroups
  from accessing invalid memory (guard sketched after this list)

* fixes test failures with large batch sizes (e.g., bs=[128, 1024])
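
A sketch of that guard, again in C++ standing in for the WGSL early-return; total_batches is a placeholder for whatever bound the shaders actually compare batch_idx against:

```cpp
#include <cstdint>

// Workgroups past the last valid batch index must exit before touching
// memory; they are the excess cells of the rectangular 2D grid.
void mul_mat_workgroup(uint32_t wg_x, uint32_t wg_y,
                       uint32_t num_wg_x, uint32_t total_batches) {
    uint32_t batch_idx = wg_y * num_wg_x + wg_x;
    if (batch_idx >= total_batches) {
        return;  // over-dispatched workgroup: do nothing
    }
    // ... normal MUL_MAT work for batch_idx ...
}
```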
github-actions bot added the ggml label (changes relating to the ggml tensor library for machine learning) on Feb 28, 2026
@reeselevine (Collaborator) left a comment:
Glad this turned out to be a simple fix! We should soon implement a way to avoid launching too many workgroups in these cases too, since Safari does seem to limit the allowed number more aggressively than Chrome.

@reeselevine (Collaborator) left a comment:

The failing WebGPU CI is unrelated to this change; it's an issue introduced in #19772, which @nikhilJain17 is working on fixing separately.

@reeselevine reeselevine merged commit 49a7564 into ggml-org:master Mar 3, 2026
74 of 78 checks passed
ArberSephirotheca pushed a commit to ArberSephirotheca/llama.cpp that referenced this pull request Mar 3, 2026
ggml webgpu: fix workgroup dispatch limit for large batch sizes (ggml-org#19965)

* ggml-webgpu: fix workgroup dispatch limit for large batch sizes

* ggml-webgpu: add bounds checking for over-dispatched workgroups

* ggml-webgpu: add back TODO for splitting large sizes into batches

* Optimize 2D workgroup provisioning

* Set some parameters that increase speed

---------

Co-authored-by: Reese Levine <reeselevine1@gmail.com>