Conversation

@kwen2501
Contributor

We have two options for Group GEMM now:

Option 1: torch._grouped_mm

Option 2: grouped_gemm_forward (torchao)
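
For reference, here is a minimal sketch of how such a group_mm flag could dispatch between the two backends. The torchao import path and both call signatures below are assumptions for illustration, not taken from this PR.

import torch

def group_gemm(x, w, offsets, group_mm="torch"):
    # Select the grouped GEMM backend based on the config flag.
    if group_mm == "torch":
        # Option 1: PyTorch's grouped GEMM op (private API; signature assumed).
        return torch._grouped_mm(x, w, offs=offsets)
    # Option 2: torchao's kernel; the import path here is hypothetical.
    from torchao.prototype.moe_training import grouped_gemm_forward  # hypothetical path
    return grouped_gemm_forward(x, w, offsets)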

@facebook-github-bot added the CLA Signed label (managed by the Meta Open Source bot) on Apr 10, 2025
@kwen2501 requested a review from lessw2020 on April 10, 2025 20:50
# 2. "symm_mem" (see `setup_symm_mem` below)
shuffle_method = "torch_all_to_all"
# Group GEMM method, "torch" or "torchao"
group_mm = "torch"

tiny nit but I like just having selectable options next to the item, like how we do it in the toml:

group_mm = "torch"  # ["torch", "torchao"]

@lessw2020 (Contributor) left a comment


cool, thanks for integrating this as another grouped gemm option!
note - the adamw aspect for torch._grouped_mm hasn't been resolved yet, but hopefully it will be before we get to the training loop.

@kwen2501 merged commit 5707c3d into main on Apr 10, 2025
6 checks passed
tianyu-l added a commit that referenced this pull request on Apr 12, 2025