
Tongliu/router fusion #1883


Open · wants to merge 6 commits into main

Conversation

Autumn1998 (Contributor)

Description

Provides the functions used in router fusion, along with the corresponding unit tests:

  1. Fuse the top-k + softmax/sigmoid.
  2. Fuse the score function used in the aux loss.
  3. Fuse the aux loss computation.

All three parts include both the forward and backward passes.
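For context, a minimal unfused PyTorch reference for part 1 might look like the sketch below. It is illustrative only: the function name is hypothetical, and the fused kernels in this PR replace this kind of multi-kernel pattern with a single kernel.

import torch

def topk_softmax_reference(logits: torch.Tensor, topk: int):
    # Unfused top-k + softmax routing over [num_tokens, num_experts] logits.
    # Softmax over the expert dimension, then select the top-k probabilities.
    scores = torch.softmax(logits, dim=-1)
    probs, indices = torch.topk(scores, k=topk, dim=-1)
    return probs, indices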

Fixes # (issue)

Type of change

  • Documentation change (change only to the documentation, either a fix or new content)
  • Bug fix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds functionality)
  • Breaking change (fix or feature that would cause existing functionality to not work as expected)
  • Infra/Build change
  • Code refactoring

Changes

Please list the changes introduced in this PR:

  • Change A
  • Change B

Checklist:

  • I have read and followed the contributing guidelines
  • The functionality is complete
  • I have commented my code, particularly in hard-to-understand areas
  • I have made corresponding changes to the documentation
  • My changes generate no new warnings
  • I have added tests that prove my fix is effective or that my feature works
  • New and existing unit tests pass locally with my changes

Signed-off-by: tongliu <tongliu@nvidia.com>
@Autumn1998 Autumn1998 force-pushed the tongliu/router_fusion branch from 60d0142 to b3c3633 Compare June 16, 2025 09:12
pre-commit-ci bot and others added 3 commits June 16, 2025 09:13
Signed-off-by: tongliu <tongliu@nvidia.com>
Signed-off-by: tongliu <tongliu@nvidia.com>
@Autumn1998 Autumn1998 force-pushed the tongliu/router_fusion branch from 4377f92 to 752c351 Compare June 16, 2025 12:20
expert_bias=expert_bias_clone,
)

assert torch.allclose(probs, probs_fused, atol=atol, rtol=rtol)
Collaborator

It would be nicer to replace torch.allclose with torch.testing.assert_close. It has a helpful error message, and it automatically chooses the tolerances based on the dtype.

Suggested change
assert torch.allclose(probs, probs_fused, atol=atol, rtol=rtol)
torch.testing.assert_close(probs, probs_fused)
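As a quick illustration of the difference (the tensors here are made up for demonstration):

import torch

probs = torch.tensor([0.25, 0.25, 0.5])
probs_fused = probs + 1e-6  # tiny numerical difference

# torch.allclose returns only a bool; a failing assert reports no detail.
assert torch.allclose(probs, probs_fused, atol=1e-5, rtol=1e-5)

# torch.testing.assert_close infers atol/rtol from the dtype and, on
# failure, reports the mismatch count and the largest absolute and
# relative differences, which makes debugging much easier.
torch.testing.assert_close(probs, probs_fused)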

Comment on lines +222 to +223
@pytest.mark.parametrize("dtype", [torch.float32])
@pytest.mark.parametrize("num_tokens", [2048, 7168, 32111])
@pytest.mark.parametrize("num_experts", [128, 32])
@pytest.mark.parametrize("topk", [4, 8])
@pytest.mark.parametrize("group_topk", [None, 4])
@pytest.mark.parametrize("scaling_factor", [None, 1.2])
@pytest.mark.parametrize("enable_bias", [True, False])
def test_topk_sigmoid(
Collaborator

How long does this test suite take to run? The number of test cases grows very quickly if you have one test with many parameters (O(2^n) cases), so it may be better to split it up into multiple tests with only a few parameters (O(2*n) cases).
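For reference, the grid above is a full Cartesian product: 1 dtype × 3 token counts × 2 expert counts × 2 topk values × 2 group_topk values × 2 scaling factors × 2 bias settings = 96 cases. A rough sketch of the suggested split (the test names and the idea of pinning the other axes at defaults are illustrative):

import pytest

# Axis 1: problem sizes, with routing options held at representative defaults.
@pytest.mark.parametrize("num_tokens", [2048, 7168, 32111])
def test_topk_sigmoid_shapes(num_tokens):
    ...

# Axis 2: routing options, at one representative problem size.
@pytest.mark.parametrize("group_topk", [None, 4])
@pytest.mark.parametrize("enable_bias", [True, False])
def test_topk_sigmoid_options(group_topk, enable_bias):
    ...

This covers each axis with 3 + 4 = 7 cases instead of 96, at the cost of not exercising every combination.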

* \param[in] intermediate_output Intermediate output from the forward pass. (Softmax/sigmoid output)
* \param[in] stream CUDA stream used for the operation.
*/
void nvte_fused_scores_for_aux_loss_forward(const NVTETensor logits, int num_tokens,
Collaborator

What is the naming convention for this loss function in other MoE implementations? It feels like aux_loss is too general and it might get confusing if some other multi-objective training method becomes popular in the future. Maybe something like moe_aux_loss would be more specific.
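For context, the loss being fused here is presumably the standard MoE load-balancing auxiliary loss in the style of Shazeer et al. and the Switch Transformer; a minimal unfused sketch, with hypothetical names, assuming that formulation:

import torch

def moe_load_balancing_aux_loss(scores, topk_indices, topk):
    # scores: [num_tokens, num_experts] softmax/sigmoid router scores.
    # topk_indices: [num_tokens, topk] experts selected per token (int64).
    num_tokens, num_experts = scores.shape
    # f_i: fraction of token-expert assignments routed to expert i.
    one_hot = torch.zeros_like(scores).scatter_(1, topk_indices, 1.0)
    f = one_hot.sum(dim=0) / (num_tokens * topk)
    # P_i: mean router score given to expert i.
    p = scores.mean(dim=0)
    # Minimized when assignments are balanced across experts.
    return num_experts * torch.sum(f * p)

A name like moe_aux_loss, as suggested, would make the MoE scope explicit at the C API level.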

@timmoon10 timmoon10 self-requested a review June 19, 2025 01:37
@timmoon10 (Collaborator)

/te-ci pytorch

Signed-off-by: tongliu <tongliu@nvidia.com>
@Autumn1998 Autumn1998 force-pushed the tongliu/router_fusion branch from d3197e0 to baed11c Compare June 19, 2025 07:06