[autoparallel] added bias comm spec to matmul strategy #1664

Merged

Conversation

FrankLeeeee
Contributor

Problem

The previous PR #1616 covered sharding strategy generation for all matmul cases; however, it failed to consider the communication action for the bias term.

Solution

The bias term can only have one communication action, namely IDENTITY_FWD_ALLREDUCE_BWD, to make sure its gradient stays synchronized. The mesh dimensions for the all-reduce depend on the sharding spec of the output. For example:

  1. if the output is two-dimensional and has sharding spec RS, then the bias also has sharding spec S and thus needs no all-reduce operation.
  2. if the output is SS, then the bias requires an all-reduce in the backward pass to synchronize its gradient.

This PR implements these strategies for the bias term.
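
As a rough illustration of the rule above (this is a minimal sketch, not the actual ColossalAI API; the helper name `bias_allreduce_mesh_dims` and the dict-based sharding spec are made up for this example), the mesh dims that need a backward all-reduce for the bias are exactly those sharding the non-last dims of the output:

```python
from typing import Dict, List


def bias_allreduce_mesh_dims(output_dim_partition: Dict[int, List[int]],
                             output_ndim: int) -> List[int]:
    """Return the device-mesh dims over which the bias gradient must be
    all-reduced in the backward pass (IDENTITY_FWD_ALLREDUCE_BWD).

    `output_dim_partition` maps a tensor dim of the matmul output to the
    list of mesh dims sharding it, e.g. {0: [0], 1: [1]} for an "SS"
    output on a 2D device mesh.
    """
    last_dim = output_ndim - 1
    mesh_dims = []
    for tensor_dim, sharded_mesh_dims in output_dim_partition.items():
        # The 1D bias is aligned with the last output dim, so it is sharded
        # the same way there and needs no reduction for that dim. Any mesh
        # dim sharding another output dim splits the rows that the bias
        # gradient sums over, leaving only a partial sum on each device.
        if tensor_dim != last_dim:
            mesh_dims.extend(sharded_mesh_dims)
    return sorted(mesh_dims)


# Output sharding "RS": bias already matches "S", so no all-reduce.
assert bias_allreduce_mesh_dims({1: [1]}, output_ndim=2) == []
# Output sharding "SS": bias gradient needs an all-reduce over the mesh
# dim that shards the rows (mesh dim 0 here).
assert bias_allreduce_mesh_dims({0: [0], 1: [1]}, output_ndim=2) == [0]
```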

Note

The bias term in ops like torch.addmm may not be a 1D tensor; I will make the strategy support broadcasting in a separate PR.

@YuliangLiu0306 YuliangLiu0306 merged commit 247a9db into hpcaitech:main Sep 29, 2022
@FrankLeeeee FrankLeeeee deleted the hotfix/linear-strategy-for-bias branch January 26, 2023 07:45