Small change to Wav2Vec2 model to support Tensor-Parallelism with DeepSpeed #14298
Changes from 3 commits
```diff
@@ -171,7 +171,10 @@ def forward(
         # if key_value_states are provided this layer is used as a cross-attention layer
         # for the decoder
         is_cross_attention = key_value_states is not None
-        bsz, tgt_len, embed_dim = hidden_states.size()
+
+        # The hidden_state's last dimension cannot be used as embed_dim when
+        # tensor-parallelism is enabled; use the class's embed_dim parameter instead.
+        bsz, tgt_len, _ = hidden_states.size()

         # get query proj
         query_states = self.q_proj(hidden_states) * self.scaling
```
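For intuition, here is a minimal single-rank sketch of why the input's last dimension stops matching the module's `embed_dim` once the projections are sharded. The shapes (`full_embed_dim=768`, `world_size=4`) and the column-parallel sharding of `q_proj` are illustrative assumptions, not the DeepSpeed API:

```python
import torch
import torch.nn as nn

full_embed_dim, world_size = 768, 4
per_rank_dim = full_embed_dim // world_size          # 192: the width this rank's shard produces

hidden_states = torch.randn(2, 50, full_embed_dim)   # the input keeps the full model width
q_proj = nn.Linear(full_embed_dim, per_rank_dim)     # hypothetical column-parallel shard of q_proj

bsz, tgt_len, _ = hidden_states.size()               # safe: the input's last dim is not the shard width
query_states = q_proj(hidden_states)
assert query_states.size(-1) == per_rank_dim         # (2, 50, 192), not (2, 50, 768)
```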
```diff
@@ -257,7 +260,13 @@ def forward(
         attn_output = attn_output.view(bsz, self.num_heads, tgt_len, self.head_dim)
         attn_output = attn_output.transpose(1, 2)
-        attn_output = attn_output.reshape(bsz, tgt_len, embed_dim)
+
+        # Use the embed_dim from the class rather than hidden_state because attn_output can be
+        # partitioned across GPUs when using tensor-parallelism, in which case the input's embed
+        # dimension is not equal to the attention's last dimension after merging heads.
```
Review comment: We have a 119 char limit so you can use more horizontal space :-) Also, I suggest the following change, more to the point:

Reply: Sure, I will reformat this :)
```diff
+        attn_output = attn_output.reshape(bsz, tgt_len, self.embed_dim)

         attn_output = self.out_proj(attn_output)
```
Review comment: I don't think the first comment (on `hidden_states.size()`) is useful when reading the new code; it creates more confusion than help. Only the next one is really important.
Reply: I agree, I can remove this.
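To see why the reshape must use `self.embed_dim`, here is a self-contained sketch of the merged-heads tail under the same hypothetical sharding (illustrative head counts and widths; the actual partitioning and attribute patching are done by DeepSpeed, which this sketch does not use):

```python
import torch
import torch.nn as nn

bsz, tgt_len = 2, 50
full_embed_dim, world_size = 768, 4
num_heads, head_dim = 12, 64                  # full model: 12 heads * 64 = 768
local_heads = num_heads // world_size         # 3 of the 12 heads live on this rank
per_rank_dim = local_heads * head_dim         # 192: what self.embed_dim would hold on this rank

attn_output = torch.randn(bsz, local_heads, tgt_len, head_dim)
attn_output = attn_output.transpose(1, 2)

# Merging heads must use the module's (possibly sharded) width, not the input's full width:
attn_output = attn_output.reshape(bsz, tgt_len, per_rank_dim)
# attn_output.reshape(bsz, tgt_len, full_embed_dim) would raise here, since 3 * 64 != 768

out_proj = nn.Linear(per_rank_dim, full_embed_dim)   # hypothetical row-parallel shard; under TP the
assert out_proj(attn_output).size(-1) == full_embed_dim  # ranks' partial outputs would be summed
```

On a single GPU, `self.embed_dim` equals the input's last dimension, so the patched code behaves identically there; the change only matters once the attention module is sharded.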