Fixed MHA documentation #883
Conversation
@zhangguanheng66 You may want to check this out. Thanks.
```diff
@@ -110,7 +110,7 @@ def __init__(self, dropout=0.0, batch_first=False):
             as `(batch, seq, feature)`. Default: ``False``

         Examples::
-            >>> SDP = torchtext.nn.ScaledDotProduct(dropout=0.1)
+            >>> SDP = torchtext.nn.ScaledDotProduct(dropout=0.1, batch_first=True)
```
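For context, a minimal runnable sketch of the batch-first usage shown in the diff. The tensor shapes are an assumption based on the legacy `torchtext.nn.ScaledDotProduct`, which expects inputs as `(batch, seq, feature)` when `batch_first=True`:

```python
import torch
import torchtext

# With batch_first=True, inputs are expected as (batch, seq, feature).
SDP = torchtext.nn.ScaledDotProduct(dropout=0.1, batch_first=True)

# Illustrative sizes (assumed): batch=256, seq_len=21, head_dim=3.
q = torch.randn(256, 21, 3)
k = v = torch.randn(256, 21, 3)

attn_output, attn_weights = SDP(q, k, v)
print(attn_output.shape)  # expected: torch.Size([256, 21, 3])
```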
In the example here, let's show the case without batch first.
@zhangguanheng66 Fixed. Please review.
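For reference, a sketch of what the fixed example presumably looks like without `batch_first`, again assuming the legacy API, whose default `batch_first=False` expects inputs as `(seq, batch, feature)`:

```python
import torch
import torchtext

# Default batch_first=False: inputs are expected as (seq, batch, feature).
SDP = torchtext.nn.ScaledDotProduct(dropout=0.1)

# Illustrative sizes (assumed): seq_len=21, batch=256, head_dim=3.
q = torch.randn(21, 256, 3)
k = v = torch.randn(21, 256, 3)

attn_output, attn_weights = SDP(q, k, v)
print(attn_output.shape)  # expected: torch.Size([21, 256, 3])
```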
Codecov Report

```
@@           Coverage Diff           @@
##           master     #883   +/-   ##
=======================================
  Coverage   77.44%   77.44%
=======================================
  Files          44       44
  Lines        3055     3055
=======================================
  Hits         2366     2366
  Misses        689      689
```

Continue to review full report at Codecov.
Thanks for the contribution.