In position_encoding_init, shouldn't it be
[pos / np.power(10000, (i//2)*2 / d_pos_vec) for i in range(d_pos_vec)]
instead of
[pos / np.power(10000, 2*i/d_pos_vec) for i in range(d_pos_vec)]
In the original formulation, for dimension indices 2i and 2i+1, the power should be 2i / d_model, i.e. each even/odd pair of dimensions shares the same frequency.
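A minimal runnable sketch of what the corrected table builder could look like, using the names from the snippet above (position_encoding_init, d_pos_vec). The zero row for position 0, commonly reserved for padding, is an assumption, not something stated in this thread:

```python
import numpy as np

def position_encoding_init(n_position, d_pos_vec):
    # Angle table: dimension i uses 10000^((i//2)*2 / d_pos_vec), so the
    # pair (2i, 2i+1) shares one frequency, as in 10000^(2i/d_model).
    position_enc = np.array([
        [pos / np.power(10000, (i // 2) * 2 / d_pos_vec) for i in range(d_pos_vec)]
        if pos != 0 else np.zeros(d_pos_vec)  # assumed: position 0 kept as zeros for padding
        for pos in range(n_position)])
    position_enc[1:, 0::2] = np.sin(position_enc[1:, 0::2])  # even dims: sin
    position_enc[1:, 1::2] = np.cos(position_enc[1:, 1::2])  # odd dims: cos
    return position_enc
```

With the buggy exponent 2*i/d_pos_vec, dimensions 2i and 2i+1 would get different frequencies, so each sin would no longer be paired with a cos of the same argument.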
fix position encoding in #32 (commit 7fa8c63)
Hi @jasonleeinf,
Thank you so much for pointing out this mistake! This part has been fixed in 7fa8c63. Please take a look. Thanks!
Yu-Hsiang
Fix bug for sinusoidal encoding (commit 13772d8)
jadore801120/attention-is-all-you-need-pytorch#32 (commit a2eb566)