Hi~ During data preprocessing, if a sentence is too short it gets padded with 0s at the end. When the padded positions are converted into word embeddings, do we simply look up the embedding at index 0? If so, wouldn't that cause the model to learn incorrect features?
Yes: pad with 0 and look up the embedding for index 0. In general this is fine. If you are worried about it, you can use mask padding: record the indices of the 0s and mask them out directly, so the 0 positions are never computed. This does add some computational cost, though.
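The "record the indices of the 0s" idea above can be sketched as follows. This is a minimal illustration in NumPy, not code from this repository; `embed_with_mask`, the pad id of 0, and the array shapes are all my assumptions.

```python
import numpy as np

def embed_with_mask(token_ids, embedding, pad_id=0):
    """Look up word embeddings and build a padding mask.

    token_ids: (batch, seq_len) int array; pad_id marks padding positions.
    embedding: (vocab_size, dim) float array of embedding vectors.

    Returns the (batch, seq_len, dim) embeddings with padding vectors
    zeroed out, plus a (batch, seq_len) bool mask that is True for real
    tokens and False for padding.
    """
    mask = token_ids != pad_id          # record where the 0s are
    emb = embedding[token_ids]          # plain lookup, including row 0
    emb = emb * mask[..., None]         # zero the padded positions
    return emb, mask
```

The mask is kept alongside the embeddings so later layers (e.g. pooling) can ignore the padded positions instead of letting the index-0 vector leak into the result.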
Then I have two more questions~
(1) When masking, the convolution is usually computed as normal; the masking is applied at the pooling step, based on the recorded indices. Also, most masking is not done with a loop; it is folded into vectorized operations instead. (2) Masking doesn't affect the position embedding. Even with 0 padding it doesn't matter, since it never changes the positions of the other words.
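Point (1) above — convolve normally, mask only at pooling, with no Python loop — can be sketched like this. Assumptions of mine: NumPy instead of a deep-learning framework, the name `masked_max_pool`, and features laid out as (batch, seq_len, channels); a real conv output would also have a slightly shorter length than the input for kernel size > 1.

```python
import numpy as np

def masked_max_pool(features, mask):
    """Max-pool over the time axis while ignoring padded positions.

    features: (batch, seq_len, channels) convolution outputs.
    mask: (batch, seq_len) bool array, True for real tokens.

    Padded positions are overwritten with the most negative float
    before the max, so they can never be selected. Everything is a
    vectorized array operation; there is no loop over positions.
    """
    neg_inf = np.finfo(features.dtype).min
    masked = np.where(mask[..., None], features, neg_inf)
    return masked.max(axis=1)   # (batch, channels)
```

Because the mask only overwrites values that feed the max, the convolution itself needs no changes, which is exactly the split described above.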
Oh, I see! Got it now. I'd puzzled over this for a while without realizing it could be implemented that way. Thanks! 😀