
Question about DIN #4

Closed
yanduoduan opened this issue Apr 6, 2022 · 0 comments

Comments

@yanduoduan

yanduoduan commented Apr 6, 2022

Hi, I've been reading your DIN code recently and there are a few places I don't quite understand. I hope you can clarify!

1. mask = (behaviors_x > 0).float().unsqueeze(-1) — what exactly does this mask do here, and why is it needed?
2. In the attention input below, the original paper doesn't seem to include the queries - user_behavior term — why is it here?
   attn_input = torch.cat([queries, user_behavior,
                           queries - user_behavior,
                           queries * user_behavior], dim = -1)
3. output = user_behavior.mul(attns.mul(mask)) # batch * seq_len * embed_dim
   Why is the mask applied again in this line?
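For context, here is a minimal runnable sketch of what the three quoted lines appear to do, assuming behaviors_x holds item IDs with 0 as the padding value and that attns comes from a local activation unit (replaced below by a placeholder sigmoid; the shapes and variable names beyond the quoted lines are illustrative, not from the repository):

```python
import torch

batch, seq_len, embed_dim = 2, 4, 8
# behavior ID sequences, right-padded with 0
behaviors_x = torch.tensor([[3, 7, 0, 0],
                            [5, 2, 9, 0]])
user_behavior = torch.randn(batch, seq_len, embed_dim)              # behavior embeddings
queries = torch.randn(batch, 1, embed_dim).expand(-1, seq_len, -1)  # candidate-item embedding, tiled

# (1) mask marks real behaviors (ID > 0) vs. padding positions
mask = (behaviors_x > 0).float().unsqueeze(-1)                      # batch * seq_len * 1

# (2) concatenated attention input; queries - user_behavior and
# queries * user_behavior are extra query/behavior interaction features
attn_input = torch.cat([queries, user_behavior,
                        queries - user_behavior,
                        queries * user_behavior], dim=-1)           # batch * seq_len * 4*embed_dim

# placeholder for the activation-unit MLP that produces per-behavior weights
attns = torch.sigmoid(attn_input.sum(dim=-1, keepdim=True))         # batch * seq_len * 1

# (3) multiplying by mask again zeroes the weighted embeddings at
# padding positions, so they contribute nothing when pooled later
output = user_behavior.mul(attns.mul(mask))                         # batch * seq_len * embed_dim
```

With this padding assumption, the two padded positions of the first sequence end up as all-zero rows in output, which is presumably why the mask reappears in the output line.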
