
Question about position embedder in model #9

Open · fuyansheng opened this issue Nov 10, 2023 · 1 comment

@fuyansheng

Hey, really appreciate your nice work!
I noticed in `trme.py` that for the global CLS position embedding, the code uses `self.type_embeds = nn.Embedding(100, self.dim)` (line 33). However, the later usage (line 155), `pos = self.type_embeds(torch.arange(0, 3, device=device))`, only looks up three positions. So why does `type_embeds` use `nn.Embedding(100, self.dim)` instead of `nn.Embedding(3, self.dim)`? Does this make a difference?
Looking forward to your reply! Thanks a lot!
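
For context, here is a minimal standalone sketch (the value of `dim` is a placeholder; the surrounding model code from `trme.py` is not reproduced) showing that only rows 0–2 of the over-sized table are ever looked up, and that a 3-row table queried the same way yields an identically shaped result:

```python
import torch
import torch.nn as nn

dim = 256  # hypothetical embedding width; the real value comes from the model config

# As on line 33: a 100-row table, of which only 3 rows are ever indexed.
type_embeds = nn.Embedding(100, dim)

# As on line 155: indices 0, 1, 2 select the three type embeddings.
pos = type_embeds(torch.arange(0, 3))
print(pos.shape)  # torch.Size([3, 256])

# A 3-row table queried the same way produces the same shape; the extra
# 97 rows above receive zero gradient from the lookup, so they only cost
# unused parameters rather than changing the forward computation.
small_type_embeds = nn.Embedding(3, dim)
pos_small = small_type_embeds(torch.arange(0, 3))
print(pos_small.shape)  # torch.Size([3, 256])
```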

@leisure-7

The original paper says: “Similarly, three type embeddings are assigned to the special [GCLS] token embedding, the intermediate source entity embedding, and the other intermediate neighbor entity embeddings.” So I think using only three entries is correct.
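
To illustrate the quoted assignment, here is a hypothetical sketch (the sequence layout, tensor names, and sizes are assumptions for illustration, not the repository's code) of how three type embeddings could be added to a [GCLS] + source entity + neighbors sequence:

```python
import torch
import torch.nn as nn

dim, n_neighbors = 256, 5  # hypothetical sizes
type_embeds = nn.Embedding(3, dim)

# Stand-in token embeddings, laid out as: [GCLS], source entity, neighbors.
seq = torch.randn(1 + 1 + n_neighbors, dim)

# Type index per position: 0 -> [GCLS], 1 -> source entity, 2 -> each neighbor.
type_ids = torch.tensor([0, 1] + [2] * n_neighbors)

# Add the matching type embedding to every token embedding.
seq = seq + type_embeds(type_ids)
print(seq.shape)  # torch.Size([7, 256])
```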
