Hey, really appreciate your nice work!
I noticed in trme.py that, for the global CLS position embedding, the code uses self.type_embeds = nn.Embedding(100, self.dim) (line 33). However, the later usage (line 155), pos = self.type_embeds(torch.arange(0, 3, device=device)), only indexes three positions. So why does type_embeds use nn.Embedding(100, self.dim) instead of nn.Embedding(3, self.dim)? Does this make a difference?
Looking forward to your reply! Thanks a lot!
The original paper states: "Similarly, three type embeddings are assigned to the special [GCLS] token embedding, the intermediate source entity embedding, and the other intermediate neighbor entity embeddings." So I believe using only three entries is correct.
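For what it's worth, a minimal PyTorch sketch (using a hypothetical dim of 8, standing in for self.dim) suggests the oversized table should not change behavior: since only indices 0-2 are ever looked up, the remaining 97 rows never enter the forward pass. The only cost is the extra untrained parameters sitting in the model and optimizer state.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
dim = 8  # hypothetical embedding dimension, stands in for self.dim

# Oversized table, as in trme.py line 33: only rows 0-2 are ever indexed.
big = nn.Embedding(100, dim)

# A right-sized table sharing the same first three rows.
small = nn.Embedding(3, dim)
with torch.no_grad():
    small.weight.copy_(big.weight[:3])

# Mirrors line 155: only indices 0, 1, 2 are looked up.
idx = torch.arange(0, 3)

# The lookups are identical: unused rows never affect the output.
assert torch.equal(big(idx), small(idx))

# The difference is wasted parameters: 100*dim vs 3*dim.
print(sum(p.numel() for p in big.parameters()),
      sum(p.numel() for p in small.parameters()))
```

So switching to nn.Embedding(3, self.dim) would produce the same outputs while dropping the dead rows, though note that it would break compatibility with checkpoints saved from the 100-row version.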