hello #4

gopt/src/models/gopt.py
Line 297 in 5ec31ca

hello, I met a problem. Suppose batch_size=2: the shape of x is (2, 4, 41), while the shape of self.pos_embed is (1, 55, 41), so x = x + self.pos_embed raises an error. Can you give me some suggestions? Thanks!

My test example is:

a = GOPTNoPhn(41, 4, 2)
data = torch.randn(2, 4, 84)
phn = torch.tensor([[1, 4, 2, -1], [4, 2, 3, 38]])
a(data, phn)
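For reference, the same error can be reproduced with plain tensors of the shapes above (a minimal sketch outside of GOPT):

import torch

x = torch.randn(2, 4, 41)           # [batch_size, seq_len=4, embed_dim]
pos_embed = torch.zeros(1, 55, 41)  # [1, 50 + 5 CLS tokens, embed_dim]

# Broadcasting can expand the batch dimension (1 -> 2), but the sequence
# dimensions (4 vs. 55) do not match, so this addition raises a RuntimeError.
x = x + pos_embed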
Comments

Hi there,

In this script, we hardcoded the sequence length as 50; that's why the positional embedding has shape [1, 55, 41] (50 frames + 5 CLS tokens = 55). You should either

1) change your input sequence length to 50, something like below:

# embed_dim is the width of the transformer; we suggest setting it to feat_dim (41 is a quite unusual value to me)
a = GOPT(embed_dim=41, num_heads=4, depth=2)
# [batch_size, seq_len, feat_dim]
data = torch.randn(2, 50, 84)
# [batch_size, seq_len]
phn = torch.randint(40, (2, 50))
a(data, phn)

or 2) modify the GOPT model sequence length to be your sequence length + 5 (i.e., in your example, if your sequence length is 4, then you should set the positional embedding length to 4 + 5 = 9):

Line 145 in 5ec31ca

Hope this helps.
-Yuan
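To make option 1 concrete, here is a minimal sketch that zero-pads a shorter utterance up to the hardcoded length of 50 before calling the model; padding the phone IDs with -1 follows the example above and is an assumption, not necessarily GOPT's own convention:

import torch
import torch.nn.functional as F

target_len = 50  # GOPT's hardcoded input sequence length

data = torch.randn(2, 4, 84)                        # [batch_size, seq_len=4, feat_dim]
phn = torch.tensor([[1, 4, 2, -1], [4, 2, 3, 38]])  # [batch_size, seq_len=4]

pad = target_len - data.shape[1]      # 46 frames of padding
data = F.pad(data, (0, 0, 0, pad))    # zero-pad the time axis -> [2, 50, 84]
phn = F.pad(phn, (0, pad), value=-1)  # pad phone IDs with -1 -> [2, 50]

For option 2, the change lives inside the model: the positional embedding must be created with length seq_len + 5. A sketch of what the line at Line 145 would become (variable names here are illustrative, not necessarily the repository's):

# inside GOPT.__init__, with seq_len = your input sequence length (e.g., 4)
self.pos_embed = torch.nn.Parameter(torch.zeros(1, seq_len + 5, embed_dim))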
Also, if you do have access to Google Colab, I'd suggest starting from our Colab example at https://colab.research.google.com/github/YuanGongND/gopt/blob/master/colab/GOPT_GPU.ipynb
OK, I got it! Thanks for your help, now I can run it.