A bug in seq2seq translation #54
In PyTorch v0.2, implicit flattening for `dot` was removed (pytorch/pytorch#2313). Flatten the operands explicitly: `torch.dot(hidden.view(-1), energy.view(-1))`
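A minimal sketch of the change, assuming `hidden` and `energy` are small 2-D tensors of matching size (shapes here are illustrative, not the model's actual shapes):

```python
import torch

# Since PyTorch 0.2, torch.dot requires both arguments to be 1-D;
# it no longer flattens higher-dimensional tensors implicitly.
hidden = torch.randn(1, 4)  # hypothetical 2-D tensors
energy = torch.randn(1, 4)

# Flatten both tensors explicitly before calling torch.dot:
score = torch.dot(hidden.view(-1), energy.view(-1))
```

The result is a scalar equal to the elementwise product summed over all entries.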
@czs0x55aa Thank you very much for answering. But where should I make this change in this case (seq2seq translation)?
You need to modify the `score` function in the Attention model:

```python
def score(self, hidden, encoder_output):
    if self.method == 'dot':
        energy = torch.dot(hidden.view(-1), encoder_output.view(-1))
    elif self.method == 'general':
        energy = self.attn(encoder_output)
        energy = torch.dot(hidden.view(-1), energy.view(-1))
    elif self.method == 'concat':
        energy = self.attn(torch.cat((hidden, encoder_output), 1))
        energy = torch.dot(self.v.view(-1), energy.view(-1))
    return energy
```

but this implementation will be much slower on GPU. See #56.
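One way to avoid the per-timestep `torch.dot` loop that makes this slow on GPU is to score all encoder positions in a single batched matrix multiply. This is a hedged sketch, not the repository's code; the function name `batched_dot_score` and the shapes `(batch, 1, hidden_size)` for `hidden` and `(batch, seq_len, hidden_size)` for `encoder_outputs` are assumptions for illustration:

```python
import torch

def batched_dot_score(hidden, encoder_outputs):
    # hidden:          (batch, 1, hidden_size)       -- hypothetical shapes
    # encoder_outputs: (batch, seq_len, hidden_size)
    # bmm((B, 1, H), (B, H, S)) -> (B, 1, S): every position's dot-product
    # score in one kernel launch instead of a Python loop of torch.dot calls.
    return torch.bmm(hidden, encoder_outputs.transpose(1, 2)).squeeze(1)

hidden = torch.randn(2, 1, 8)
encoder_outputs = torch.randn(2, 5, 8)
scores = batched_dot_score(hidden, encoder_outputs)  # shape (2, 5)
```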
@czs0x55aa Thank you very much! It seems to work. However, it raised another error:

```
ValueError                                Traceback (most recent call last)
in train(input_variable, target_variable, encoder, decoder, encoder_optimizer, decoder_optimizer, criterion, max_length)
~/anaconda/envs/pytorch_nmt3.5/lib/python3.5/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
~/anaconda/envs/pytorch_nmt3.5/lib/python3.5/site-packages/torch/nn/modules/loss.py in forward(self, input, target)
~/anaconda/envs/pytorch_nmt3.5/lib/python3.5/site-packages/torch/nn/functional.py in nll_loss(input, target, weight, size_average, ignore_index)

ValueError: Expected 2 or 4 dimensions (got 1)
```

Btw, I am a beginner at coding. How can I deal with this kind of error raised from library source code? Thank you for your help!
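For context on that `ValueError`: `nll_loss` expects its input to have shape `(N, C)` (or `(N, C, H, W)` for images), i.e. one row of log-probabilities per example, and here it received a 1-D vector. A minimal sketch of the usual fix, reshaping to `(1, C)` with `unsqueeze`; the tensors below are illustrative assumptions, not the tutorial's variables:

```python
import torch
import torch.nn.functional as F

# 1-D vector of log-probabilities over C = 3 classes
log_probs = torch.log_softmax(torch.randn(3), dim=0)
target = torch.tensor([1])  # the correct class index, as a 1-element batch

# Reshape the input to (N, C) = (1, 3) so nll_loss sees one example per row:
loss = F.nll_loss(log_probs.unsqueeze(0), target)
```

For this shape, the loss is simply the negated log-probability of the target class.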
Sorry, I'm not facing this issue.
You should change:
When we run the code to test the models, it raises the error:

```
RuntimeError: Expected argument self to have 1 dimension(s), but has 2 at /Users/soumith/miniconda2/conda-bld/pytorch_1502000696751/work/torch/csrc/generic/TensorMethods.cpp:23020
```

Details are shown below: