I'm testing the toy example.
I'm using Google Colab.
With the master branch on CPU everything works fine.
But when I try to use the GPU and run the training, I get:
/usr/local/lib/python2.7/dist-packages/torch/nn/functional.py:52: UserWarning: size_average and reduce args will be deprecated, please use reduction='elementwise_mean' instead.
warnings.warn(warning.format(ret))
2018-10-28 15:59:34,815 root INFO Namespace(dev_path='data/toy_reverse/dev/data.txt', expt_dir='./experiment', load_checkpoint=None, log_level='info', resume=False, train_path='data/toy_reverse/train/data.txt')
/usr/local/lib/python2.7/dist-packages/torch/nn/functional.py:52: UserWarning: size_average and reduce args will be deprecated, please use reduction='sum' instead.
warnings.warn(warning.format(ret))
/usr/local/lib/python2.7/dist-packages/torch/nn/modules/rnn.py:38: UserWarning: dropout option adds dropout after all but last recurrent layer, so non-zero dropout expects num_layers greater than 1, but got dropout=0.2 and num_layers=1
"num_layers={}".format(dropout, num_layers))
2018-10-28 15:59:37,915 seq2seq.trainer.supervised_trainer INFO Optimizer: Adam (
Parameter Group 0
    amsgrad: False
    betas: (0.9, 0.999)
    eps: 1e-08
    lr: 0.001
    weight_decay: 0
), Scheduler: None
Traceback (most recent call last):
  File "examples/sample.py", line 129, in <module>
    resume=opt.resume)
  File "/usr/local/lib/python2.7/dist-packages/seq2seq/trainer/supervised_trainer.py", line 186, in train
    teacher_forcing_ratio=teacher_forcing_ratio)
  File "/usr/local/lib/python2.7/dist-packages/seq2seq/trainer/supervised_trainer.py", line 103, in _train_epoches
    loss = self._train_batch(input_variables, input_lengths.tolist(), target_variables, model, teacher_forcing_ratio)
  File "/usr/local/lib/python2.7/dist-packages/seq2seq/trainer/supervised_trainer.py", line 55, in _train_batch
    teacher_forcing_ratio=teacher_forcing_ratio)
  File "/usr/local/lib/python2.7/dist-packages/torch/nn/modules/module.py", line 477, in __call__
    result = self.forward(*input, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/seq2seq/models/seq2seq.py", line 48, in forward
    encoder_outputs, encoder_hidden = self.encoder(input_variable, input_lengths)
  File "/usr/local/lib/python2.7/dist-packages/torch/nn/modules/module.py", line 477, in __call__
    result = self.forward(*input, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/seq2seq/models/EncoderRNN.py", line 68, in forward
    embedded = self.embedding(input_var)
  File "/usr/local/lib/python2.7/dist-packages/torch/nn/modules/module.py", line 477, in __call__
    result = self.forward(*input, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/torch/nn/modules/sparse.py", line 110, in forward
    self.norm_type, self.scale_grad_by_freq, self.sparse)
  File "/usr/local/lib/python2.7/dist-packages/torch/nn/functional.py", line 1110, in embedding
    return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
RuntimeError: Expected object of type torch.cuda.LongTensor but found type torch.LongTensor for argument #3 'index'
Then I tested the dev branch. That error is gone, but training stops after 2 epochs (??).
And as a result I get wrong output sequences. Is this OK?
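(Note for anyone hitting the same RuntimeError on master: it usually means the input batches are still CPU tensors while the model's weights have been moved to the GPU, so torch.embedding() receives a torch.LongTensor where it expects a torch.cuda.LongTensor. A minimal sketch of the usual fix; model, input_variables, and target_variables stand in for whatever names your training script uses:)

    import torch

    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

    # Moves all parameters, including the embedding weights, to the GPU.
    model = model.to(device)

    # Each batch must live on the same device as the weights, otherwise
    # the embedding lookup fails with exactly the error above.
    input_variables = input_variables.to(device)
    target_variables = target_variables.to(device)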
For the dev branch, you should change the parameters ;-) I ran into the same problems a couple of months ago. The batch size and number of epochs on master were 32 and 6, but on dev they are 512 and 2. If you change them back, it will work.
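Concretely, a sketch of the change in examples/sample.py, assuming the dev branch still builds the trainer the way master does (the keyword names below are from master; check your copy):

    # examples/sample.py: restore the master-branch training settings
    t = SupervisedTrainer(loss=loss, batch_size=32,   # dev default: 512
                          checkpoint_every=50,
                          print_every=100,
                          expt_dir=opt.expt_dir)
    seq2seq = t.train(seq2seq, train,
                      num_epochs=6,                   # dev default: 2
                      dev_data=dev,
                      optimizer=optimizer,
                      teacher_forcing_ratio=0.5,
                      resume=opt.resume)

With num_epochs=2 training simply stops after the second epoch as configured, and on the tiny toy dataset a batch size of 512 leaves very few updates per epoch, which would explain the wrong output sequences.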