
GPU error when run sample code #180

Closed
vachelch opened this issue Nov 20, 2018 · 4 comments

Comments

@vachelch

When I run the sample code,
python examples/sample.py --train_path $TRAIN_PATH --dev_path $DEV_PATH

a GPU error appears, as shown below. It seems the input data is not a GPU tensor, and I have not been able to solve it. Has anyone else run into this error?


/home/Vachel/env3/lib/python3.5/site-packages/torch/nn/functional.py:52: UserWarning: size_average and reduce args will be deprecated, please use reduction='elementwise_mean' instead.
warnings.warn(warning.format(ret))
2018-11-20 23:33:48,774 root INFO Namespace(dev_path='data/toy_reverse/dev/data.txt', expt_dir='./experiment', load_checkpoint=None, log_level='info', resume=False, train_path='data/toy_reverse/train/data.txt')
/home/Vachel/env3/lib/python3.5/site-packages/torch/nn/functional.py:52: UserWarning: size_average and reduce args will be deprecated, please use reduction='sum' instead.
warnings.warn(warning.format(ret))
/home/Vachel/env3/lib/python3.5/site-packages/torch/nn/modules/rnn.py:38: UserWarning: dropout option adds dropout after all but last recurrent layer, so non-zero dropout expects num_layers greater than 1, but got dropout=0.2 and num_layers=1
"num_layers={}".format(dropout, num_layers))
2018-11-20 23:33:51,817 seq2seq.trainer.supervised_trainer INFO Optimizer: Adam (
Parameter Group 0
amsgrad: False
betas: (0.9, 0.999)
eps: 1e-08
lr: 0.001
weight_decay: 0
), Scheduler: None
Traceback (most recent call last):
File "examples/sample.py", line 129, in <module>
resume=opt.resume)
File "/home/Vachel/SDML/hw3-0/pytorch-seq2seq/seq2seq/trainer/supervised_trainer.py", line 186, in train
teacher_forcing_ratio=teacher_forcing_ratio)
File "/home/Vachel/SDML/hw3-0/pytorch-seq2seq/seq2seq/trainer/supervised_trainer.py", line 103, in _train_epoches
loss = self._train_batch(input_variables, input_lengths.tolist(), target_variables, model, teacher_forcing_ratio)
File "/home/Vachel/SDML/hw3-0/pytorch-seq2seq/seq2seq/trainer/supervised_trainer.py", line 55, in _train_batch
teacher_forcing_ratio=teacher_forcing_ratio)
File "/home/Vachel/env3/lib/python3.5/site-packages/torch/nn/modules/module.py", line 477, in __call__
result = self.forward(*input, **kwargs)
File "/home/Vachel/SDML/hw3-0/pytorch-seq2seq/seq2seq/models/seq2seq.py", line 48, in forward
encoder_outputs, encoder_hidden = self.encoder(input_variable, input_lengths)
File "/home/Vachel/env3/lib/python3.5/site-packages/torch/nn/modules/module.py", line 477, in __call__
result = self.forward(*input, **kwargs)
File "/home/Vachel/SDML/hw3-0/pytorch-seq2seq/seq2seq/models/EncoderRNN.py", line 68, in forward
embedded = self.embedding(input_var)
File "/home/Vachel/env3/lib/python3.5/site-packages/torch/nn/modules/module.py", line 477, in __call__
result = self.forward(*input, **kwargs)
File "/home/Vachel/env3/lib/python3.5/site-packages/torch/nn/modules/sparse.py", line 110, in forward
self.norm_type, self.scale_grad_by_freq, self.sparse)
File "/home/Vachel/env3/lib/python3.5/site-packages/torch/nn/functional.py", line 1110, in embedding
return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
RuntimeError: Expected object of type torch.cuda.LongTensor but found type torch.LongTensor for argument #3 'index'

@pskrunner14

@soHardToPickName please see #169.

@KevinMatrix

@soHardToPickName please see #169.

Your answer is not helpful. I solved this by changing

device = None if torch.cuda.is_available() else -1

to

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

at seq2seq/trainer/supervised_trainer.py:75.

Apply the same change at seq2seq/evaluator/evaluator.py:38.
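The fix above can be sketched as a minimal, self-contained example (the tensor and variable names here are illustrative, not the library's own). The original `RuntimeError` arises because the embedding weights live on the GPU while the index tensor is still a CPU `LongTensor`; creating one `torch.device` and moving both the module and its inputs to it removes the mismatch:

```python
import torch
import torch.nn as nn

# The old code used the pre-0.4 integer/None device flag;
# torch.device works on both CPU-only and CUDA builds of PyTorch >= 0.4.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# A stand-in for the encoder's embedding layer, placed on the chosen device.
embedding = nn.Embedding(num_embeddings=10, embedding_dim=4).to(device)

# Feeding a CPU LongTensor to a CUDA module raises the error from the
# traceback; .to(device) puts the index tensor on the same device first.
input_var = torch.tensor([[1, 2, 3]], dtype=torch.long).to(device)
embedded = embedding(input_var)
print(embedded.shape)  # torch.Size([1, 3, 4])
```

On a machine without a GPU, `device` falls back to `"cpu"` and the same code still runs, which is why this pattern is preferred over hard-coding `.cuda()` calls.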

@vachelch
Author

@soHardToPickName please see #169.

Your answer is not helpful. I solved this by changing

device = None if torch.cuda.is_available() else -1

to

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

at seq2seq/trainer/supervised_trainer.py:75.

Apply the same change at seq2seq/evaluator/evaluator.py:38.

Thanks very much for your answer! That was exactly the solution. Sorry for the late reply~

@Huijun-Cui

Good!
