
RuntimeError: expand(torch.LongTensor{[50, 1]}, size=[50]): the number of sizes provided (1) must be greater or equal to the number of dimensions in the tensor (2) #32

Closed
weixiao12345678 opened this issue May 29, 2018 · 4 comments


@weixiao12345678

I was trying to run the program with:
python3 train_wc.py --gpu -1 --train_file ./data/ner/train.txt --dev_file ./data/ner/testa.txt --test_file ./data/ner/testb.txt --checkpoint ./checkpoint/ner_ --caseless --fine_tune --high_way --co_train --least_iters 100
I got the following error:

embedding size: '400060'
constructing dataset
building model
/usr/local/python3/lib/python3.6/site-packages/torch/nn/modules/rnn.py:38: UserWarning: dropout option adds dropout after all but last recurrent layer, so non-zero dropout expects num_layers greater than 1, but got dropout=0.55 and num_layers=1
  "num_layers={}".format(dropout, num_layers))
/home/yankai/weixiao/LM-LSTM-CRF/model/utils.py:805: UserWarning: nn.init.uniform is now deprecated in favor of nn.init.uniform_.
  nn.init.uniform(input_linear.weight, -bias, bias)
/home/yankai/weixiao/LM-LSTM-CRF/model/utils.py:816: UserWarning: nn.init.uniform is now deprecated in favor of nn.init.uniform_.
  nn.init.uniform(weight, -bias, bias)
/home/yankai/weixiao/LM-LSTM-CRF/model/utils.py:819: UserWarning: nn.init.uniform is now deprecated in favor of nn.init.uniform_.
  nn.init.uniform(weight, -bias, bias)

Tot it 1406 (epoch 0): 0it [00:00, ?it/s]
train_wc.py:201: UserWarning: torch.nn.utils.clip_grad_norm is now deprecated in favor of torch.nn.utils.clip_grad_norm_.
  nn.utils.clip_grad_norm(ner_model.parameters(), args.clip_grad)
Traceback (most recent call last):
  File "train_wc.py", line 212, in <module>
    dev_f1, dev_pre, dev_rec, dev_acc = evaluator.calc_score(ner_model, dev_dataset_loader)
  File "/home/yankai/weixiao/LM-LSTM-CRF/model/evaluator.py", line 209, in calc_score
    decoded = self.decoder.decode(scores.data, mask_v.data)
  File "/home/yankai/weixiao/LM-LSTM-CRF/model/crf.py", line 379, in decode
    decode_idx[idx] = pointer
RuntimeError: expand(torch.LongTensor{[50, 1]}, size=[50]): the number of sizes provided (1) must be greater or equal to the number of dimensions in the tensor (2)
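For context (this note is not part of the original thread): the message says `pointer` has shape [50, 1] while the destination row `decode_idx[idx]` is 1-D with length 50, so the in-place assignment cannot broadcast. This kind of mismatch typically appears when running the repository under a newer PyTorch release than it was written for. Below is a minimal sketch of the shape problem and the usual remedy; the names and sizes are illustrative, not the exact code from model/crf.py.

```python
# Sketch only -- reproduces the shape mismatch from the traceback and shows
# the usual remedy: drop the trailing singleton dimension before assigning
# into a 1-D row.
import torch

batch_size = 50
decode_idx = torch.zeros(5, batch_size, dtype=torch.long)   # one row per time step
pointer = torch.zeros(batch_size, 1, dtype=torch.long)      # shape [50, 1], as in the error

# decode_idx[0] = pointer             # RuntimeError: expand(... [50, 1] ..., size=[50])
decode_idx[0] = pointer.squeeze(1)    # shape [50] now matches the destination row
```

In other words, squeezing the trailing dimension (or using `pointer.view(-1)`) makes the shapes compatible; whether that is the right change for line 379 of model/crf.py depends on how `pointer` is built there.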

@SeekPoint

How do I fix this?

@ahmadshabbir2468

How can this be fixed? Here is my configuration and the error I get:

Namespace(batch_size=10, caseless=True, char_dim=30, char_hidden=300, char_layers=1, checkpoint='./checkpoint/ner_', clip_grad=5.0, co_train=True, dev_file='./data/ner/testa.txt', drop_out=0.2, emb_file='./embedding/glove.6B.100d.txt', epoch=200, eva_matrix='fa', fine_tune=False, gpu=0, high_way=True, highway_layers=1, lambda0=1, least_iters=20, load_check_point='', load_opt=False, lr=0.015, lr_decay=0.05, mini_count=5, momentum=0.9, patience=15, rand_embedding=False, shrink_embedding=False, small_crf=True, start_epoch=0, test_file='./data/ner/testb.txt', train_file='./data/ner/train.txt', unk='unk', update='sgd', word_dim=100, word_hidden=300, word_layers=1)

loading corpus
constructing coding table
feature size: '171'
loading embedding
embedding size: '400002'
constructing dataset
building model

-------> /usr/local/lib/python3.6/dist-packages/torch/nn/modules/rnn.py:54: UserWarning: dropout option adds dropout after all but last recurrent layer, so non-zero dropout expects num_layers greater than 1, but got dropout=0.2 and num_layers=1
"num_layers={}".format(dropout, num_layers))
device: 0
Traceback (most recent call last):
  File "train_wc.py", line 162, in <module>
    ner_model.cuda()
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 265, in cuda
    return self._apply(lambda t: t.cuda(device))
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 193, in _apply
    module._apply(fn)
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/rnn.py", line 127, in _apply
    self.flatten_parameters()
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/rnn.py", line 123, in flatten_parameters
    self.batch_first, bool(self.bidirectional))
RuntimeError: cuDNN error: CUDNN_STATUS_EXECUTION_FAILED
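Note (not from the original thread): this second traceback fails earlier, while cuDNN flattens the LSTM weights during `ner_model.cuda()`. That usually points to a mismatch between the installed PyTorch build and the local CUDA driver/cuDNN rather than to this repository's code. A small check to isolate the environment, assuming a CUDA-enabled PyTorch build is installed:

```python
# Environment check (sketch, not part of LM-LSTM-CRF): if this tiny
# cuDNN-backed LSTM fails the same way, the problem is the PyTorch/CUDA
# installation, not the model code in train_wc.py.
import torch
import torch.nn as nn

print(torch.__version__, torch.version.cuda, torch.cuda.is_available())
print(torch.backends.cudnn.version(), torch.backends.cudnn.enabled)

# .cuda() calls flatten_parameters(), the same step that fails above.
lstm = nn.LSTM(input_size=8, hidden_size=8, num_layers=1).cuda()
out, _ = lstm(torch.randn(4, 2, 8).cuda())
print(out.shape)   # torch.Size([4, 2, 8]) if cuDNN is working
```

If this snippet fails with the same CUDNN_STATUS_EXECUTION_FAILED, the fix is on the environment side (for example, installing a PyTorch wheel that matches the driver's CUDA version, or setting torch.backends.cudnn.enabled = False as a workaround).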

@zakarianamikaz

I have the same error. Can you show me how you fixed it, please?

@vasilikivmo

@weixiao12345678 you have closed this issue. Can you please explain how you solved it? Thank you!
