ERROR:ignite.engine.engine.Engine:Current run is terminating due to exception: 'NoneType' object has no attribute 'data'. ERROR:ignite.engine.engine.Engine:Engine run is terminating due to exception: 'NoneType' object has no attribute 'data'. #14
Comments
Hi polarsun!
So I think I know what's happening, and it's related to this issue: #4 (comment). When I developed this code, gradients for PyTorch tensors were initialized by default. In recent versions of PyTorch, a tensor's .grad attribute is None unless the tensor is needed in the backprop computation. Under your settings you are not using pretrained embeddings, but by default the code does not update the word embeddings during training, since it assumes you are using GloVe or some other pretrained embeddings. Because the embeddings are not updated during training, they have no .grad attribute, which is what the trainer is tripping over. You can patch the code yourself as described in the link above to fix the issue. Unfortunately, I don't have the bandwidth right now to keep this repository up to date with the latest versions of PyTorch.
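The behavior described above is easy to reproduce in recent PyTorch; this small illustrative snippet (not from the repo) shows that a parameter excluded from the backward pass keeps grad == None:

```python
import torch

# A parameter excluded from the backward pass never receives a gradient.
frozen = torch.nn.Parameter(torch.randn(3), requires_grad=False)
used = torch.nn.Parameter(torch.randn(3))

loss = (used * 2.0).sum()  # `frozen` plays no role in the loss
loss.backward()

print(frozen.grad)  # None -> so frozen.grad.data raises AttributeError
print(used.grad)    # tensor([2., 2., 2.])
```

Any code that unconditionally touches param.grad.data, as older PyTorch code often did, will hit exactly the AttributeError reported here.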
Additionally, since you are not using pretrained embeddings, you should probably learn them. To do so, I would add --update-rule update-all after the --emb argument in your command line; I think doing this will actually fix the issue. In any case, while I don't know what your use case is, the AMI dataset is quite small and you will probably want to use pretrained embeddings anyway -- see the README for how to do so.
Lemme know if you need more help patching the code!
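For reference, the patch from the linked comment amounts to skipping parameters whose .grad is None before touching .data. A minimal sketch of the idea, assuming the crash happens in a gradient-clipping loop like the one in labels_mle_trainer.py (the function and variable names here are illustrative, not the exact nnsum code):

```python
import torch
import torch.nn as nn

def clip_grads(model, max_norm=5.0):
    # Parameters that did not participate in the backward pass
    # (e.g. embeddings frozen by update_rule='fix-all') have
    # grad == None in modern PyTorch, so guard before using .data.
    for param in model.parameters():
        if param.grad is not None:
            param.grad.data.clamp_(-max_norm, max_norm)

# Toy model with a frozen embedding to demonstrate the guard.
emb = nn.Embedding(10, 4)
emb.weight.requires_grad = False  # embeddings are never updated
lin = nn.Linear(4, 1)
model = nn.Sequential(emb, lin)

loss = model(torch.tensor([1, 2, 3])).sum()
loss.backward()
clip_grads(model)  # no AttributeError, even though emb has no grad
```

Equivalently, you can filter the parameter list up front with `(p for p in model.parameters() if p.grad is not None)` wherever the trainer iterates over gradients.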
Best,
Chris
When I try to use this repo to run the AMI dataset, I get this error. Can you help me?
My command is:
python train_model.py --trainer --train-inputs /home/workspace/IdeaProjects/word2sentExtract/datasets/ami/inputs/train/ --train-labels /home/workspace/IdeaProjects/word2sentExtract/datasets/ami/labels/train/ --valid-inputs /home/workspace/IdeaProjects/word2sentExtract/datasets/ami/inputs/valid/ --valid-labels /home/workspace/IdeaProjects/word2sentExtract/datasets/ami/labels/valid/ --valid-refs /home/workspace/IdeaProjects/word2sentExtract/datasets/ami/human-abstracts/valid/ --model best_model --results results/ --seed 12345678 --emb --enc cnn --ext s2s --bidirectional
The error message is as follows:
`{'train_inputs': PosixPath('/home/workspace/IdeaProjects/word2sentExtract/datasets/ami/inputs/train'), 'train_labels': PosixPath('/home/workspace/IdeaProjects/word2sentExtract/datasets/ami/labels/train'), 'valid_inputs': PosixPath('/home/workspace/IdeaProjects/word2sentExtract/datasets/ami/inputs/valid'), 'valid_labels': PosixPath('/home/workspace/IdeaProjects/word2sentExtract/datasets/ami/labels/valid'), 'valid_refs': PosixPath('/home/workspace/IdeaProjects/word2sentExtract/datasets/ami/human-abstracts/valid'), 'seed': 12345678, 'epochs': 50, 'batch_size': 32, 'gpu': -1, 'teacher_forcing': 25, 'sentence_limit': 50, 'weighted': False, 'loader_workers': 8, 'raml_samples': 25, 'raml_temp': 0.05, 'summary_length': 100, 'remove_stopwords': False, 'shuffle_sents': False, 'model': PosixPath('best_model'), 'results': PosixPath('results')}
{'embedding_size': 200, 'pretrained_embeddings': None, 'top_k': None, 'at_least': 1, 'word_dropout': 0.0, 'embedding_dropout': 0.25, 'update_rule': 'fix-all', 'filter_pretrained': False}
{'dropout': 0.25, 'filter_windows': [1, 2, 3, 4, 5, 6], 'feature_maps': [25, 25, 50, 50, 50, 50], 'OPT': 'cnn'}
{'hidden_size': 300, 'bidirectional': True, 'rnn_dropout': 0.25, 'num_layers': 1, 'cell': 'gru', 'mlp_layers': [100], 'mlp_dropouts': [0.25], 'OPT': 's2s'}
Initializing vocabulary and embeddings.
INFO:root: Creating new embeddings with normal initializaion.
INFO:root: # Unique Words: 8663
INFO:root: After filtering, # Unique Words: 8665
WARNING:root: Embeddings are randomly initialized but update rule is not 'update-all'
INFO:root: EmbeddingContext(
(embeddings): Embedding(8665, 200, padding_idx=0)
)
Loading training data.
Loading validation data.
INFO:root: Model parameter initialization started.
INFO:root: EmbeddingContext initialization started.
INFO:root: Initializing with random normal.
INFO:root: EmbeddingContext initialization finished.
INFO:root: CNNSentenceEncoder initialization started.
INFO:root: filters.0.weight (25,1,1,1,200): Xavier normal init.
INFO:root: filters.0.bias (25): constant (0) init.
INFO:root: filters.1.weight (25,1,1,2,200): Xavier normal init.
INFO:root: filters.1.bias (25): constant (0) init.
INFO:root: filters.2.weight (50,1,1,3,200): Xavier normal init.
INFO:root: filters.2.bias (50): constant (0) init.
INFO:root: filters.3.weight (50,1,1,4,200): Xavier normal init.
INFO:root: filters.3.bias (50): constant (0) init.
INFO:root: filters.4.weight (50,1,1,5,200): Xavier normal init.
INFO:root: filters.4.bias (50): constant (0) init.
INFO:root: filters.5.weight (50,1,1,6,200): Xavier normal init.
INFO:root: filters.5.bias (50): constant (0) init.
INFO:root: CNNSentenceEncoder initialization finished.
INFO:root: Seq2SeqSentenceExtractor initialization started.
INFO:root: decoder_start (250): random normal init.
INFO:root: encoder_rnn.weight_ih_l0 (900,250): Xavier normal init.
INFO:root: encoder_rnn.weight_hh_l0 (900,300): Xavier normal init.
INFO:root: encoder_rnn.bias_ih_l0 (900): constant (0) init.
INFO:root: encoder_rnn.bias_hh_l0 (900): constant (0) init.
INFO:root: encoder_rnn.weight_ih_l0_reverse (900,250): Xavier normal init.
INFO:root: encoder_rnn.weight_hh_l0_reverse (900,300): Xavier normal init.
INFO:root: encoder_rnn.bias_ih_l0_reverse (900): constant (0) init.
INFO:root: encoder_rnn.bias_hh_l0_reverse (900): constant (0) init.
INFO:root: decoder_rnn.weight_ih_l0 (900,250): Xavier normal init.
INFO:root: decoder_rnn.weight_hh_l0 (900,300): Xavier normal init.
INFO:root: decoder_rnn.bias_ih_l0 (900): constant (0) init.
INFO:root: decoder_rnn.bias_hh_l0 (900): constant (0) init.
INFO:root: decoder_rnn.weight_ih_l0_reverse (900,250): Xavier normal init.
INFO:root: decoder_rnn.weight_hh_l0_reverse (900,300): Xavier normal init.
INFO:root: decoder_rnn.bias_ih_l0_reverse (900): constant (0) init.
INFO:root: decoder_rnn.bias_hh_l0_reverse (900): constant (0) init.
INFO:root: mlp.0.weight (100,1200): Xavier normal init.
INFO:root: mlp.0.bias (100): constant (0) init.
INFO:root: mlp.3.weight (1,100): Xavier normal init.
INFO:root: mlp.3.bias (1): constant (0) init.
INFO:root: Seq2SeqSentenceExtractor initialization finished.
INFO:root: Model parameter initialization finished.
INFO:ignite.engine.engine.Engine:Engine run starting with max_epochs=50.
ERROR:ignite.engine.engine.Engine:Current run is terminating due to exception: 'NoneType' object has no attribute 'data'.
ERROR:ignite.engine.engine.Engine:Engine run is terminating due to exception: 'NoneType' object has no attribute 'data'.
Traceback (most recent call last):
File "train_model.py", line 79, in
main()
File "train_model.py", line 76, in main
results_path=args["trainer"]["results"])
File "/home/qxm/anaconda3/lib/python3.7/site-packages/nnsum-1.0-py3.7.egg/nnsum/trainer/labels_mle_trainer.py", line 164, in labels_mle_trainer
File "/home/qxm/anaconda3/lib/python3.7/site-packages/pytorch_ignite-0.5.0.dev20200721-py3.7.egg/ignite/engine/engine.py", line 658, in run
return self._internal_run()
File "/home/qxm/anaconda3/lib/python3.7/site-packages/pytorch_ignite-0.5.0.dev20200721-py3.7.egg/ignite/engine/engine.py", line 729, in _internal_run
self._handle_exception(e)
File "/home/qxm/anaconda3/lib/python3.7/site-packages/pytorch_ignite-0.5.0.dev20200721-py3.7.egg/ignite/engine/engine.py", line 437, in _handle_exception
raise e
File "/home/qxm/anaconda3/lib/python3.7/site-packages/pytorch_ignite-0.5.0.dev20200721-py3.7.egg/ignite/engine/engine.py", line 697, in _internal_run
time_taken = self._run_once_on_dataset()
File "/home/qxm/anaconda3/lib/python3.7/site-packages/pytorch_ignite-0.5.0.dev20200721-py3.7.egg/ignite/engine/engine.py", line 795, in _run_once_on_dataset
self._handle_exception(e)
File "/home/qxm/anaconda3/lib/python3.7/site-packages/pytorch_ignite-0.5.0.dev20200721-py3.7.egg/ignite/engine/engine.py", line 437, in _handle_exception
raise e
File "/home/qxm/anaconda3/lib/python3.7/site-packages/pytorch_ignite-0.5.0.dev20200721-py3.7.egg/ignite/engine/engine.py", line 778, in _run_once_on_dataset
self.state.output = self._process_function(self, self.state.batch)
File "/home/qxm/anaconda3/lib/python3.7/site-packages/nnsum-1.0-py3.7.egg/nnsum/trainer/labels_mle_trainer.py", line 188, in _update
AttributeError: 'NoneType' object has no attribute 'data'
`
How can I fix it?
Best wishes!