This repository has been archived by the owner on Dec 16, 2022. It is now read-only.
Describe the bug
I'm trying to train language models using the built-in LanguageModelingReader and LanguageModel. I get TypeError: forward() got an unexpected keyword argument 'input_tokens', even though the LanguageModelingReader yields Instances with 'input_tokens' Fields (see here).
The full stack trace is:
2019-10-04 09:24:00,362 - INFO - allennlp.training.trainer - Training
0%| | 0/34685 [00:00<?, ?it/s]Traceback (most recent call last):
File "/Users/bacon/miniconda/bin/allennlp", line 10, in <module>
sys.exit(run())
File "/Users/bacon/miniconda/lib/python3.7/site-packages/allennlp/run.py", line 18, in run
main(prog="allennlp")
File "/Users/bacon/miniconda/lib/python3.7/site-packages/allennlp/commands/__init__.py", line 102, in main
args.func(args)
File "/Users/bacon/miniconda/lib/python3.7/site-packages/allennlp/commands/train.py", line 124, in train_model_from_args
args.cache_prefix)
File "/Users/bacon/miniconda/lib/python3.7/site-packages/allennlp/commands/train.py", line 168, in train_model_from_file
cache_directory, cache_prefix)
File "/Users/bacon/miniconda/lib/python3.7/site-packages/allennlp/commands/train.py", line 252, in train_model
metrics = trainer.train()
File "/Users/bacon/miniconda/lib/python3.7/site-packages/allennlp/training/trainer.py", line 478, in train
train_metrics = self._train_epoch(epoch)
File "/Users/bacon/miniconda/lib/python3.7/site-packages/allennlp/training/trainer.py", line 320, in _train_epoch
loss = self.batch_loss(batch_group, for_training=True)
File "/Users/bacon/miniconda/lib/python3.7/site-packages/allennlp/training/trainer.py", line 261, in batch_loss
output_dict = self.model(**batch)
File "/Users/bacon/miniconda/lib/python3.7/site-packages/torch/nn/modules/module.py", line 547, in __call__
result = self.forward(*input, **kwargs)
TypeError: forward() got an unexpected keyword argument 'input_tokens'
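For what it's worth, the failure mode is easy to reproduce outside AllenNLP: the trainer calls self.model(**batch) (see trainer.py line 261 above), so any batch key that is not a parameter of the model's forward() raises exactly this TypeError. A minimal sketch with a hypothetical stand-in model (TinyModel is my own illustration, not AllenNLP code):

```python
import torch


class TinyModel(torch.nn.Module):
    # Hypothetical stand-in: like LanguageModel, its forward() does NOT
    # accept a parameter named 'input_tokens'.
    def forward(self, source):
        return {"loss": source.sum()}


model = TinyModel()
# The reader produces a field named 'input_tokens', so the batch dict
# arrives with that key.
batch = {"input_tokens": torch.zeros(2, 3)}

try:
    model(**batch)  # mirrors trainer.py: output_dict = self.model(**batch)
except TypeError as e:
    print(e)  # message names the unexpected keyword 'input_tokens'
```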
To Reproduce
Use a config.json slightly manually edited from the output of the configuration wizard, then run allennlp train config.json -s model, with a corpus in "corpus.txt" and an empty directory "model". As you can see from the traceback, everything up until the actual training works fine, and then it produces the error above.
Expected behavior
I would have expected that naming the token_indexer and token_embedder "input_tokens" would be all you need to do in order to use the out-of-the-box model with the out-of-the-box dataset reader.
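To illustrate what I mean: my understanding of the contract from the tutorial is that forward()'s parameter names must match the Instance field names, so a model compatible with this reader would look roughly like the sketch below (ReaderCompatibleModel and its loss are my own hypothetical illustration, not the actual LanguageModel code):

```python
import torch


class ReaderCompatibleModel(torch.nn.Module):
    # Hypothetical model whose forward() parameter names match the
    # 'input_tokens' / 'output_tokens' fields that the reader yields.
    def forward(self, input_tokens, output_tokens=None):
        out = {"logits": input_tokens.float()}
        if output_tokens is not None:
            # Placeholder loss, just to show the training-time path.
            out["loss"] = (input_tokens.float() - output_tokens.float()).pow(2).mean()
        return out


batch = {
    "input_tokens": torch.ones(2, 3),
    "output_tokens": torch.zeros(2, 3),
}
result = ReaderCompatibleModel()(**batch)  # no TypeError: the names line up
```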
System (please complete the following information):
OS: OSX
Python version: 3.7.4
AllenNLP version: v0.9.0
PyTorch version: v1.2.0 from AllenNLP
Additional context
This issue is very similar to the discussion in #2528. I googled around and checked StackOverflow and other related issues in allennlp, but did not find any solutions. The documentation and tutorial do make it very clear that "The forward method expects dicts of tensors as input, and it expects their names to be the names of the fields in your Instance.", but in this case I think I'm doing that and I'm still getting an error.