Token classification: TypeError: 'NoneType' object is not subscriptable #33

Closed
stefan-it opened this issue Jul 16, 2020 · 4 comments · Fixed by #52
Labels
bug Something isn't working

Comments

@stefan-it
Contributor

🐛 Bug

Information

Hi, I just wanted to train an adapter with the token classification example (using the CoNLL-2003 NER dataset). I'm using the following JSON-based configuration:

{
    "data_dir": "./data_en",
    "labels": "./data_en/labels.txt",
    "model_name_or_path": "bert-large-cased",
    "output_dir": "conll2003-en-1",
    "max_seq_length": 128,
    "num_train_epochs": 3,
    "per_device_train_batch_size": 32,
    "save_steps": 750,
    "seed": 1,
    "do_train": true,
    "do_eval": true,
    "do_predict": true,
    "fp16": true,
    "train_adapter": true,
    "adapter_config": "pfeiffer",
    "language": "en"
}

and run it with python3 run_ner.py <config>.json. The following error message is then thrown:

Traceback (most recent call last):
  File "run_ner.py", line 323, in <module>
    main()
  File "run_ner.py", line 248, in main
    model_path=model_args.model_name_or_path if os.path.isdir(model_args.model_name_or_path) else None
  File "/mnt/adapter-transformers/src/transformers/trainer.py", line 484, in train
    tr_loss += self._training_step(model, inputs, optimizer)
  File "/mnt/adapter-transformers/src/transformers/trainer.py", line 592, in _training_step
    outputs = model(**inputs, adapter_names=self.adapter_names)
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 550, in __call__
    result = self.forward(*input, **kwargs)
  File "/mnt/adapter-transformers/src/transformers/modeling_bert.py", line 1463, in forward
    adapter_names=adapter_names,
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 550, in __call__
    result = self.forward(*input, **kwargs)
  File "/mnt/adapter-transformers/src/transformers/modeling_bert.py", line 780, in forward
    adapter_names=adapter_names,
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 550, in __call__
    result = self.forward(*input, **kwargs)
  File "/mnt/adapter-transformers/src/transformers/modeling_bert.py", line 437, in forward
    adapter_names=adapter_names,
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 550, in __call__
    result = self.forward(*input, **kwargs)
  File "/mnt/adapter-transformers/src/transformers/modeling_bert.py", line 403, in forward
    layer_output = self.output(intermediate_output, attention_output, attention_mask, adapter_names=adapter_names)
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 550, in __call__
    result = self.forward(*input, **kwargs)
  File "/mnt/adapter-transformers/src/transformers/modeling_bert.py", line 368, in forward
    hidden_states = self.adapters_forward(hidden_states, input_tensor, attention_mask, adapter_names)
  File "/mnt/adapter-transformers/src/transformers/adapter_bert.py", line 443, in adapters_forward
    adapter_stack=adapter_stack,
  File "/mnt/adapter-transformers/src/transformers/adapter_bert.py", line 374, in adapter_stack_layer
    hidden_states, query, residual = self.get_adapter_preparams(adapter_config, hidden_states, input_tensor)
  File "/mnt/adapter-transformers/src/transformers/adapter_bert.py", line 320, in get_adapter_preparams
    if adapter_config["residual_before_ln"]:
TypeError: 'NoneType' object is not subscriptable

Do I need to provide additional options? 🤔

Many thanks in advance,

Stefan

Environment info

  • transformers version: latest from master, ee2adad
  • Platform: nvidia/cuda:10.2-cudnn7-devel
  • Python version: 3.6.9
  • PyTorch version (GPU?): 1.5.1 + GPU
  • Tensorflow version (GPU?): None
  • Using GPU in script?: Yes + fp16
  • Using distributed or parallel set-up in script?: No
stefan-it added the bug label on Jul 16, 2020
@JoPfeiff
Member

Hey, can you try setting "language": None? The language argument is only used when stacking a task adapter (such as NER) on top of a pre-trained language adapter (such as en). To actually use this flag, e.g. with bert-base-multilingual-cased, you would need to additionally load a language adapter such as en/wiki@ukp.
Thanks for pointing this out; this is a bug and we will fix it!
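
For reference, the corrected configuration would look like this (a minimal sketch; JSON uses null where Python uses None, and the "language" key can also be dropped entirely):

{
    "data_dir": "./data_en",
    "labels": "./data_en/labels.txt",
    "model_name_or_path": "bert-large-cased",
    "output_dir": "conll2003-en-1",
    "max_seq_length": 128,
    "num_train_epochs": 3,
    "per_device_train_batch_size": 32,
    "save_steps": 750,
    "seed": 1,
    "do_train": true,
    "do_eval": true,
    "do_predict": true,
    "fp16": true,
    "train_adapter": true,
    "adapter_config": "pfeiffer",
    "language": null
}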

@stefan-it
Contributor Author

stefan-it commented Jul 16, 2020

Hi @JoPfeiff,

thanks for the hint - it's working now! I ran some experiments on the WNUT-17 dataset (not on CoNLL-2003 as originally stated in the issue) to compare against a fully fine-tuned model:

| Model | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. |
|---|---|---|---|---|---|---|
| Fully fine-tuned | (60.60) / 48.65 | (58.58) / 47.79 | (58.24) / 48.48 | (61.10) / 48.06 | (58.47) / 48.34 | (59.40) / 48.26 |
| Adapter (Pfeiffer) | (57.81) / 46.36 | (58.31) / 45.89 | (58.31) / 46.78 | (55.78) / 46.51 | (57.75) / 46.85 | (57.59) / 46.48 |

There's currently a performance gap of 1.78%, so do you have any hints for boosting performance? 🤔 Thanks ❤️

@JoPfeiff
Member

We have found that, due to the newly introduced parameters, adapters require a higher learning rate. A default lr=0.0001 seems to work well in most cases. Also, depending on the dataset size, you might need to train longer than usual.
We usually trained for up to 30 epochs and stored the best model based on the results on the dev set.

You can also try out the houlsby architecture or increase the bottleneck size (currently set via reduction_factor).

We will soon add a small guide on how to train adapters where these kinds of tips will be included :)
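
Applied to the configuration from this issue, those suggestions might look like the following (a hedged sketch with illustrative, untuned values; only the changed keys are shown, all other keys stay as in the original config):

{
    "learning_rate": 0.0001,
    "num_train_epochs": 30,
    "adapter_config": "houlsby"
}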
