
(NER) Error when training a custom dataset with document-level #22

Closed
jcazeredo opened this issue Oct 23, 2021 · 3 comments

jcazeredo commented Oct 23, 2021

Hello everyone,

I'm getting an error when training a document-level model on a custom dataset for the NER task. I copied the configuration file xlnet-doc-en-ner-finetune.yaml and modified it, following the documentation in this repository, to point to my custom dataset.

The custom dataset has three files: train, dev, and test, with 10 distinct labels in the IOB2 annotation scheme. I created a tag_dictionary the same way this file was created, just adding the extra tags I need. The format and structure are identical to the standard CoNLL-2003 data used in this repository. I have trained sentence-level models on this dataset without any problems; the error only occurs with document-level models. I should also mention that the files I'm using are up to date with the latest commit of this repository.
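For reference, a tag dictionary like the one printed in the log below can be rebuilt with plain Python. This is a hypothetical sketch, not the repository's actual script: the helper name is mine, and the only assumption is that the pickle stores a bytes-to-index mapping in the order shown in the log (flair-style, with `<unk>`/`<START>`/`<STOP>`/`O` first, then BIOES variants of each entity type).

```python
import pickle

def build_tag_dictionary(entity_types):
    """Build a bytes-keyed tag-to-index mapping in BIOES order."""
    items = ["<unk>", "<START>", "<STOP>", "O"]
    for etype in entity_types:
        for prefix in ("B", "I", "S", "E"):   # BIOES scheme used internally
            items.append(f"{prefix}-{etype}")
    # flair stores tags as byte strings, matching the dict printed in the log
    return {tag.encode("utf-8"): idx for idx, tag in enumerate(items)}

tag_dict = build_tag_dictionary(
    ["PESSOA", "ORGANIZACAO", "TEMPO", "LOCAL", "OBRA",
     "ACONTECIMENTO", "ABSTRACCAO", "COISA", "VALOR", "VARIADO"]
)
# 4 special items + 10 types x 4 prefixes = 44 entries, as in the log
assert tag_dict[b"B-PESSOA"] == 4 and len(tag_dict) == 44

with open("ner_tags_harem.pkl", "wb") as f:
    pickle.dump(tag_dict, f)
```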

This is the error:

(ace) azeredo@ix-ws28:~/test_ace/test_recent_ace/ACE$ CUDA_VISIBLE_DEVICES=0 python train.py --config /home/azeredo/test_ace/test_recent_ace/ACE/config/xlnet-doc-en-ner-finetune.yaml
/home/azeredo/test_ace/test_recent_ace/ACE/flair/utils/params.py:104: YAMLLoadWarning: calling yaml.load() without Loader=... is deprecated, as the default Loader is unsafe. Please read https://msg.pyyaml.org/load for full details.
  dict_merge.dict_merge(params_dict, yaml.load(f))
2021-10-23 14:58:07,820 Reading data from /home/azeredo/test_ace/data/harem_conll
2021-10-23 14:58:07,820 Train: /home/azeredo/test_ace/data/harem_conll/harem_default.train
2021-10-23 14:58:07,820 Dev: /home/azeredo/test_ace/data/harem_conll/harem_default.dev
2021-10-23 14:58:07,820 Test: /home/azeredo/test_ace/data/harem_conll/harem_default.test
2021-10-23 14:58:10,804 {b'<unk>': 0, b'<START>': 1, b'<STOP>': 2, b'O': 3, b'B-PESSOA': 4, b'I-PESSOA': 5, b'S-PESSOA': 6, b'E-PESSOA': 7, b'B-ORGANIZACAO': 8, b'I-ORGANIZACAO': 9, b'S-ORGANIZACAO': 10, b'E-ORGANIZACAO': 11, b'B-TEMPO': 12, b'I-TEMPO': 13, b'S-TEMPO': 14, b'E-TEMPO': 15, b'B-LOCAL': 16, b'I-LOCAL': 17, b'S-LOCAL': 18, b'E-LOCAL': 19, b'B-OBRA': 20, b'I-OBRA': 21, b'S-OBRA': 22, b'E-OBRA': 23, b'B-ACONTECIMENTO': 24, b'I-ACONTECIMENTO': 25, b'S-ACONTECIMENTO': 26, b'E-ACONTECIMENTO': 27, b'B-ABSTRACCAO': 28, b'I-ABSTRACCAO': 29, b'S-ABSTRACCAO': 30, b'E-ABSTRACCAO': 31, b'B-COISA': 32, b'I-COISA': 33, b'S-COISA': 34, b'E-COISA': 35, b'B-VALOR': 36, b'I-VALOR': 37, b'S-VALOR': 38, b'E-VALOR': 39, b'B-VARIADO': 40, b'I-VARIADO': 41, b'S-VARIADO': 42, b'E-VARIADO': 43}
2021-10-23 14:58:10,804 Corpus: 5155 train + 137 dev + 3658 test sentences
[2021-10-23 14:58:11,302 INFO] loading configuration file https://s3.amazonaws.com/models.huggingface.co/bert/xlnet-base-cased-config.json from cache at /home/azeredo/.cache/torch/transformers/c9cc6e53904f7f3679a31ec4af244f4419e25ebc8e71ebf8c558a31cbcf07fc8.69e5e35e0b798cab5e473f253752f8bf4d280ee37682281a23eed80f6e2d09c6
[2021-10-23 14:58:11,303 INFO] Model config XLNetConfig {
  "architectures": [
    "XLNetLMHeadModel"
  ],
  "attn_type": "bi",
  "bi_data": false,
  "bos_token_id": 1,
  "clamp_len": -1,
  "d_head": 64,
  "d_inner": 3072,
  "d_model": 768,
  "dropout": 0.1,
  "end_n_top": 5,
  "eos_token_id": 2,
  "ff_activation": "gelu",
  "initializer_range": 0.02,
  "layer_norm_eps": 1e-12,
  "mem_len": null,
  "model_type": "xlnet",
  "n_head": 12,
  "n_layer": 12,
  "pad_token_id": 5,
  "reuse_len": null,
  "same_length": false,
  "start_n_top": 5,
  "summary_activation": "tanh",
  "summary_last_dropout": 0.1,
  "summary_type": "last",
  "summary_use_proj": true,
  "task_specific_params": {
    "text-generation": {
      "do_sample": true,
      "max_length": 250
    }
  },
  "untie_r": true,
  "vocab_size": 32000
}

[2021-10-23 14:58:11,757 INFO] loading file https://s3.amazonaws.com/models.huggingface.co/bert/xlnet-base-cased-spiece.model from cache at /home/azeredo/.cache/torch/transformers/dad589d582573df0293448af5109cb6981ca77239ed314e15ca63b7b8a318ddd.8b10bd978b5d01c21303cc761fc9ecd464419b3bf921864a355ba807cfbfafa8
[2021-10-23 14:58:12,262 INFO] loading configuration file https://s3.amazonaws.com/models.huggingface.co/bert/xlnet-base-cased-config.json from cache at /home/azeredo/.cache/torch/transformers/c9cc6e53904f7f3679a31ec4af244f4419e25ebc8e71ebf8c558a31cbcf07fc8.69e5e35e0b798cab5e473f253752f8bf4d280ee37682281a23eed80f6e2d09c6
[2021-10-23 14:58:12,263 INFO] Model config XLNetConfig {
  "architectures": [
    "XLNetLMHeadModel"
  ],
  "attn_type": "bi",
  "bi_data": false,
  "bos_token_id": 1,
  "clamp_len": -1,
  "d_head": 64,
  "d_inner": 3072,
  "d_model": 768,
  "dropout": 0.1,
  "end_n_top": 5,
  "eos_token_id": 2,
  "ff_activation": "gelu",
  "initializer_range": 0.02,
  "layer_norm_eps": 1e-12,
  "mem_len": null,
  "model_type": "xlnet",
  "n_head": 12,
  "n_layer": 12,
  "output_hidden_states": true,
  "pad_token_id": 5,
  "reuse_len": null,
  "same_length": false,
  "start_n_top": 5,
  "summary_activation": "tanh",
  "summary_last_dropout": 0.1,
  "summary_type": "last",
  "summary_use_proj": true,
  "task_specific_params": {
    "text-generation": {
      "do_sample": true,
      "max_length": 250
    }
  },
  "untie_r": true,
  "vocab_size": 32000
}

[2021-10-23 14:58:12,536 INFO] loading weights file https://cdn.huggingface.co/xlnet-base-cased-pytorch_model.bin from cache at /home/azeredo/.cache/torch/transformers/33d6135fea0154c088449506a4c5f9553cb59b6fd040138417a7033af64bb8f9.7eac4fe898a021204e63c88c00ea68c60443c57f94b4bc3c02adbde6465745ac
[2021-10-23 14:58:14,071 INFO] All model checkpoint weights were used when initializing XLNetModel.

[2021-10-23 14:58:14,071 INFO] All the weights of XLNetModel were initialized from the model checkpoint at xlnet-base-cased.
If your task is similar to the task the model of the checkpoint was trained on, you can already use XLNetModel for predictions without further training.
2021-10-23 14:58:15,714 Model Size: 116752172
Corpus: 5034 train + 129 dev + 3530 test sentences
2021-10-23 14:58:15,738 ----------------------------------------------------------------------------------------------------
2021-10-23 14:58:15,739 Model: "FastSequenceTagger(
  (embeddings): StackedEmbeddings(
    (list_embedding_0): TransformerWordEmbeddings(
      (model): XLNetModel(
        (word_embedding): Embedding(32000, 768)
        (layer): ModuleList(
          (0): XLNetLayer(
            (rel_attn): XLNetRelativeAttention(
              (layer_norm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
              (dropout): Dropout(p=0.1, inplace=False)
            )
            (ff): XLNetFeedForward(
              (layer_norm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
              (layer_1): Linear(in_features=768, out_features=3072, bias=True)
              (layer_2): Linear(in_features=3072, out_features=768, bias=True)
              (dropout): Dropout(p=0.1, inplace=False)
            )
            (dropout): Dropout(p=0.1, inplace=False)
          )
          (1): XLNetLayer(
            (rel_attn): XLNetRelativeAttention(
              (layer_norm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
              (dropout): Dropout(p=0.1, inplace=False)
            )
            (ff): XLNetFeedForward(
              (layer_norm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
              (layer_1): Linear(in_features=768, out_features=3072, bias=True)
              (layer_2): Linear(in_features=3072, out_features=768, bias=True)
              (dropout): Dropout(p=0.1, inplace=False)
            )
            (dropout): Dropout(p=0.1, inplace=False)
          )
          (2): XLNetLayer(
            (rel_attn): XLNetRelativeAttention(
              (layer_norm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
              (dropout): Dropout(p=0.1, inplace=False)
            )
            (ff): XLNetFeedForward(
              (layer_norm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
              (layer_1): Linear(in_features=768, out_features=3072, bias=True)
              (layer_2): Linear(in_features=3072, out_features=768, bias=True)
              (dropout): Dropout(p=0.1, inplace=False)
            )
            (dropout): Dropout(p=0.1, inplace=False)
          )
          (3): XLNetLayer(
            (rel_attn): XLNetRelativeAttention(
              (layer_norm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
              (dropout): Dropout(p=0.1, inplace=False)
            )
            (ff): XLNetFeedForward(
              (layer_norm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
              (layer_1): Linear(in_features=768, out_features=3072, bias=True)
              (layer_2): Linear(in_features=3072, out_features=768, bias=True)
              (dropout): Dropout(p=0.1, inplace=False)
            )
            (dropout): Dropout(p=0.1, inplace=False)
          )
          (4): XLNetLayer(
            (rel_attn): XLNetRelativeAttention(
              (layer_norm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
              (dropout): Dropout(p=0.1, inplace=False)
            )
            (ff): XLNetFeedForward(
              (layer_norm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
              (layer_1): Linear(in_features=768, out_features=3072, bias=True)
              (layer_2): Linear(in_features=3072, out_features=768, bias=True)
              (dropout): Dropout(p=0.1, inplace=False)
            )
            (dropout): Dropout(p=0.1, inplace=False)
          )
          (5): XLNetLayer(
            (rel_attn): XLNetRelativeAttention(
              (layer_norm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
              (dropout): Dropout(p=0.1, inplace=False)
            )
            (ff): XLNetFeedForward(
              (layer_norm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
              (layer_1): Linear(in_features=768, out_features=3072, bias=True)
              (layer_2): Linear(in_features=3072, out_features=768, bias=True)
              (dropout): Dropout(p=0.1, inplace=False)
            )
            (dropout): Dropout(p=0.1, inplace=False)
          )
          (6): XLNetLayer(
            (rel_attn): XLNetRelativeAttention(
              (layer_norm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
              (dropout): Dropout(p=0.1, inplace=False)
            )
            (ff): XLNetFeedForward(
              (layer_norm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
              (layer_1): Linear(in_features=768, out_features=3072, bias=True)
              (layer_2): Linear(in_features=3072, out_features=768, bias=True)
              (dropout): Dropout(p=0.1, inplace=False)
            )
            (dropout): Dropout(p=0.1, inplace=False)
          )
          (7): XLNetLayer(
            (rel_attn): XLNetRelativeAttention(
              (layer_norm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
              (dropout): Dropout(p=0.1, inplace=False)
            )
            (ff): XLNetFeedForward(
              (layer_norm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
              (layer_1): Linear(in_features=768, out_features=3072, bias=True)
              (layer_2): Linear(in_features=3072, out_features=768, bias=True)
              (dropout): Dropout(p=0.1, inplace=False)
            )
            (dropout): Dropout(p=0.1, inplace=False)
          )
          (8): XLNetLayer(
            (rel_attn): XLNetRelativeAttention(
              (layer_norm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
              (dropout): Dropout(p=0.1, inplace=False)
            )
            (ff): XLNetFeedForward(
              (layer_norm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
              (layer_1): Linear(in_features=768, out_features=3072, bias=True)
              (layer_2): Linear(in_features=3072, out_features=768, bias=True)
              (dropout): Dropout(p=0.1, inplace=False)
            )
            (dropout): Dropout(p=0.1, inplace=False)
          )
          (9): XLNetLayer(
            (rel_attn): XLNetRelativeAttention(
              (layer_norm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
              (dropout): Dropout(p=0.1, inplace=False)
            )
            (ff): XLNetFeedForward(
              (layer_norm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
              (layer_1): Linear(in_features=768, out_features=3072, bias=True)
              (layer_2): Linear(in_features=3072, out_features=768, bias=True)
              (dropout): Dropout(p=0.1, inplace=False)
            )
            (dropout): Dropout(p=0.1, inplace=False)
          )
          (10): XLNetLayer(
            (rel_attn): XLNetRelativeAttention(
              (layer_norm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
              (dropout): Dropout(p=0.1, inplace=False)
            )
            (ff): XLNetFeedForward(
              (layer_norm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
              (layer_1): Linear(in_features=768, out_features=3072, bias=True)
              (layer_2): Linear(in_features=3072, out_features=768, bias=True)
              (dropout): Dropout(p=0.1, inplace=False)
            )
            (dropout): Dropout(p=0.1, inplace=False)
          )
          (11): XLNetLayer(
            (rel_attn): XLNetRelativeAttention(
              (layer_norm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
              (dropout): Dropout(p=0.1, inplace=False)
            )
            (ff): XLNetFeedForward(
              (layer_norm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
              (layer_1): Linear(in_features=768, out_features=3072, bias=True)
              (layer_2): Linear(in_features=3072, out_features=768, bias=True)
              (dropout): Dropout(p=0.1, inplace=False)
            )
            (dropout): Dropout(p=0.1, inplace=False)
          )
        )
        (dropout): Dropout(p=0.1, inplace=False)
      )
    )
  )
  (word_dropout): WordDropout(p=0.1)
  (linear): Linear(in_features=768, out_features=44, bias=True)
)"
2021-10-23 14:58:15,739 ----------------------------------------------------------------------------------------------------
2021-10-23 14:58:15,739 Corpus: "Corpus: 5034 train + 129 dev + 3530 test sentences"
2021-10-23 14:58:15,739 ----------------------------------------------------------------------------------------------------
2021-10-23 14:58:15,739 Parameters:
2021-10-23 14:58:15,739  - Optimizer: "AdamW"
2021-10-23 14:58:15,739  - learning_rate: "5e-06"
2021-10-23 14:58:15,739  - mini_batch_size: "1"
2021-10-23 14:58:15,740  - patience: "10"
2021-10-23 14:58:15,740  - anneal_factor: "0.5"
2021-10-23 14:58:15,740  - max_epochs: "10"
2021-10-23 14:58:15,740  - shuffle: "True"
2021-10-23 14:58:15,740  - train_with_dev: "False"
2021-10-23 14:58:15,740  - word min_freq: "-1"
2021-10-23 14:58:15,740 ----------------------------------------------------------------------------------------------------
2021-10-23 14:58:15,740 Model training base path: "resources/taggers/test-xlnet-base-cased"
2021-10-23 14:58:15,740 ----------------------------------------------------------------------------------------------------
2021-10-23 14:58:15,740 Device: cuda:0
2021-10-23 14:58:15,740 ----------------------------------------------------------------------------------------------------
2021-10-23 14:58:15,740 Embeddings storage mode: none
2021-10-23 14:58:16,362 ----------------------------------------------------------------------------------------------------
2021-10-23 14:58:16,365 Current loss interpolation: 1
['xlnet-base-cased_v2doc']
2021-10-23 14:58:16,769 epoch 1 - iter 0/5034 - loss 282.73196411 - samples/sec: 2.48 - decode_sents/sec: 1741.10
2021-10-23 15:00:02,011 epoch 1 - iter 503/5034 - loss 19.12329946 - samples/sec: 4.99 - decode_sents/sec: 1277852.76
2021-10-23 15:01:45,008 epoch 1 - iter 1006/5034 - loss 16.76969104 - samples/sec: 5.10 - decode_sents/sec: 1260295.65
2021-10-23 15:03:28,802 epoch 1 - iter 1509/5034 - loss 16.35974099 - samples/sec: 5.06 - decode_sents/sec: 1250583.82
2021-10-23 15:05:11,367 epoch 1 - iter 2012/5034 - loss 15.39498760 - samples/sec: 5.12 - decode_sents/sec: 1271690.72
2021-10-23 15:06:54,296 epoch 1 - iter 2515/5034 - loss 14.80468873 - samples/sec: 5.10 - decode_sents/sec: 1222326.14
2021-10-23 15:08:36,583 epoch 1 - iter 3018/5034 - loss 14.29490426 - samples/sec: 5.14 - decode_sents/sec: 1297499.95
2021-10-23 15:10:19,406 epoch 1 - iter 3521/5034 - loss 13.89466663 - samples/sec: 5.11 - decode_sents/sec: 1280956.23
2021-10-23 15:12:02,116 epoch 1 - iter 4024/5034 - loss 13.52090477 - samples/sec: 5.12 - decode_sents/sec: 1234485.03
2021-10-23 15:13:44,229 epoch 1 - iter 4527/5034 - loss 13.33001108 - samples/sec: 5.15 - decode_sents/sec: 1291938.10
2021-10-23 15:15:26,911 epoch 1 - iter 5030/5034 - loss 13.11416934 - samples/sec: 5.12 - decode_sents/sec: 1267869.54
2021-10-23 15:15:27,572 ----------------------------------------------------------------------------------------------------
2021-10-23 15:15:27,572 EPOCH 1 done: loss 3.2802 - lr 5e-06
2021-10-23 15:15:27,572 ----------------------------------------------------------------------------------------------------
[Sentence: "Esta e outras condições específicas da manifestação da informação como participante deste processo são estudadas neste artigo ." - 18 Tokens]
index 0 is out of bounds for dimension 0 with size 0
> /home/azeredo/test_ace/test_recent_ace/ACE/flair/embeddings.py(3733)add_document_embeddings_v2()
-> if self.pooling_operation == "last":
(Pdb)

The error "index 0 is out of bounds for dimension 0 with size 0" seems to be raised at this line.
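The failure mode can be reproduced in miniature: when a token ends up with zero subtoken embeddings (for instance because everything past the transformer's length limit was truncated away), selecting the "last" or first subtoken indexes into an empty sequence. This is an illustrative plain-Python sketch with a made-up helper name, not the repository's actual pooling code:

```python
def pool_subtokens(subtoken_vectors, pooling_operation="last"):
    """Collapse a token's subtoken vectors into one vector."""
    if pooling_operation == "last":
        return subtoken_vectors[-1]   # fails when the sequence is empty
    return subtoken_vectors[0]

# Normal case: two subtokens, keep the last one.
ok = pool_subtokens([[0.1, 0.2], [0.3, 0.4]])
assert ok == [0.3, 0.4]

# A token whose subtokens were all truncated away: empty input.
try:
    pool_subtokens([])
except IndexError as exc:
    print("IndexError:", exc)   # prints "IndexError: list index out of range"
```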

harem_default.train
harem_default.dev
harem_default.test
ner_tags_harem.pkl
xlnet-doc-ner-test.yaml
You can find all files here.

I'm very thankful for your patience and help.

[EDIT]: I forgot to mention: this error does not occur with the CONLL2003 dataset, which confuses me because both datasets appear to have an identical structure. In my custom dataset, the third column (chunking) contains random values; I don't think that's a problem, since I don't recall reading anything about this column being used for the output predictions.

@wangxinyu0922
Member

This bug only occurs when the tokenized sentence exceeds the maximum subtoken length of the transformer embeddings. For example, this sentence in your dev set exceeds the limit of 512 subtokens:
[screenshot of the over-long dev-set sentence]

I fixed this problem in the newest version of the code (embeddings.py), so you no longer need to worry about the subtoken length.
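Independently of the fix, over-long sentences can be caught before training with a small pre-flight check over a CoNLL-format file. The sketch below is illustrative (the function name and structure are mine, not from the repository); the tokenizer is injected so any subword tokenizer can be plugged in, e.g. `transformers.AutoTokenizer.from_pretrained("xlnet-base-cased").tokenize`:

```python
def find_overlong_sentences(conll_lines, tokenize, limit=512):
    """Return (preview, subtoken_count) for sentences exceeding `limit`."""
    overlong, sentence = [], []
    for line in list(conll_lines) + [""]:        # trailing "" flushes the last sentence
        line = line.strip()
        if line and not line.startswith("-DOCSTART-"):
            sentence.append(line.split()[0])     # CoNLL: surface token in column 1
        elif sentence:
            n = len(tokenize(" ".join(sentence)))
            if n > limit:
                overlong.append((" ".join(sentence[:5]) + " ...", n))
            sentence = []
    return overlong

# Demo with whitespace "tokenization": a 3-token sentence exceeds a limit of 2.
demo = ["Esta O", "e O", "outras O", ""]
print(find_overlong_sentences(demo, str.split, limit=2))
```

With a real subword tokenizer the counts are larger than the whitespace token counts, which is exactly why a sentence can blow past 512 subtokens while looking harmless at the word level.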

@jcazeredo
Author

Thank you for the fast answer. Unfortunately, I only had time to test it today, and it seems another error is showing up.

/content/tcc# bash run_train.sh /content/tcc/ace_config_files/xlnet-base-doc.yaml
2021-10-25 22:56:02,380 Reading data from /content/tcc/data/harem_conll
2021-10-25 22:56:02,380 Train: /content/tcc/data/harem_conll/harem_default.train
2021-10-25 22:56:02,380 Dev: /content/tcc/data/harem_conll/harem_default.dev
2021-10-25 22:56:02,380 Test: /content/tcc/data/harem_conll/harem_default.test
2021-10-25 22:56:06,701 {b'<unk>': 0, b'<START>': 1, b'<STOP>': 2, b'O': 3, b'B-PESSOA': 4, b'I-PESSOA': 5, b'S-PESSOA': 6, b'E-PESSOA': 7, b'B-ORGANIZACAO': 8, b'I-ORGANIZACAO': 9, b'S-ORGANIZACAO': 10, b'E-ORGANIZACAO': 11, b'B-TEMPO': 12, b'I-TEMPO': 13, b'S-TEMPO': 14, b'E-TEMPO': 15, b'B-LOCAL': 16, b'I-LOCAL': 17, b'S-LOCAL': 18, b'E-LOCAL': 19, b'B-OBRA': 20, b'I-OBRA': 21, b'S-OBRA': 22, b'E-OBRA': 23, b'B-ACONTECIMENTO': 24, b'I-ACONTECIMENTO': 25, b'S-ACONTECIMENTO': 26, b'E-ACONTECIMENTO': 27, b'B-ABSTRACCAO': 28, b'I-ABSTRACCAO': 29, b'S-ABSTRACCAO': 30, b'E-ABSTRACCAO': 31, b'B-COISA': 32, b'I-COISA': 33, b'S-COISA': 34, b'E-COISA': 35, b'B-VALOR': 36, b'I-VALOR': 37, b'S-VALOR': 38, b'E-VALOR': 39, b'B-VARIADO': 40, b'I-VARIADO': 41, b'S-VARIADO': 42, b'E-VARIADO': 43}
2021-10-25 22:56:06,701 Corpus: 5155 train + 138 dev + 3658 test sentences
[2021-10-25 22:56:07,469 DEBUG] Attempting to acquire lock 140682438270672 on /root/.cache/torch/transformers/c9cc6e53904f7f3679a31ec4af244f4419e25ebc8e71ebf8c558a31cbcf07fc8.69e5e35e0b798cab5e473f253752f8bf4d280ee37682281a23eed80f6e2d09c6.lock
[2021-10-25 22:56:07,470 DEBUG] Lock 140682438270672 acquired on /root/.cache/torch/transformers/c9cc6e53904f7f3679a31ec4af244f4419e25ebc8e71ebf8c558a31cbcf07fc8.69e5e35e0b798cab5e473f253752f8bf4d280ee37682281a23eed80f6e2d09c6.lock
[2021-10-25 22:56:07,470 INFO] https://s3.amazonaws.com/models.huggingface.co/bert/xlnet-base-cased-config.json not found in cache or force_download set to True, downloading to /root/.cache/torch/transformers/tmpuusfad4i
Downloading: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 760/760 [00:00<00:00, 499kB/s]
[2021-10-25 22:56:08,235 INFO] storing https://s3.amazonaws.com/models.huggingface.co/bert/xlnet-base-cased-config.json in cache at /root/.cache/torch/transformers/c9cc6e53904f7f3679a31ec4af244f4419e25ebc8e71ebf8c558a31cbcf07fc8.69e5e35e0b798cab5e473f253752f8bf4d280ee37682281a23eed80f6e2d09c6
[2021-10-25 22:56:08,235 INFO] creating metadata file for /root/.cache/torch/transformers/c9cc6e53904f7f3679a31ec4af244f4419e25ebc8e71ebf8c558a31cbcf07fc8.69e5e35e0b798cab5e473f253752f8bf4d280ee37682281a23eed80f6e2d09c6
[2021-10-25 22:56:08,236 DEBUG] Attempting to release lock 140682438270672 on /root/.cache/torch/transformers/c9cc6e53904f7f3679a31ec4af244f4419e25ebc8e71ebf8c558a31cbcf07fc8.69e5e35e0b798cab5e473f253752f8bf4d280ee37682281a23eed80f6e2d09c6.lock
[2021-10-25 22:56:08,236 DEBUG] Lock 140682438270672 released on /root/.cache/torch/transformers/c9cc6e53904f7f3679a31ec4af244f4419e25ebc8e71ebf8c558a31cbcf07fc8.69e5e35e0b798cab5e473f253752f8bf4d280ee37682281a23eed80f6e2d09c6.lock
[2021-10-25 22:56:08,236 INFO] loading configuration file https://s3.amazonaws.com/models.huggingface.co/bert/xlnet-base-cased-config.json from cache at /root/.cache/torch/transformers/c9cc6e53904f7f3679a31ec4af244f4419e25ebc8e71ebf8c558a31cbcf07fc8.69e5e35e0b798cab5e473f253752f8bf4d280ee37682281a23eed80f6e2d09c6
[2021-10-25 22:56:08,237 INFO] Model config XLNetConfig {
  "architectures": [
    "XLNetLMHeadModel"
  ],
  "attn_type": "bi",
  "bi_data": false,
  "bos_token_id": 1,
  "clamp_len": -1,
  "d_head": 64,
  "d_inner": 3072,
  "d_model": 768,
  "dropout": 0.1,
  "end_n_top": 5,
  "eos_token_id": 2,
  "ff_activation": "gelu",
  "initializer_range": 0.02,
  "layer_norm_eps": 1e-12,
  "mem_len": null,
  "model_type": "xlnet",
  "n_head": 12,
  "n_layer": 12,
  "pad_token_id": 5,
  "reuse_len": null,
  "same_length": false,
  "start_n_top": 5,
  "summary_activation": "tanh",
  "summary_last_dropout": 0.1,
  "summary_type": "last",
  "summary_use_proj": true,
  "task_specific_params": {
    "text-generation": {
      "do_sample": true,
      "max_length": 250
    }
  },
  "untie_r": true,
  "vocab_size": 32000
}

[2021-10-25 22:56:08,993 DEBUG] Attempting to acquire lock 140682432921936 on /root/.cache/torch/transformers/dad589d582573df0293448af5109cb6981ca77239ed314e15ca63b7b8a318ddd.8b10bd978b5d01c21303cc761fc9ecd464419b3bf921864a355ba807cfbfafa8.lock
[2021-10-25 22:56:08,994 DEBUG] Lock 140682432921936 acquired on /root/.cache/torch/transformers/dad589d582573df0293448af5109cb6981ca77239ed314e15ca63b7b8a318ddd.8b10bd978b5d01c21303cc761fc9ecd464419b3bf921864a355ba807cfbfafa8.lock
[2021-10-25 22:56:08,994 INFO] https://s3.amazonaws.com/models.huggingface.co/bert/xlnet-base-cased-spiece.model not found in cache or force_download set to True, downloading to /root/.cache/torch/transformers/tmpjsbq5tgb
Downloading: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 798k/798k [00:01<00:00, 730kB/s]
[2021-10-25 22:56:10,876 INFO] storing https://s3.amazonaws.com/models.huggingface.co/bert/xlnet-base-cased-spiece.model in cache at /root/.cache/torch/transformers/dad589d582573df0293448af5109cb6981ca77239ed314e15ca63b7b8a318ddd.8b10bd978b5d01c21303cc761fc9ecd464419b3bf921864a355ba807cfbfafa8
[2021-10-25 22:56:10,876 INFO] creating metadata file for /root/.cache/torch/transformers/dad589d582573df0293448af5109cb6981ca77239ed314e15ca63b7b8a318ddd.8b10bd978b5d01c21303cc761fc9ecd464419b3bf921864a355ba807cfbfafa8
[2021-10-25 22:56:10,876 DEBUG] Attempting to release lock 140682432921936 on /root/.cache/torch/transformers/dad589d582573df0293448af5109cb6981ca77239ed314e15ca63b7b8a318ddd.8b10bd978b5d01c21303cc761fc9ecd464419b3bf921864a355ba807cfbfafa8.lock
[2021-10-25 22:56:10,877 DEBUG] Lock 140682432921936 released on /root/.cache/torch/transformers/dad589d582573df0293448af5109cb6981ca77239ed314e15ca63b7b8a318ddd.8b10bd978b5d01c21303cc761fc9ecd464419b3bf921864a355ba807cfbfafa8.lock
[2021-10-25 22:56:10,877 INFO] loading file https://s3.amazonaws.com/models.huggingface.co/bert/xlnet-base-cased-spiece.model from cache at /root/.cache/torch/transformers/dad589d582573df0293448af5109cb6981ca77239ed314e15ca63b7b8a318ddd.8b10bd978b5d01c21303cc761fc9ecd464419b3bf921864a355ba807cfbfafa8
[2021-10-25 22:56:11,676 INFO] loading configuration file https://s3.amazonaws.com/models.huggingface.co/bert/xlnet-base-cased-config.json from cache at /root/.cache/torch/transformers/c9cc6e53904f7f3679a31ec4af244f4419e25ebc8e71ebf8c558a31cbcf07fc8.69e5e35e0b798cab5e473f253752f8bf4d280ee37682281a23eed80f6e2d09c6
[2021-10-25 22:56:11,677 INFO] Model config XLNetConfig {
  "architectures": [
    "XLNetLMHeadModel"
  ],
  "attn_type": "bi",
  "bi_data": false,
  "bos_token_id": 1,
  "clamp_len": -1,
  "d_head": 64,
  "d_inner": 3072,
  "d_model": 768,
  "dropout": 0.1,
  "end_n_top": 5,
  "eos_token_id": 2,
  "ff_activation": "gelu",
  "initializer_range": 0.02,
  "layer_norm_eps": 1e-12,
  "mem_len": null,
  "model_type": "xlnet",
  "n_head": 12,
  "n_layer": 12,
  "output_hidden_states": true,
  "pad_token_id": 5,
  "reuse_len": null,
  "same_length": false,
  "start_n_top": 5,
  "summary_activation": "tanh",
  "summary_last_dropout": 0.1,
  "summary_type": "last",
  "summary_use_proj": true,
  "task_specific_params": {
    "text-generation": {
      "do_sample": true,
      "max_length": 250
    }
  },
  "untie_r": true,
  "vocab_size": 32000
}

[2021-10-25 22:56:11,780 DEBUG] Attempting to acquire lock 140682086723536 on /root/.cache/torch/transformers/33d6135fea0154c088449506a4c5f9553cb59b6fd040138417a7033af64bb8f9.7eac4fe898a021204e63c88c00ea68c60443c57f94b4bc3c02adbde6465745ac.lock
[2021-10-25 22:56:11,780 DEBUG] Lock 140682086723536 acquired on /root/.cache/torch/transformers/33d6135fea0154c088449506a4c5f9553cb59b6fd040138417a7033af64bb8f9.7eac4fe898a021204e63c88c00ea68c60443c57f94b4bc3c02adbde6465745ac.lock
[2021-10-25 22:56:11,780 INFO] https://cdn.huggingface.co/xlnet-base-cased-pytorch_model.bin not found in cache or force_download set to True, downloading to /root/.cache/torch/transformers/tmpntn_k5u7
Downloading: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 467M/467M [00:11<00:00, 39.8MB/s]
[2021-10-25 22:56:23,569 INFO] storing https://cdn.huggingface.co/xlnet-base-cased-pytorch_model.bin in cache at /root/.cache/torch/transformers/33d6135fea0154c088449506a4c5f9553cb59b6fd040138417a7033af64bb8f9.7eac4fe898a021204e63c88c00ea68c60443c57f94b4bc3c02adbde6465745ac
[2021-10-25 22:56:23,570 INFO] creating metadata file for /root/.cache/torch/transformers/33d6135fea0154c088449506a4c5f9553cb59b6fd040138417a7033af64bb8f9.7eac4fe898a021204e63c88c00ea68c60443c57f94b4bc3c02adbde6465745ac
[2021-10-25 22:56:23,570 DEBUG] Attempting to release lock 140682086723536 on /root/.cache/torch/transformers/33d6135fea0154c088449506a4c5f9553cb59b6fd040138417a7033af64bb8f9.7eac4fe898a021204e63c88c00ea68c60443c57f94b4bc3c02adbde6465745ac.lock
[2021-10-25 22:56:23,570 DEBUG] Lock 140682086723536 released on /root/.cache/torch/transformers/33d6135fea0154c088449506a4c5f9553cb59b6fd040138417a7033af64bb8f9.7eac4fe898a021204e63c88c00ea68c60443c57f94b4bc3c02adbde6465745ac.lock
[2021-10-25 22:56:23,570 INFO] loading weights file https://cdn.huggingface.co/xlnet-base-cased-pytorch_model.bin from cache at /root/.cache/torch/transformers/33d6135fea0154c088449506a4c5f9553cb59b6fd040138417a7033af64bb8f9.7eac4fe898a021204e63c88c00ea68c60443c57f94b4bc3c02adbde6465745ac
[2021-10-25 22:56:25,831 INFO] All model checkpoint weights were used when initializing XLNetModel.

[2021-10-25 22:56:25,832 INFO] All the weights of XLNetModel were initialized from the model checkpoint at xlnet-base-cased.
If your task is similar to the task the model of the checkpoint was trained on, you can already use XLNetModel for predictions without further training.
2021-10-25 22:56:38,416 Model Size: 116752172
Corpus: 5034 train + 130 dev + 3530 test sentences
2021-10-25 22:56:38,448 ----------------------------------------------------------------------------------------------------
2021-10-25 22:56:38,449 Model: "FastSequenceTagger(
  (embeddings): StackedEmbeddings(
    (list_embedding_0): TransformerWordEmbeddings(
      (model): XLNetModel(
        (word_embedding): Embedding(32000, 768)
        (layer): ModuleList(
          (0): XLNetLayer(
            (rel_attn): XLNetRelativeAttention(
              (layer_norm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
              (dropout): Dropout(p=0.1, inplace=False)
            )
            (ff): XLNetFeedForward(
              (layer_norm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
              (layer_1): Linear(in_features=768, out_features=3072, bias=True)
              (layer_2): Linear(in_features=3072, out_features=768, bias=True)
              (dropout): Dropout(p=0.1, inplace=False)
            )
            (dropout): Dropout(p=0.1, inplace=False)
          )
          ... (layers (1) through (11) are identical to layer (0) above; repeated blocks elided) ...
        )
        (dropout): Dropout(p=0.1, inplace=False)
      )
    )
  )
  (word_dropout): WordDropout(p=0.1)
  (linear): Linear(in_features=768, out_features=44, bias=True)
)"
2021-10-25 22:56:38,450 ----------------------------------------------------------------------------------------------------
2021-10-25 22:56:38,450 Corpus: "Corpus: 5034 train + 130 dev + 3530 test sentences"
2021-10-25 22:56:38,450 ----------------------------------------------------------------------------------------------------
2021-10-25 22:56:38,450 Parameters:
2021-10-25 22:56:38,450  - Optimizer: "AdamW"
2021-10-25 22:56:38,450  - learning_rate: "5e-06"
2021-10-25 22:56:38,450  - mini_batch_size: "1"
2021-10-25 22:56:38,451  - patience: "10"
2021-10-25 22:56:38,451  - anneal_factor: "0.5"
2021-10-25 22:56:38,451  - max_epochs: "10"
2021-10-25 22:56:38,451  - shuffle: "True"
2021-10-25 22:56:38,451  - train_with_dev: "False"
2021-10-25 22:56:38,451  - word min_freq: "-1"
2021-10-25 22:56:38,451 ----------------------------------------------------------------------------------------------------
2021-10-25 22:56:38,452 Model training base path: "/content/tcc/ACE/resources/taggers/xlnet-base-finetuned-doc"
2021-10-25 22:56:38,452 ----------------------------------------------------------------------------------------------------
2021-10-25 22:56:38,452 Device: cuda:0
2021-10-25 22:56:38,452 ----------------------------------------------------------------------------------------------------
2021-10-25 22:56:38,452 Embeddings storage mode: none
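For reference, the parameter dump above corresponds to a trainer section of the copied `xlnet-doc-en-ner-finetune.yaml` roughly like the following (key names are inferred from the log labels and may not match the repository's exact schema):

```yaml
train:
  optimizer: AdamW
  learning_rate: 5.0e-6
  mini_batch_size: 1
  patience: 10
  anneal_factor: 0.5
  max_epochs: 10
  shuffle: true
  train_with_dev: false
```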
2021-10-25 22:56:39,248 ----------------------------------------------------------------------------------------------------
2021-10-25 22:56:39,253 Current loss interpolation: 1
['xlnet-base-cased_v2doc']
2021-10-25 22:56:39,526 epoch 1 - iter 0/5034 - loss 173.06898499 - samples/sec: 3.67 - decode_sents/sec: 263.89
2021-10-25 22:57:42,339 epoch 1 - iter 503/5034 - loss 21.69958575 - samples/sec: 8.53 - decode_sents/sec: 1037234.47
2021-10-25 22:58:39,224 epoch 1 - iter 1006/5034 - loss 17.54147579 - samples/sec: 9.47 - decode_sents/sec: 1095966.19
2021-10-25 22:59:35,897 epoch 1 - iter 1509/5034 - loss 16.09543247 - samples/sec: 9.52 - decode_sents/sec: 804321.35
2021-10-25 23:00:32,237 epoch 1 - iter 2012/5034 - loss 15.40342070 - samples/sec: 9.58 - decode_sents/sec: 811124.53
2021-10-25 23:01:28,478 epoch 1 - iter 2515/5034 - loss 14.86113683 - samples/sec: 9.60 - decode_sents/sec: 876499.76
2021-10-25 23:02:24,643 epoch 1 - iter 3018/5034 - loss 14.23401209 - samples/sec: 9.60 - decode_sents/sec: 1081360.80
2021-10-25 23:03:21,115 epoch 1 - iter 3521/5034 - loss 14.03165655 - samples/sec: 9.56 - decode_sents/sec: 981728.67
2021-10-25 23:04:17,405 epoch 1 - iter 4024/5034 - loss 13.73906589 - samples/sec: 9.59 - decode_sents/sec: 936411.41
2021-10-25 23:05:13,911 epoch 1 - iter 4527/5034 - loss 13.37011227 - samples/sec: 9.54 - decode_sents/sec: 861819.82
2021-10-25 23:06:10,821 epoch 1 - iter 5030/5034 - loss 13.24819737 - samples/sec: 9.48 - decode_sents/sec: 939329.88
2021-10-25 23:06:11,204 ----------------------------------------------------------------------------------------------------
2021-10-25 23:06:11,205 EPOCH 1 done: loss 3.3130 - lr 5e-06
2021-10-25 23:06:11,205 ----------------------------------------------------------------------------------------------------
2021-10-25 23:06:16,174 Macro Average: 34.38    Macro avg loss: 16.14
ColumnCorpus-1  34.38
2021-10-25 23:06:16,177 ----------------------------------------------------------------------------------------------------
2021-10-25 23:06:16,177 BAD EPOCHS (no improvement): 11
2021-10-25 23:06:16,177 GLOBAL BAD EPOCHS (no improvement): 0
2021-10-25 23:06:16,177 ==================Saving the current best model: 34.38==================
2021-10-25 23:06:17,697 ==================Saving the best language model: 34.38==================
[2021-10-25 23:06:17,701 INFO] Configuration saved in /content/tcc/ACE/resources/taggers/xlnet-base-finetuned-doc/xlnet-base-cased_v2doc/config.json
[2021-10-25 23:06:19,510 INFO] Model weights saved in /content/tcc/ACE/resources/taggers/xlnet-base-finetuned-doc/xlnet-base-cased_v2doc/pytorch_model.bin
2021-10-25 23:06:19,511 ----------------------------------------------------------------------------------------------------
2021-10-25 23:06:19,518 Current loss interpolation: 1
['xlnet-base-cased_v2doc']
2021-10-25 23:06:19,633 epoch 2 - iter 0/5034 - loss 3.79385304 - samples/sec: 8.69 - decode_sents/sec: 913.99
2021-10-25 23:07:16,711 epoch 2 - iter 503/5034 - loss 11.27529993 - samples/sec: 9.46 - decode_sents/sec: 747073.27
2021-10-25 23:08:13,573 epoch 2 - iter 1006/5034 - loss 11.36842699 - samples/sec: 9.49 - decode_sents/sec: 724745.76
2021-10-25 23:09:10,274 epoch 2 - iter 1509/5034 - loss 10.85687287 - samples/sec: 9.52 - decode_sents/sec: 910939.08
2021-10-25 23:10:07,269 epoch 2 - iter 2012/5034 - loss 10.61978362 - samples/sec: 9.47 - decode_sents/sec: 614545.56
2021-10-25 23:11:03,874 epoch 2 - iter 2515/5034 - loss 10.74376967 - samples/sec: 9.54 - decode_sents/sec: 982185.71
2021-10-25 23:12:00,804 epoch 2 - iter 3018/5034 - loss 10.59236796 - samples/sec: 9.47 - decode_sents/sec: 882365.08
2021-10-25 23:12:57,251 epoch 2 - iter 3521/5034 - loss 10.48515464 - samples/sec: 9.56 - decode_sents/sec: 922893.66
2021-10-25 23:13:54,211 epoch 2 - iter 4024/5034 - loss 10.41603058 - samples/sec: 9.47 - decode_sents/sec: 879056.21
2021-10-25 23:14:51,117 epoch 2 - iter 4527/5034 - loss 10.35569958 - samples/sec: 9.48 - decode_sents/sec: 596475.80
2021-10-25 23:15:48,052 epoch 2 - iter 5030/5034 - loss 10.38159148 - samples/sec: 9.47 - decode_sents/sec: 894334.43
2021-10-25 23:15:48,435 ----------------------------------------------------------------------------------------------------
2021-10-25 23:15:48,435 EPOCH 2 done: loss 2.5969 - lr 4.5e-06
2021-10-25 23:15:48,436 ----------------------------------------------------------------------------------------------------
Traceback (most recent call last):
  File "./ACE/train.py", line 360, in <module>
    getattr(trainer,'train')(**train_config)
  File "/content/tcc/ACE/flair/trainers/finetune_trainer.py", line 778, in train
    embeddings_storage_mode=embeddings_storage_mode,
  File "/content/tcc/ACE/flair/models/sequence_tagger_model.py", line 2270, in evaluate
    del batch.features
AttributeError: features
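The crash comes from `evaluate` running `del batch.features` unconditionally: in Python, deleting an instance attribute that was never assigned raises `AttributeError`. A minimal sketch of the failure and a defensive guard (the `Batch` class here is an illustrative stand-in, not the actual flair object):

```python
class Batch:
    """Illustrative stand-in for a flair sentence batch."""
    pass

batch = Batch()

# Deleting an attribute that was never assigned raises AttributeError,
# which matches the traceback above:
try:
    del batch.features
    failed = False
except AttributeError:
    failed = True

# A guarded delete avoids the crash entirely:
if hasattr(batch, "features"):
    del batch.features  # only runs when the attribute actually exists
```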

@wangxinyu0922 (Member) commented:

Oops, fixed that.
