Here is a short summary of our solution on the GLUE classification benchmark. This section mainly focuses on single models. The pre-trained models used below can be obtained from here.
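
The train and dev files referenced below are assumed to be in TencentPretrain's TSV classification format (first row is a header with column names) rather than the raw GLUE release. A minimal sketch of that layout, with illustrative rows rather than actual GLUE data; sentence-pair tasks additionally carry a text_b column:

label	text_a
1	the movie was great
0	the movie was terrible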

CoLA

The example of fine-tuning and doing inference on the CoLA dataset with English BERT-Base-uncased:

python3 finetune/run_classifier.py --pretrained_model_path models/bert_base_en_uncased_model.bin \
                                   --vocab_path models/google_uncased_en_vocab.txt \
                                   --config_path models/bert/base_config.json \
                                   --train_path datasets/CoLA/train.tsv \
                                   --dev_path datasets/CoLA/dev.tsv \
                                   --output_model_path models/cola_classifier_model.bin \
                                   --epochs_num 5 --batch_size 32

python3 inference/run_classifier_infer.py --load_model_path models/cola_classifier_model.bin \
                                          --vocab_path models/google_uncased_en_vocab.txt \
                                          --config_path models/bert/base_config.json \
                                          --test_path datasets/CoLA/test_nolabel.tsv \
                                          --prediction_path datasets/CoLA/prediction.tsv \
                                          --seq_length 128 --labels_num 2

The example of fine-tuning and doing inference on the CoLA dataset with English RoBERTa-Large follows. Since RoBERTa-Large uses different special tokens, it is necessary to change the special token mapping path from models/special_tokens_map.json to models/xlmroberta_special_tokens_map.json in tencentpretrain/utils/constants.py, as shown in the sketch below.

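One way to apply this change is a one-line substitution (a minimal sketch, assuming the old path appears as a literal string in constants.py; edit the file by hand if it does not):

sed -i 's#models/special_tokens_map.json#models/xlmroberta_special_tokens_map.json#' tencentpretrain/utils/constants.py
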
python3 finetune/run_classifier.py --pretrained_model_path models/roberta_large_en_model.bin \
                                   --vocab_path models/huggingface_gpt2_vocab.txt \
                                   --merges_path models/huggingface_gpt2_merges.txt \
                                   --tokenizer bpe \
                                   --config_path models/xlm-roberta/large_config.json \
                                   --train_path datasets/CoLA/train.tsv \
                                   --dev_path datasets/CoLA/dev.tsv \
                                   --output_model_path models/cola_classifier_model.bin \
                                   --epochs_num 5 --batch_size 32

python3 inference/run_classifier_infer.py --load_model_path models/cola_classifier_model.bin \
                                          --vocab_path models/huggingface_gpt2_vocab.txt \
                                          --merges_path models/huggingface_gpt2_merges.txt \
                                          --tokenizer bpe \
                                          --config_path models/xlm-roberta/large_config.json \
                                          --test_path datasets/CoLA/test_nolabel.tsv \
                                          --prediction_path datasets/CoLA/prediction.tsv \
                                          --seq_length 128 --labels_num 2

SST-2

The example of fine-tuning and doing inference on the SST-2 dataset with English BERT-Base-uncased:

python3 finetune/run_classifier.py --pretrained_model_path models/bert_base_en_uncased_model.bin \
                                   --vocab_path models/google_uncased_en_vocab.txt \
                                   --config_path models/bert/base_config.json \
                                   --train_path datasets/SST-2/train.tsv \
                                   --dev_path datasets/SST-2/dev.tsv \
                                   --output_model_path models/sst-2_classifier_model.bin \
                                   --epochs_num 3 --batch_size 32

python3 inference/run_classifier_infer.py --load_model_path models/sst-2_classifier_model.bin \
                                          --vocab_path models/google_uncased_en_vocab.txt \
                                          --config_path models/bert/base_config.json \
                                          --test_path datasets/SST-2/test_nolabel.tsv \
                                          --prediction_path datasets/SST-2/prediction.tsv \
                                          --seq_length 128 --labels_num 2

The example of fine-tuning and doing inference on the SST-2 dataset with English RoBERTa-Large:

python3 finetune/run_classifier.py --pretrained_model_path models/roberta_large_en_model.bin \
                                   --vocab_path models/huggingface_gpt2_vocab.txt \
                                   --merges_path models/huggingface_gpt2_merges.txt \
                                   --tokenizer bpe \
                                   --config_path models/xlm-roberta/large_config.json \
                                   --train_path datasets/SST-2/train.tsv \
                                   --dev_path datasets/SST-2/dev.tsv \
                                   --output_model_path models/sst-2_classifier_model.bin \
                                   --epochs_num 3 --batch_size 32

python3 inference/run_classifier_infer.py --load_model_path models/sst-2_classifier_model.bin \
                                          --vocab_path models/huggingface_gpt2_vocab.txt \
                                          --merges_path models/huggingface_gpt2_merges.txt \
                                          --tokenizer bpe \
                                          --config_path models/xlm-roberta/large_config.json \
                                          --test_path datasets/SST-2/test_nolabel.tsv \
                                          --prediction_path datasets/SST-2/prediction.tsv \
                                          --seq_length 128 --labels_num 2

QQP

The example of fine-tuning and doing inference on the QQP dataset with English BERT-Base-uncased:

python3 finetune/run_classifier.py --pretrained_model_path models/bert_base_en_uncased_model.bin \
                                   --vocab_path models/google_uncased_en_vocab.txt \
                                   --config_path models/bert/base_config.json \
                                   --train_path datasets/QQP/train.tsv \
                                   --dev_path datasets/QQP/dev.tsv \
                                   --output_model_path models/qqp_classifier_model.bin \
                                   --epochs_num 3 --batch_size 32

python3 inference/run_classifier_infer.py --load_model_path models/qqp_classifier_model.bin \
                                          --vocab_path models/google_uncased_en_vocab.txt \
                                          --config_path models/bert/base_config.json \
                                          --test_path datasets/QQP/test_nolabel.tsv \
                                          --prediction_path datasets/QQP/prediction.tsv \
                                          --seq_length 128 --labels_num 2

The example of fine-tuning and doing inference on the QQP dataset with English RoBERTa-Large:

python3 finetune/run_classifier.py --pretrained_model_path models/roberta_large_en_model.bin \
                                   --vocab_path models/huggingface_gpt2_vocab.txt \
                                   --merges_path models/huggingface_gpt2_merges.txt \
                                   --tokenizer bpe \
                                   --config_path models/xlm-roberta/large_config.json \
                                   --train_path datasets/QQP/train.tsv \
                                   --dev_path datasets/QQP/dev.tsv \
                                   --output_model_path models/qqp_classifier_model.bin \
                                   --epochs_num 3 --batch_size 32

python3 inference/run_classifier_infer.py --load_model_path models/qqp_classifier_model.bin \
                                          --vocab_path models/huggingface_gpt2_vocab.txt \
                                          --merges_path models/huggingface_gpt2_merges.txt \
                                          --tokenizer bpe \
                                          --config_path models/xlm-roberta/large_config.json \
                                          --test_path datasets/QQP/test_nolabel.tsv \
                                          --prediction_path datasets/QQP/prediction.tsv \
                                          --seq_length 128 --labels_num 2

QNLI

The example of fine-tuning and doing inference on the QNLI dataset with English BERT-Base-uncased:

python3 finetune/run_classifier.py --pretrained_model_path models/bert_base_en_uncased_model.bin \
                                   --vocab_path models/google_uncased_en_vocab.txt \
                                   --config_path models/bert/base_config.json \
                                   --train_path datasets/QNLI/train.tsv \
                                   --dev_path datasets/QNLI/dev.tsv \
                                   --output_model_path models/qnli_classifier_model.bin \
                                   --epochs_num 3 --batch_size 32

python3 inference/run_classifier_infer.py --load_model_path models/qnli_classifier_model.bin \
                                          --vocab_path models/google_uncased_en_vocab.txt \
                                          --config_path models/bert/base_config.json \
                                          --test_path datasets/QNLI/test_nolabel.tsv \
                                          --prediction_path datasets/QNLI/prediction.tsv \
                                          --seq_length 128 --labels_num 2

The example of fine-tuning and doing inference on the QNLI dataset with English RoBERTa-Large:

python3 finetune/run_classifier.py --pretrained_model_path models/roberta_large_en_model.bin \
                                   --vocab_path models/huggingface_gpt2_vocab.txt \
                                   --merges_path models/huggingface_gpt2_merges.txt \
                                   --tokenizer bpe \
                                   --config_path models/xlm-roberta/large_config.json \
                                   --train_path datasets/QNLI/train.tsv \
                                   --dev_path datasets/QNLI/dev.tsv \
                                   --output_model_path models/qnli_classifier_model.bin \
                                   --epochs_num 3 --batch_size 32

python3 inference/run_classifier_infer.py --load_model_path models/qnli_classifier_model.bin \
                                          --vocab_path models/huggingface_gpt2_vocab.txt \
                                          --merges_path models/huggingface_gpt2_merges.txt \
                                          --tokenizer bpe \
                                          --config_path models/xlm-roberta/large_config.json \
                                          --test_path datasets/QNLI/test_nolabel.tsv \
                                          --prediction_path datasets/QNLI/prediction.tsv \
                                          --seq_length 128 --labels_num 2

WNLI

The example of fine-tuning and doing inference on the WNLI dataset with English BERT-Base-uncased:

python3 finetune/run_classifier.py --pretrained_model_path models/bert_base_en_uncased_model.bin \
                                   --vocab_path models/google_uncased_en_vocab.txt \
                                   --config_path models/bert/base_config.json \
                                   --train_path datasets/WNLI/train.tsv \
                                   --dev_path datasets/WNLI/dev.tsv \
                                   --output_model_path models/wnli_classifier_model.bin \
                                   --epochs_num 3 --batch_size 32

python3 inference/run_classifier_infer.py --load_model_path models/wnli_classifier_model.bin \
                                          --vocab_path models/google_uncased_en_vocab.txt \
                                          --config_path models/bert/base_config.json \
                                          --test_path datasets/WNLI/test_nolabel.tsv \
                                          --prediction_path datasets/WNLI/prediction.tsv \
                                          --seq_length 128 --labels_num 2

The example of fine-tuning and doing inference on the WNLI dataset with English RoBERTa-Large:

python3 finetune/run_classifier.py --pretrained_model_path models/roberta_large_en_model.bin \
                                   --vocab_path models/huggingface_gpt2_vocab.txt \
                                   --merges_path models/huggingface_gpt2_merges.txt \
                                   --tokenizer bpe \
                                   --config_path models/xlm-roberta/large_config.json \
                                   --train_path datasets/WNLI/train.tsv \
                                   --dev_path datasets/WNLI/dev.tsv \
                                   --output_model_path models/wnli_classifier_model.bin \
                                   --epochs_num 3 --batch_size 32

python3 inference/run_classifier_infer.py --load_model_path models/wnli_classifier_model.bin \
                                          --vocab_path models/huggingface_gpt2_vocab.txt \
                                          --merges_path models/huggingface_gpt2_merges.txt \
                                          --tokenizer bpe \
                                          --config_path models/xlm-roberta/large_config.json \
                                          --test_path datasets/WNLI/test_nolabel.tsv \
                                          --prediction_path datasets/WNLI/prediction.tsv \
                                          --seq_length 128 --labels_num 2

MNLI

The MNLI dataset consists of two development sets and two test sets, named MNLI-m (matched) and MNLI-mm (mismatched) respectively. The example of fine-tuning and doing inference on MNLI-m and MNLI-mm with English BERT-Base-uncased:

python3 finetune/run_classifier.py --pretrained_model_path models/bert_base_en_uncased_model.bin \
                                   --vocab_path models/google_uncased_en_vocab.txt \
                                   --config_path models/bert/base_config.json \
                                   --train_path datasets/MNLI/train.tsv \
                                   --dev_path datasets/MNLI/dev_matched.tsv \
                                   --output_model_path models/mnli_classifier_model.bin \
                                   --epochs_num 3 --batch_size 32

python3 inference/run_classifier_infer.py --load_model_path models/mnli_classifier_model.bin \
                                          --vocab_path models/google_uncased_en_vocab.txt \
                                          --config_path models/bert/base_config.json \
                                          --test_path datasets/MNLI/test_nolabel_matched.tsv \
                                          --prediction_path datasets/MNLI/prediction_matched.tsv \
                                          --seq_length 128 --labels_num 3

python3 finetune/run_classifier.py --pretrained_model_path models/bert_base_en_uncased_model.bin \
                                   --vocab_path models/google_uncased_en_vocab.txt \
                                   --config_path models/bert/base_config.json \
                                   --train_path datasets/MNLI/train.tsv \
                                   --dev_path datasets/MNLI/dev_mismatched.tsv \
                                   --output_model_path models/mnli_classifier_model.bin \
                                   --epochs_num 3 --batch_size 32

python3 inference/run_classifier_infer.py --load_model_path models/mnli_classifier_model.bin \
                                          --vocab_path models/google_uncased_en_vocab.txt \
                                          --config_path models/bert/base_config.json \
                                          --test_path datasets/MNLI/test_nolabel_mismatched.tsv \
                                          --prediction_path datasets/MNLI/prediction_mismatched.tsv \
                                          --seq_length 128 --labels_num 3

The example of fine-tuning and doing inference on MNLI-m and MNLI-mm with English RoBERTa-Large:

python3 finetune/run_classifier.py --pretrained_model_path models/roberta_large_en_model.bin \
                                   --vocab_path models/huggingface_gpt2_vocab.txt \
                                   --merges_path models/huggingface_gpt2_merges.txt \
                                   --tokenizer bpe \
                                   --config_path models/xlm-roberta/large_config.json \
                                   --train_path datasets/MNLI/train.tsv \
                                   --dev_path datasets/MNLI/dev_matched.tsv \
                                   --output_model_path models/mnli_classifier_model.bin \
                                   --epochs_num 3 --batch_size 32

python3 inference/run_classifier_infer.py --load_model_path models/mnli_classifier_model.bin \
                                          --vocab_path models/huggingface_gpt2_vocab.txt \
                                          --merges_path models/huggingface_gpt2_merges.txt \
                                          --tokenizer bpe \
                                          --config_path models/xlm-roberta/large_config.json \
                                          --test_path datasets/MNLI/test_nolabel_matched.tsv \
                                          --prediction_path datasets/MNLI/prediction_matched.tsv \
                                          --seq_length 128 --labels_num 3

python3 finetune/run_classifier.py --pretrained_model_path models/roberta_large_en_model.bin \
                                   --vocab_path models/huggingface_gpt2_vocab.txt \
                                   --merges_path models/huggingface_gpt2_merges.txt \
                                   --tokenizer bpe \
                                   --config_path models/xlm-roberta/large_config.json \
                                   --train_path datasets/MNLI/train.tsv \
                                   --dev_path datasets/MNLI/dev_mismatched.tsv \
                                   --output_model_path models/mnli_classifier_model.bin \
                                   --epochs_num 3 --batch_size 32

python3 inference/run_classifier_infer.py --load_model_path models/mnli_classifier_model.bin \
                                          --vocab_path models/huggingface_gpt2_vocab.txt \
                                          --merges_path models/huggingface_gpt2_merges.txt \
                                          --tokenizer bpe \
                                          --config_path models/xlm-roberta/large_config.json \
                                          --test_path datasets/MNLI/test_nolabel_mismatched.tsv \
                                          --prediction_path datasets/MNLI/prediction_mismatched.tsv \
                                          --seq_length 128 --labels_num 3

It is pointed out in the RoBERTa paper that better results can be achieved on MRPC, RTE, and STS-B by fine-tuning upon the MNLI-m model, and we follow this setting. Since MRPC, RTE, and STS-B have different targets from MNLI, we remove the target layer of the MNLI model before using it:

import torch

# Load the fine-tuned MNLI classifier weights on CPU.
input_model = torch.load("models/mnli_classifier_model.bin", map_location="cpu")
# Drop the parameters of the MNLI target (classification output) layer.
for key in list(input_model.keys()):
    if "output_layer_2" in key:
        del input_model[key]
# Save the stripped model for further fine-tuning.
torch.save(input_model, "models/mnli_classifier_delete_target_model.bin")
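
The resulting models/mnli_classifier_delete_target_model.bin is then passed to --pretrained_model_path in the MRPC, RTE, and STS-B examples below.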

MRPC

The example of fine-tuning and doing inference on the MRPC dataset with English BERT-Base-uncased:

python3 finetune/run_classifier.py --pretrained_model_path models/bert_base_en_uncased_model.bin \
                                   --vocab_path models/google_uncased_en_vocab.txt \
                                   --config_path models/bert/base_config.json \
                                   --train_path datasets/MRPC/train.tsv \
                                   --dev_path datasets/MRPC/dev.tsv \
                                   --output_model_path models/mrpc_classifier_model.bin \
                                   --epochs_num 3 --batch_size 32

python3 inference/run_classifier_infer.py --load_model_path models/mrpc_classifier_model.bin \
                                          --vocab_path models/google_uncased_en_vocab.txt \
                                          --config_path models/bert/base_config.json \
                                          --test_path datasets/MRPC/test_nolabel.tsv \
                                          --prediction_path datasets/MRPC/prediction.tsv \
                                          --seq_length 128 --labels_num 2

The example of fine-tuning and doing inference on the MRPC dataset with English RoBERTa-Large fine-tuned on MNLI-m:

python3 finetune/run_classifier.py --pretrained_model_path models/mnli_classifier_delete_target_model.bin \
                                   --vocab_path models/huggingface_gpt2_vocab.txt \
                                   --merges_path models/huggingface_gpt2_merges.txt \
                                   --tokenizer bpe \
                                   --config_path models/xlm-roberta/large_config.json \
                                   --train_path datasets/MRPC/train.tsv \
                                   --dev_path datasets/MRPC/dev.tsv \
                                   --output_model_path models/mrpc_classifier_model.bin \
                                   --epochs_num 3 --batch_size 32

python3 inference/run_classifier_infer.py --load_model_path models/mrpc_classifier_model.bin \
                                          --vocab_path models/huggingface_gpt2_vocab.txt \
                                          --merges_path models/huggingface_gpt2_merges.txt \
                                          --tokenizer bpe \
                                          --config_path models/xlm-roberta/large_config.json \
                                          --test_path datasets/MRPC/test_nolabel.tsv \
                                          --prediction_path datasets/MRPC/prediction.tsv \
                                          --seq_length 128 --labels_num 2

RTE

The example of fine-tuning and doing inference on the RTE dataset with English BERT-Base-uncased:

python3 finetune/run_classifier.py --pretrained_model_path models/bert_base_en_uncased_model.bin \
                                   --vocab_path models/google_uncased_en_vocab.txt \
                                   --config_path models/bert/base_config.json \
                                   --train_path datasets/RTE/train.tsv \
                                   --dev_path datasets/RTE/dev.tsv \
                                   --output_model_path models/rte_classifier_model.bin \
                                   --epochs_num 3 --batch_size 32

python3 inference/run_classifier_infer.py --load_model_path models/rte_classifier_model.bin \
                                          --vocab_path models/google_uncased_en_vocab.txt \
                                          --config_path models/bert/base_config.json \
                                          --test_path datasets/RTE/test_nolabel.tsv \
                                          --prediction_path datasets/RTE/prediction.tsv \
                                          --seq_length 128 --labels_num 2

The example of fine-tuning and doing inference on the RTE dataset with English RoBERTa-Large fine-tuned on MNLI-m:

python3 finetune/run_classifier.py --pretrained_model_path models/mnli_classifier_delete_target_model.bin \
                                   --vocab_path models/huggingface_gpt2_vocab.txt \
                                   --merges_path models/huggingface_gpt2_merges.txt \
                                   --tokenizer bpe \
                                   --config_path models/xlm-roberta/large_config.json \
                                   --train_path datasets/RTE/train.tsv \
                                   --dev_path datasets/RTE/dev.tsv \
                                   --output_model_path models/rte_classifier_model.bin \
                                   --epochs_num 3 --batch_size 32

python3 inference/run_classifier_infer.py --load_model_path models/rte_classifier_model.bin \
                                          --vocab_path models/huggingface_gpt2_vocab.txt \
                                          --merges_path models/huggingface_gpt2_merges.txt \
                                          --tokenizer bpe \
                                          --config_path models/xlm-roberta/large_config.json \
                                          --test_path datasets/RTE/test_nolabel.tsv \
                                          --prediction_path datasets/RTE/prediction.tsv \
                                          --seq_length 128 --labels_num 2

STS-B

The example of fine-tuning and doing inference on the STS-B dataset with English BERT-Base-uncased:

python3 finetune/run_regression.py --pretrained_model_path models/bert_base_en_uncased_model.bin \
                                   --vocab_path models/google_uncased_en_vocab.txt \
                                   --config_path models/bert/base_config.json \
                                   --train_path datasets/STS-B/train.tsv \
                                   --dev_path datasets/STS-B/dev.tsv \
                                   --output_model_path models/sts-b_regression_model.bin \
                                   --epochs_num 5 --batch_size 32

python3 inference/run_regression_infer.py --load_model_path models/sts-b_regression_model.bin \
                                          --vocab_path models/google_uncased_en_vocab.txt \
                                          --config_path models/bert/base_config.json \
                                          --test_path datasets/STS-B/test_nolabel.tsv \
                                          --prediction_path datasets/STS-B/prediction.tsv \
                                          --seq_length 128 

The example of fine-tuning and doing inference on the STS-B dataset with English RoBERTa-Large fine-tuned on MNLI-m:

python3 finetune/run_regression.py --pretrained_model_path models/mnli_classifier_delete_target_model.bin \
                                   --vocab_path models/huggingface_gpt2_vocab.txt \
                                   --merges_path models/huggingface_gpt2_merges.txt \
                                   --tokenizer bpe \
                                   --config_path models/xlm-roberta/large_config.json \
                                   --train_path datasets/STS-B/train.tsv \
                                   --dev_path datasets/STS-B/dev.tsv \
                                   --output_model_path models/sts-b_regression_model.bin \
                                   --epochs_num 5 --batch_size 32

python3 inference/run_regression_infer.py --load_model_path models/sts-b_regression_model.bin \
                                          --vocab_path models/huggingface_gpt2_vocab.txt \
                                          --merges_path models/huggingface_gpt2_merges.txt \
                                          --tokenizer bpe \
                                          --config_path models/xlm-roberta/large_config.json \
                                          --test_path datasets/STS-B/test_nolabel.tsv \
                                          --prediction_path datasets/STS-B/prediction.tsv \
                                          --seq_length 128 