This repository provides BERT models transfer fine-tuned with paraphrases, which show strong performance on sentence pair modeling tasks, e.g., paraphrase identification, semantic textual similarity assessment, and natural language inference. For details of model training, please refer to the following paper.
Yuki Arase and Junichi Tsujii. 2019. Transfer Fine-Tuning: A BERT Case Study. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP 2019). (paper at arXiv)
If you use our models in any publication, please cite the paper above.
Please download the trained models from Zenodo.
Our training code depends on pytorch-pretrained-bert (the former version of transformers). Please first install PyTorch and pytorch-pretrained-bert, e.g., as shown below.
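For example, both dependencies are available from PyPI under these package names (a minimal install sketch; you may need to pin versions to match your environment):

```bash
pip install torch pytorch-pretrained-bert
```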
Note: Our models should also work with transformers without any problem, but we have not tested this yet.
Once you have PyTorch and pytorch-pretrained-bert installed, our models can be used in the same manner as BERT's pre-trained models. Just load our models like this:
```python
self.bert = BertModel.from_pretrained('bert-base-uncased')
self.load_state_dict(torch.load('path-to-downloaded-model'))
```
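For context, here is a minimal, self-contained sketch of that pattern inside a sentence-pair classifier. The class name, the linear head, and the `strict=False` argument are our illustrative assumptions, not part of the released code; only the two lines above come from the usage pattern shown here.

```python
import torch
import torch.nn as nn
from pytorch_pretrained_bert import BertModel

class PairClassifier(nn.Module):
    """Hypothetical sentence-pair classifier built on the transfer fine-tuned BERT."""

    def __init__(self, model_path, num_labels=2):
        super(PairClassifier, self).__init__()
        # Build the standard BERT-base architecture first ...
        self.bert = BertModel.from_pretrained('bert-base-uncased')
        self.classifier = nn.Linear(self.bert.config.hidden_size, num_labels)
        # ... then overwrite its weights with the downloaded transfer fine-tuned model.
        # strict=False tolerates the freshly initialized classifier head, which is
        # not present in the checkpoint.
        self.load_state_dict(torch.load(model_path), strict=False)

    def forward(self, input_ids, token_type_ids=None, attention_mask=None):
        # pytorch-pretrained-bert's BertModel returns (encoded_layers, pooled_output).
        _, pooled = self.bert(input_ids, token_type_ids, attention_mask,
                              output_all_encoded_layers=False)
        return self.classifier(pooled)

model = PairClassifier('path-to-downloaded-model')
```

Whether `strict=False` is needed depends on whether your module's structure matches the saved model exactly; if it does, the plain `load_state_dict` call shown above suffices.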