Pre-Trained Chinese XLNet Model (XLNet_Large)
Recasts multi-label classification as a sentence-pair task, with more training data and label information.
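A minimal sketch of the sentence-pair reframing, assuming Hugging Face transformers: each (text, label description) pair becomes a binary "does this label apply?" example. The model name, label descriptions, and sample text are illustrative assumptions, and the 2-way classifier head is randomly initialized until fine-tuned on paired examples.

    # Sketch: recast multi-label classification as sentence-pair classification.
    # Each (text, label description) pair is scored as "label applies" vs. not.
    # Model name and label descriptions are illustrative assumptions.
    import torch
    from transformers import AutoTokenizer, AutoModelForSequenceClassification

    tokenizer = AutoTokenizer.from_pretrained("roberta-base")
    model = AutoModelForSequenceClassification.from_pretrained("roberta-base", num_labels=2)

    text = "The patient reports fever and a persistent cough."
    label_descriptions = ["This text is about symptoms.", "This text is about medication."]

    for desc in label_descriptions:
        # Encode the document and one label description as a sentence pair.
        inputs = tokenizer(text, desc, return_tensors="pt", truncation=True)
        with torch.no_grad():
            logits = model(**inputs).logits
        prob = torch.softmax(logits, dim=-1)[0, 1].item()  # P(label applies)
        print(f"{desc!r}: {prob:.3f}")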
A distilled RoBERTa-wwm base model, distilled from RoBERTa-wwm-large.
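For context, here is a minimal sketch of standard logit distillation (KL divergence on temperature-softened teacher and student outputs); this shows the generic technique, not necessarily the exact recipe used for this model.

    # Sketch of logit distillation: the student matches the teacher's
    # temperature-softened output distribution via KL divergence.
    import torch
    import torch.nn.functional as F

    def distillation_loss(student_logits, teacher_logits, temperature=2.0):
        s = F.log_softmax(student_logits / temperature, dim=-1)
        t = F.softmax(teacher_logits / temperature, dim=-1)
        # The T^2 factor keeps gradient magnitudes comparable across temperatures.
        return F.kl_div(s, t, reduction="batchmean") * temperature ** 2

    # Random logits stand in for teacher (large) and student (base) outputs;
    # 21128 is the Chinese BERT/RoBERTa-wwm vocabulary size.
    loss = distillation_loss(torch.randn(8, 21128), torch.randn(8, 21128))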
OOD Generalization and Detection (ACL 2020)
Source code for our "MMM" paper at AAAI 2020
Implementation of the semi-structured inference model from our ACL 2020 paper, "INFOTABS: Inference on Tables as Semi-structured Data".
2020 Beijing Open Data Competition
Pytorch-Named-Entity-Recognition-with-transformers
APOLLO-1: Online Toxicity Detection
A collection of high-quality Chinese pre-trained models: state-of-the-art large models, the fastest small models, and dedicated similarity models.
BERT (Bidirectional Encoder Representations from Transformers) is a transformer-based method of learning language representations. It is a bidirectional transformer pre-trained on a large corpus with two objectives: masked language modeling and next-sentence prediction.
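The masked language modeling objective is easy to see with the Hugging Face fill-mask pipeline; the checkpoint below is just one public example.

    # Predict the token hidden behind [MASK] with a pre-trained BERT.
    from transformers import pipeline

    fill_mask = pipeline("fill-mask", model="bert-base-uncased")
    for candidate in fill_mask("The capital of France is [MASK]."):
        print(candidate["token_str"], round(candidate["score"], 3))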
Classification of a multilingual dataset using pre-trained models trained only on English data. The model is trained on TPUs with PyTorch and the torch_xla library.
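A minimal sketch of the torch_xla training-step pattern, with a placeholder linear model and random batches standing in for the real classifier and data.

    # torch_xla pattern: put model and data on the XLA (TPU) device, then step
    # the optimizer through xm.optimizer_step, which also flushes the lazy graph.
    import torch
    import torch_xla.core.xla_model as xm

    device = xm.xla_device()
    model = torch.nn.Linear(768, 2).to(device)          # placeholder model
    optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

    for step in range(10):
        x = torch.randn(32, 768, device=device)         # placeholder batch
        y = torch.randint(0, 2, (32,), device=device)
        optimizer.zero_grad()
        loss = torch.nn.functional.cross_entropy(model(x), y)
        loss.backward()
        xm.optimizer_step(optimizer)                    # barrier + step on TPU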
End-to-end integration of HuggingFace's models for sequence labeling.
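For example, sequence labeling (NER) runs end to end through the token-classification pipeline; the checkpoint named here is an illustrative public one.

    # Token-level sequence labeling with a Hugging Face NER pipeline.
    from transformers import pipeline

    ner = pipeline("ner", model="dslim/bert-base-NER", aggregation_strategy="simple")
    for entity in ner("Hugging Face is based in New York City."):
        print(entity["entity_group"], entity["word"], round(entity["score"], 3))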
2020 Alibaba Cloud Tianchi Big Data Competition: Traditional Chinese Medicine Named Entity Recognition Challenge