Pre-training of Deep Bidirectional Transformers for Language Understanding: pre-train TextCNN
A PyTorch port of the BERT model
A short overview of the most popular models for Named Entity Recognition
Implementation of BERT that can load the official pre-trained models for feature extraction and prediction
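For orientation, here is a minimal sketch of what loading a pre-trained BERT for feature extraction typically looks like, assuming the Hugging Face transformers library rather than this repo's own API; the checkpoint name is just an example.

```python
# Feature-extraction sketch (Hugging Face transformers assumed, not this
# repo's own API; "bert-base-uncased" is an example checkpoint).
import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased")
model.eval()

inputs = tokenizer("BERT turns text into contextual vectors.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

token_features = outputs.last_hidden_state   # (batch, seq_len, hidden_size)
sentence_feature = token_features[:, 0, :]   # [CLS] vector, a common sentence feature
```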
Token and Sentence Level Classification with Google's BERT (TensorFlow)
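As a rough illustration of the sentence-level case (a token-level sketch appears further down the list), the following assumes the TensorFlow classes from Hugging Face transformers, not this repo's code; the checkpoint and label count are placeholders, and a freshly added head like this is untrained until fine-tuned.

```python
# Sentence-level classification sketch in TensorFlow (transformers assumed).
import tensorflow as tf
from transformers import BertTokenizer, TFBertForSequenceClassification

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = TFBertForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)        # placeholder label count

inputs = tokenizer("The food was great!", return_tensors="tf")
logits = model(inputs).logits                 # (batch, num_labels)
predicted_class = int(tf.argmax(logits, axis=-1)[0])
```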
Multiple-Relations-Extraction-Only-Look-Once. Reads the sentence only once and extracts multiple pairs of entities and their corresponding relations. An end-to-end joint multi-relation extraction model, applicable to the information extraction task at http://lic2019.ccf.org.cn/kg.
Text classification with the BERT model, intended for industrial use
Google BERT implementation built on pytorch-template
TensorFlow code for a Lite BERT reimplementation
Trying to adapt BERT for images
Multi-label classifier based on BERT, in PyTorch
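A minimal sketch of such a head, assuming Hugging Face transformers; NUM_LABELS and the checkpoint are placeholders. The key difference from single-label classification is an independent sigmoid per label, hence BCEWithLogitsLoss instead of cross-entropy over a softmax.

```python
# Multi-label BERT head sketch in PyTorch (transformers assumed).
import torch
import torch.nn as nn
from transformers import BertModel

NUM_LABELS = 6  # placeholder label count

class BertMultiLabel(nn.Module):
    def __init__(self):
        super().__init__()
        self.bert = BertModel.from_pretrained("bert-base-uncased")
        self.classifier = nn.Linear(self.bert.config.hidden_size, NUM_LABELS)

    def forward(self, input_ids, attention_mask):
        out = self.bert(input_ids=input_ids, attention_mask=attention_mask)
        cls = out.last_hidden_state[:, 0, :]   # [CLS] representation
        return self.classifier(cls)            # one independent logit per label

criterion = nn.BCEWithLogitsLoss()             # sigmoid per label, not softmax
```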
A Named Entity Recognition project using a pre-trained BERT model
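For reference, a token-level (NER) sketch with a pre-trained BERT, again assuming Hugging Face transformers; num_labels=9 matches the CoNLL-2003 BIO tag set and is an assumption, and the classification head is randomly initialized until fine-tuned.

```python
# Token classification (NER) sketch (transformers assumed).
import torch
from transformers import BertTokenizerFast, BertForTokenClassification

tokenizer = BertTokenizerFast.from_pretrained("bert-base-cased")
model = BertForTokenClassification.from_pretrained(
    "bert-base-cased", num_labels=9)     # 9 = CoNLL-2003 BIO tags (assumption)
model.eval()

inputs = tokenizer("John lives in Berlin", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits      # (batch, seq_len, num_labels)
predicted_tags = logits.argmax(dim=-1)   # one tag id per subword token
```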
A binary classification task for identifying speakers in a dialogue, trained using an RNN with attention and BERT on data from the British Parliament.
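The repo's exact architecture is not shown, but an RNN-with-attention binary classifier along these lines is a common pattern; all sizes below are illustrative.

```python
# Illustrative RNN-with-attention binary classifier (not the repo's code).
import torch
import torch.nn as nn

class AttnRNNClassifier(nn.Module):
    def __init__(self, vocab_size=30000, emb_dim=128, hidden=256):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.rnn = nn.GRU(emb_dim, hidden, batch_first=True, bidirectional=True)
        self.attn = nn.Linear(2 * hidden, 1)  # one attention score per time step
        self.out = nn.Linear(2 * hidden, 1)   # single logit for the binary label

    def forward(self, token_ids):
        h, _ = self.rnn(self.emb(token_ids))          # (B, T, 2H)
        weights = torch.softmax(self.attn(h), dim=1)  # normalize over time
        context = (weights * h).sum(dim=1)            # attention-pooled summary
        return self.out(context).squeeze(-1)          # train with BCEWithLogitsLoss

model = AttnRNNClassifier()
logits = model(torch.randint(0, 30000, (4, 20)))      # toy batch of 4 utterances
```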
This project uses BERT (Bidirectional Encoder Representations from Transformers) for Yelp-5 fine-grained sentiment analysis. It also explores various custom loss functions for regression-based approaches to fine-grained sentiment analysis.
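The description mentions regression-based variants with custom losses; a plausible sketch (the repo's actual losses are not shown here) predicts the star rating as a single scalar on top of BERT's pooled output.

```python
# Regression-style fine-grained sentiment sketch (illustrative, not the
# repo's code): one scalar output on top of BERT's pooled vector.
import torch
import torch.nn as nn

class StarRegressionHead(nn.Module):
    def __init__(self, hidden_size=768):   # 768 = BERT-base hidden size
        super().__init__()
        self.out = nn.Linear(hidden_size, 1)

    def forward(self, pooled):              # pooled: (batch, hidden_size)
        return self.out(pooled).squeeze(-1)

head = StarRegressionHead()
criterion = nn.SmoothL1Loss()               # one plausible "custom" loss choice
pooled = torch.randn(8, 768)                # stand-in for BERT output
target = torch.tensor([1., 5., 3., 4., 2., 5., 1., 3.])
loss = criterion(head(pooled), target)

stars = head(pooled).clamp(1, 5).round()    # discrete 1-5 prediction at inference
```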
top1-solution
Converted Chinese ALBERT models (for pytorch-transformers)