BERT Fine-Tuning for Sentence Classification on the CoLA Dataset

Dataset

The Corpus of Linguistic Acceptability (CoLA) in its full form consists of 10,657 sentences from 23 linguistics publications, expertly annotated for acceptability (grammaticality) by their original authors. The public version used here contains the 9,594 sentences of the training and development sets and excludes the 1,063 sentences of a held-out test set.
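
The public release ships as tab-separated files with no header row. As a minimal loading sketch: the `cola_public/raw/in_domain_train.tsv` path and the four-column layout (source, acceptability label, original author annotation, sentence) follow the public CoLA distribution, so adjust them to wherever your copy lives.

```python
import pandas as pd

# The public CoLA TSV files have no header row; the four columns are:
# source code, acceptability label (0 = unacceptable, 1 = acceptable),
# the original author annotation, and the sentence itself.
df = pd.read_csv(
    "cola_public/raw/in_domain_train.tsv",
    delimiter="\t",
    header=None,
    names=["source", "label", "label_notes", "sentence"],
)

sentences = df.sentence.values
labels = df.label.values
print(f"{len(sentences)} training sentences, e.g.: {sentences[0]!r}")
```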

BERT

BERT (Bidirectional Encoder Representations from Transformers), released in late 2018, is a method of pretraining language representations that produces models NLP practitioners can download and use directly. We can either use these models to extract high-quality language features from text data, or fine-tune them on a specific task (classification, named-entity recognition, question answering, etc.). A minimal fine-tuning sketch for CoLA follows.
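
As a hedged sketch (not necessarily the exact stack this repository uses), fine-tuning for CoLA can be done with the Hugging Face transformers library and PyTorch: a pretrained bert-base-uncased encoder gets a fresh two-way classification head, and passing `labels` to the model makes it return a ready-made loss. The `sentences` and `labels` arrays are assumed to come from the loading step above; the batch size, learning rate, and epoch count are illustrative choices, not the repository's settings.

```python
import torch
from torch.optim import AdamW
from torch.utils.data import DataLoader, TensorDataset
from transformers import BertForSequenceClassification, BertTokenizer

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Pretrained BERT encoder with a fresh 2-way classification head
# on top of the [CLS] token representation.
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2
).to(device)

# Tokenize all sentences at once, padding/truncating to a fixed length.
encoded = tokenizer(
    list(sentences), padding=True, truncation=True,
    max_length=64, return_tensors="pt",
)
dataset = TensorDataset(
    encoded["input_ids"], encoded["attention_mask"], torch.tensor(labels)
)
loader = DataLoader(dataset, batch_size=32, shuffle=True)

optimizer = AdamW(model.parameters(), lr=2e-5)
model.train()
for epoch in range(2):  # 2-4 epochs are typical for BERT fine-tuning
    for input_ids, attention_mask, batch_labels in loader:
        optimizer.zero_grad()
        out = model(
            input_ids=input_ids.to(device),
            attention_mask=attention_mask.to(device),
            labels=batch_labels.to(device),
        )
        out.loss.backward()  # the model returns a loss when labels are given
        optimizer.step()
```

At inference time, `out.logits` holds the per-class scores; CoLA results are conventionally reported as the Matthews correlation between predictions and the gold labels.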

Reference

Devlin, J., Chang, M.-W., Lee, K., & Toutanova, K. (2019). BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. NAACL-HLT 2019. arXiv:1810.04805.
