BERTweet: A pre-trained language model for English Tweets

  • BERTweet is the first public large-scale language model pre-trained for English Tweets. BERTweet is trained with the RoBERTa pre-training procedure, using the same model configuration as BERT-base.
  • The corpus used to pre-train BERTweet consists of 850M English Tweets (16B word tokens, ~80GB), comprising 845M Tweets streamed from 01/2012 to 08/2019 and 5M Tweets related to the COVID-19 pandemic.
  • BERTweet outperforms its strong baselines RoBERTa-base and XLM-R-base, as well as previous state-of-the-art models, on three downstream Tweet NLP tasks: part-of-speech tagging, named-entity recognition, and text classification.

The general architecture and experimental results of BERTweet can be found in our EMNLP-2020 demo paper:

@inproceedings{bertweet,
title     = {{BERTweet: A pre-trained language model for English Tweets}},
author    = {Dat Quoc Nguyen and Thanh Vu and Anh Tuan Nguyen},
booktitle = {Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations},
year      = {2020}
}

Please CITE our paper when BERTweet is used to help produce published results or is incorporated into other software.

For further information or requests, please go to BERTweet's homepage!

Installation

  • Python version >= 3.6
  • PyTorch version >= 1.4.0
  • pip3 install transformers emoji
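
A quick sanity check that the environment meets these requirements (a minimal sketch; the version floors are those listed above):

import sys
import torch
import transformers

# Version floors per the requirements listed above
assert sys.version_info >= (3, 6)
print("torch:", torch.__version__)
print("transformers:", transformers.__version__)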

Pre-trained model

Model                 #params   Arch.   Pre-training data
vinai/bertweet-base   135M      base    845M English Tweets (80GB)

Example usage

import torch
from transformers import AutoModel, AutoTokenizer #, BertweetTokenizer

bertweet = AutoModel.from_pretrained("vinai/bertweet-base")
tokenizer = AutoTokenizer.from_pretrained("vinai/bertweet-base")
#tokenizer = BertweetTokenizer.from_pretrained("vinai/bertweet-base")

# INPUT TWEET IS ALREADY NORMALIZED!
line = "SC has first two presumptive cases of coronavirus , DHEC confirms HTTPURL via @USER :cry:"

input_ids = torch.tensor([tokenizer.encode(line)])

with torch.no_grad():
    features = bertweet(input_ids)  # Model outputs are tuples
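
With tuple outputs, the first element holds the token-level hidden states. A short follow-up sketch (the 768-dimensional hidden size is that of the base architecture):

# The first tuple element is the last hidden state,
# shaped (batch_size, sequence_length, hidden_size)
last_hidden_states = features[0]
print(last_hidden_states.shape)  # torch.Size([1, sequence_length, 768])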

Normalize raw input Tweets

Before applying fastBPE to the pre-training corpus of 850M English Tweets, we tokenized these Tweets using TweetTokenizer from the NLTK toolkit and used the emoji package to translate emotion icons into text strings (each icon is then treated as a single word token). We also normalized the Tweets by converting user mentions and web/URL links into the special tokens @USER and HTTPURL, respectively. We therefore recommend applying the same pre-processing to raw input Tweets in BERTweet-based downstream applications.
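
For illustration, a minimal sketch of the normalization described above, assuming the nltk and emoji packages are installed (the tokenizer's built-in normalization mode, shown below, is the simpler route, and the official pre-processing may differ in details):

from nltk.tokenize import TweetTokenizer
from emoji import demojize

tweet_tokenizer = TweetTokenizer()

def normalize_tweet(tweet):
    # Tokenize with NLTK's TweetTokenizer, then map mentions, URLs, and emoji
    tokens = tweet_tokenizer.tokenize(tweet)
    normalized = []
    for token in tokens:
        if token.startswith("@"):
            normalized.append("@USER")          # user mentions -> @USER
        elif token.lower().startswith(("http", "www")):
            normalized.append("HTTPURL")        # web/URL links -> HTTPURL
        else:
            normalized.append(demojize(token))  # emotion icons -> text strings
    return " ".join(normalized)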

import torch
from transformers import BertweetTokenizer

# Load BertweetTokenizer with normalization enabled when the input Tweet is raw
tokenizer = BertweetTokenizer.from_pretrained("vinai/bertweet-base", normalization=True)

# BERTweet's tokenizer can also be loaded in the "Auto" mode
# from transformers import AutoTokenizer
# tokenizer = AutoTokenizer.from_pretrained("vinai/bertweet-base", normalization=True)

line = "SC has first two presumptive cases of coronavirus, DHEC confirms https://postandcourier.com/health/covid19/sc-has-first-two-presumptive-cases-of-coronavirus-dhec-confirms/article_bddfe4ae-5fd3-11ea-9ce4-5f495366cee6.html?utm_medium=social&utm_source=twitter&utm_campaign=user-share… via @postandcourier"

input_ids = torch.tensor([tokenizer.encode(line)])
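
With normalization enabled, the user mention and the URL in the raw Tweet above should be mapped to @USER and HTTPURL. A quick way to verify this (a usage sketch):

# Decode the ids back to text; the mention and URL should now
# appear as the special tokens @USER and HTTPURL
print(tokenizer.decode(input_ids[0]))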