** This is work in progress **
BETO: Spanish BERT
BETO is a BERT model trained on a large Spanish corpus. BETO is similar in size to BERT-Base and was trained with the Whole Word Masking technique. Below you will find TensorFlow and PyTorch checkpoints for the uncased and cased versions, as well as results on Spanish benchmarks comparing BETO with Multilingual BERT and with other (not BERT-based) models.
| Model | TensorFlow weights | PyTorch weights | Vocab & config |
|---|---|---|---|
| BETO uncased | tensorflow weights | pytorch weights | vocab, config |
| BETO cased | tensorflow weights | pytorch weights | vocab, config |
All models use a vocabulary of about 31k BPE subwords constructed using SentencePiece.
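As a quick sanity check of the vocabulary, here is a minimal sketch of loading the tokenizer with the Hugging Face Transformers library and tokenizing a Spanish sentence. The Hub identifier `dccuchile/bert-base-spanish-wwm-uncased` is an assumption; substitute the path to the vocab and config files linked above if you downloaded them manually.

```python
from transformers import AutoTokenizer

# Assumed Hub identifier; replace with a local directory containing the
# downloaded vocab and config files if you are not using the Hub.
tokenizer = AutoTokenizer.from_pretrained("dccuchile/bert-base-spanish-wwm-uncased")

print(len(tokenizer))  # roughly 31k subword pieces
print(tokenizer.tokenize("La capital de Chile es Santiago."))
```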
The following table shows some BETO results on the Spanish version of each task. We compare BETO (cased and uncased) with the best Multilingual BERT results that we found in the literature (as of October 2019), highlighting the results whenever BETO outperforms Multilingual BERT on the Spanish task. The table also shows results from alternative methods for the same tasks (not necessarily BERT-based). References for all methods can be found here.
| Task | BETO-cased | BETO-uncased | Best Multilingual BERT | Other results |
|---|---|---|---|---|
| XNLI | ----- | **80.15** | 78.50 | 80.80, 77.80, 73.15 |
| POS | ----- | **98.44** | 97.10 | 98.91, 96.71 |
| NER-C | ----- | 81.70 | 87.38 | 87.18 |
Example of use
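Below is a minimal sketch of using BETO for masked-token prediction with the Hugging Face Transformers `fill-mask` pipeline. The Hub identifier `dccuchile/bert-base-spanish-wwm-cased` is an assumption; point `model` and `tokenizer` to a local checkpoint directory (PyTorch weights, vocab and config from the table above) if needed.

```python
from transformers import pipeline

# Assumed Hub identifier for the cased checkpoint; replace with a local path
# if you downloaded the PyTorch weights, vocab and config manually.
fill_mask = pipeline(
    "fill-mask",
    model="dccuchile/bert-base-spanish-wwm-cased",
    tokenizer="dccuchile/bert-base-spanish-wwm-cased",
)

# Print the top candidate tokens and their scores for the masked position.
for prediction in fill_mask("Vamos a comer unas [MASK] fritas."):
    print(prediction["token_str"], round(prediction["score"], 3))
```

Inspecting the top predictions like this is a quick way to verify that a downloaded checkpoint and vocabulary were loaded correctly.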
We thank Adereso for kindly providing support for training BETO-uncased, and the Millennium Institute for Foundational Research on Data for providing support for training BETO-cased.
-  Original Multilingual BERT
-  Multilingual BERT on "Beto, Bentz, Becas: The Surprising Cross-Lingual Effectiveness of BERT"
-  Multilingual BERT on "How Multilingual is Multilingual BERT?"
-  LASER
-  XLM (MLM+TLM)
-  UDPipe on "75 Languages, 1 Model: Parsing Universal Dependencies Universally"
-  Multilingual BERT on "Sequence Tagging with Contextual and Non-Contextual Subword Representations: A Multilingual Evaluation"
-  Multilingual BERT on "PAWS-X: A Cross-lingual Adversarial Dataset for Paraphrase Identification"