From 4a7ff9cee10d8ee25d9b4afdb7718e5e46a27d65 Mon Sep 17 00:00:00 2001
From: dartrevan
Date: Sun, 8 Nov 2020 14:29:29 +0300
Subject: [PATCH] Update README.md

---
 model_cards/cimm-kzn/enrudr-bert/README.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/model_cards/cimm-kzn/enrudr-bert/README.md b/model_cards/cimm-kzn/enrudr-bert/README.md
index 0ac53676e75c1..f4ec132c8c722 100644
--- a/model_cards/cimm-kzn/enrudr-bert/README.md
+++ b/model_cards/cimm-kzn/enrudr-bert/README.md
@@ -3,9 +3,9 @@ language:
 - ru
 - en
 ---
 
-## RuDR-BERT
+## EnRuDR-BERT
 
-EnRuDR-BERT - Multilingual, Cased, which pretrained on the raw part of the RuDReC corpus (1.4M reviews) and collecting of consumer comments on drug administration from [2]. Pre-training was based on the [original BERT code](https://github.com/google-research/bert) provided by Google. In particular, Multi-BERT was for used for initialization; vocabulary of Russian subtokens and parameters are the same as in Multi-BERT. Training details are described in our paper. \
+EnRuDR-BERT is a multilingual, cased model pretrained on the raw part of the RuDReC corpus (1.4M reviews) and an English collection of consumer comments on drug administration from [2]. Pre-training was based on the [original BERT code](https://github.com/google-research/bert) provided by Google. In particular, Multi-BERT was used for initialization; the vocabulary of Russian subtokens and the parameters are the same as in Multi-BERT. Training details are described in our paper. \
 link: https://yadi.sk/d/-PTn0xhk1PqvgQ
 