CamemBERT-bio is a state-of-the-art French biomedical language model built through continual pre-training from camembert-base. It was trained on a French public biomedical corpus of 413M words containing scientific documents, drug leaflets, and clinical cases extracted from theses and articles. It achieves an average improvement of 2.54 points of F1 score over camembert-base across five biomedical named entity recognition tasks.
Clinical data in hospitals are increasingly accessible for research through clinical data warehouses; however, these documents are unstructured. It is therefore necessary to extract information from medical reports to conduct clinical studies. Transfer learning with BERT-like models such as CamemBERT has enabled major advances, especially for named entity recognition. However, these models are trained on general-purpose language and are less effective on biomedical data. We therefore propose a new French public biomedical dataset on which we continued the pre-training of CamemBERT. Thus, we introduce a first version of CamemBERT-bio, a specialized public model for the French biomedical domain that shows an average improvement of 2.54 points of F1 score on different biomedical named entity recognition tasks.
- Pre-print: https://hal.science/hal-04085419
- Developed by: Rian Touchent, Eric Villemonte de La Clergerie
- Logo by: Alix Chagué
- License: MIT
Model available at: https://hf.co/almanach/camembert-bio-base
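As a minimal usage sketch (assuming the Hugging Face `transformers` library is installed, and that the example sentence is purely illustrative), the model can be loaded by the ID from the link above and queried with a fill-mask pipeline:

```python
# Illustrative sketch: load CamemBERT-bio via the transformers
# fill-mask pipeline. The model ID is taken from the link above;
# the French biomedical sentence below is a made-up example.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="almanach/camembert-bio-base")

# CamemBERT-based tokenizers use "<mask>" as the mask token.
results = fill_mask("Le patient présente une <mask> aiguë.")
for r in results:
    print(r["token_str"], round(r["score"], 3))
```

Each result is a dictionary containing the predicted token and its score; the model can likewise be loaded with `AutoTokenizer` and `AutoModelForMaskedLM` for fine-tuning on downstream tasks such as named entity recognition.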
Evaluation scripts coming soon.