BioWordVec & BioSentVec:
pre-trained embeddings for biomedical words and sentences
Table of contents
- Text corpora
- BioWordVec: biomedical word embeddings with fastText
- BioSentVec: biomedical sentence embeddings with sent2vec
We created biomedical word and sentence embeddings using PubMed and the clinical notes from the MIMIC-III Clinical Database. Both the PubMed and MIMIC-III texts were sentence-split and tokenized using NLTK, and all words were lowercased. The statistics of the two corpora are shown below.
| Corpus | Documents | Sentences | Tokens |
| --- | --- | --- | --- |
| MIMIC-III clinical notes | 2,083,180 | 41,674,775 | 539,006,967 |
BioWordVec: biomedical word embeddings with fastText
We applied fastText to compute 200-dimensional word embeddings. We set the window size to 20, the learning rate to 0.05, the sampling threshold to 1e-4, and the number of negative examples to 10. Both the word vectors and the model with hyperparameters are available for download below. The model file can be used to compute vectors for words that are not in the dictionary (i.e., out-of-vocabulary terms).
- BioWordVec vector 13GB (200dim, trained on PubMed+MIMIC-III, word2vec bin format)
- BioWordVec model 26GB (200dim, trained on PubMed+MIMIC-III)
BioSentVec: biomedical sentence embeddings with sent2vec
We applied sent2vec to compute 700-dimensional sentence embeddings. We used the bigram model and set the window size to 20 and the number of negative examples to 10.
- BioSentVec model 21GB (700dim, trained on PubMed+MIMIC-III)
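A common use of the sentence model is comparing two sentences by cosine similarity of their embeddings. The sketch below shows the usual sent2vec loading calls in comments (assuming the sent2vec Python package and the downloaded model file, too large to load here) and runs the preprocessing and cosine steps stand-alone on placeholder 700-dimensional vectors; the crude tokenizer stands in for the NLTK pipeline used on the corpora.

```python
import numpy as np

# With the sent2vec package and the released model (assumed local path):
# import sent2vec
# model = sent2vec.Sent2vecModel()
# model.load_model("BioSentVec_PubMed_MIMICIII-bigram_d700.bin")
# emb = model.embed_sentence(preprocess("Breast cancers with HER2 amplification."))

def preprocess(sentence: str) -> str:
    """Lowercase and crudely tokenize (the corpora were tokenized with NLTK)."""
    tokens = sentence.lower().replace(",", " , ").replace(".", " . ").split()
    return " ".join(tokens)

def cosine(u: np.ndarray, v: np.ndarray) -> float:
    """Cosine similarity between two sentence embedding vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Placeholder 700-dimensional embeddings standing in for embed_sentence output.
rng = np.random.default_rng(0)
emb_a = rng.normal(size=700)
emb_b = rng.normal(size=700)

print(preprocess("Breast cancers with HER2 amplification."))
print(round(cosine(emb_a, emb_a), 3))  # 1.0 for identical embeddings
```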
We evaluated the sentence embeddings on the BIOSSES and MedSTS sentence similarity benchmarks:

| Model | BIOSSES | MedSTS |
| --- | --- | --- |
| Averaged word embeddings | 0.694 | 0.747 |
| Universal Sentence Encoder | 0.345 | 0.714 |
| BioSentVec (PubMed + MIMIC-III) | 0.795 | 0.767 |
| Deep learning + Averaged word embeddings | 0.703 | 0.784 |
| Deep learning + Universal Sentence Encoder | 0.401 | 0.774 |
| Deep learning + BioSentVec (PubMed) | 0.824 | 0.819 |
| Deep learning + BioSentVec (MIMIC-III) | 0.353 | 0.805 |
| Deep learning + BioSentVec (PubMed + MIMIC-III) | 0.848 | 0.836 |
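Sentence similarity benchmarks such as these are typically scored with the Pearson correlation between model similarity scores and gold-standard annotations. Assuming that metric, a score can be computed as below (with made-up numbers, not the actual benchmark data):

```python
import numpy as np

# Hypothetical model similarity scores and gold annotations for a few sentence pairs.
predicted = np.array([0.91, 0.42, 0.77, 0.15, 0.60])
gold = np.array([0.88, 0.50, 0.70, 0.10, 0.65])

# Pearson correlation: the off-diagonal entry of the 2x2 correlation matrix.
pearson = float(np.corrcoef(predicted, gold)[0, 1])
print(round(pearson, 3))
```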
You can find answers to frequently asked questions on our Wiki.
If you use any of our pre-trained models in your application, please cite the following paper:
- Chen Q, Peng Y, Lu Z. BioSentVec: creating sentence embeddings for biomedical texts. 2018. arXiv:1810.09302.
This work was supported by the Intramural Research Programs of the National Institutes of Health, National Library of Medicine. We are grateful to the authors of fastText, sent2vec, MayoSRS, UMNSRS, BIOSSES, and MedSTS for making their software and data publicly available.