
BioBERT: a pre-trained biomedical language representation model for biomedical text mining #12

Labels: Language Understanding, Word Representation

tm4roon commented Oct 12, 2019

Proposes BioBERT, a model obtained by further pre-training BERT (itself trained on Wikipedia and BooksCorpus) on large-scale biomedical text (PubMed, PMC). BioBERT outperforms BERT on biomedical named entity recognition (NER), relation extraction, and question answering, demonstrating the value of pre-training on in-domain biomedical text.

Paper information

  • Authors: Jinhyuk Lee, Wonjin Yoon, Sungdong Kim, Donghyeon Kim, Sunkyu Kim, Chan Ho So, and Jaewoo Kang
  • Link: https://arxiv.org/abs/1901.08746
  • Venue: arXiv (2019)