Large Language Models (LLMs) have showcased remarkable natural language understanding capabilities across various domains. These models usually perform well in daily dialogue or question-answering scenarios; however, in areas that value precision, for example, in medical applications, they often exhibit unsatisfactory performance due to a lack of domain-specific knowledge. In this report, we introduce PMC-LLaMA, an open-source language model acquired by fine-tuning an open-source language model on a total of 4.8 million biomedical academic papers to further inject medical knowledge and enhance its capability in the medical domain. Our preliminary evaluations are conducted on three biomedical QA datasets, including PubMedQA, MedMCQA, and USMLE, showing that our fine-tuned model, i.e., PMC-LLaMA, demonstrates a better understanding of biomedical domain-specific concepts, thus achieving high performance on QA benchmarks. The model and codes, along with an online demo, are publicly available.
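The multiple-choice QA benchmarks mentioned above (MedMCQA, USMLE) are commonly scored by having the model rate each answer option and picking the highest-scoring one, then computing accuracy over the dataset. The sketch below illustrates that scoring scheme with made-up per-option scores; the score values and helper names are illustrative assumptions, not part of the PMC-LLaMA codebase.

```python
# Illustrative sketch of likelihood-based multiple-choice QA scoring.
# The per-option scores below are hypothetical stand-ins for a real
# model's log-likelihoods of each answer option.

def pick_answer(scores):
    """Return the index of the highest-scoring answer option."""
    return max(range(len(scores)), key=lambda i: scores[i])

def accuracy(predictions, gold):
    """Fraction of questions answered correctly."""
    correct = sum(p == g for p, g in zip(predictions, gold))
    return correct / len(gold)

# Toy per-option scores for three questions (assumed values).
option_scores = [
    [-2.1, -0.3, -4.0, -3.5],  # question 1: option 1 scores highest
    [-1.0, -2.2, -0.8, -3.1],  # question 2: option 2 scores highest
    [-0.5, -1.9, -2.4, -0.7],  # question 3: option 0 scores highest
]
gold_answers = [1, 2, 0]

preds = [pick_answer(s) for s in option_scores]
print(accuracy(preds, gold_answers))  # → 1.0
```

In practice the scores would come from the language model's log-probability of each option conditioned on the question, often length-normalized, but the selection and accuracy computation look the same.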