[ACL 2024 Findings] MedAgents: Large Language Models as Collaborators for Zero-shot Medical Reasoning
A large-scale (194k) Multiple-Choice Question Answering (MCQA) dataset designed to address real-world medical entrance exam questions.
A novel family of medical large language models with 13B/70B parameters that achieve SOTA performance on various medical tasks.
Medical Question-Answering datasets prepared for the TREC 2017 LiveQA challenge (Medical Task)
The First Generate-then-Read Framework for Multiple-Choice Question Answering in Medicine
The use of ChatGPT in the healthcare sector
A Columbia University capstone project focused on mitigating hallucinations in Medical Question Answering systems using Retrieval-Augmented Generation (RAG), ElasticSearch, and LLM-based validation.
An implementation of question answering in the medical domain using the DeepSeek-R1 model.
Persian Question Answering Dataset
Medquery is designed to help healthcare professionals access accurate, up-to-date medical information quickly and efficiently, and receive patient-specific, evidence-based guidance that supports informed decision-making.