MedChain: Bridging the Gap Between LLM Agents and Clinical Practice through Interactive Sequential Benchmarking

🤖 Homepage | 🤗 Dataset | 📖 arXiv

Jie Liu1*, Wenxuan Wang1*, Zizhan Ma2*, Guolin Huang1, Yihang Su3, Kao-Jung Chang4, Wenting Chen1✉, Haoliang Li1✉, Linlin Shen3✉, Michael Lyu2✉

1City University of Hong Kong  2Chinese University of Hong Kong  3Shenzhen University  

4National Yang Ming Chiao Tung University   5Taipei Veterans General Hospital  

This repository is the official implementation of the paper MedChain: Bridging the Gap Between LLM Agents and Clinical Practice through Interactive Sequential Benchmarking.

🚀Overview

In this paper, we introduce MedChain, a novel benchmark designed to bridge the gap between Large Language Model (LLM) agents and real-world clinical decision-making (CDM). Unlike existing medical benchmarks that focus on isolated tasks, MedChain emphasizes three core features of clinical practice: personalization, interactivity, and sequentiality.

MedChain comprises 12,163 rigorously validated clinical cases spanning 19 medical specialties and 156 sub-categories, including 7,338 medical images with reports. Each case progresses through five sequential stages: 1️⃣ Specialty Referral 2️⃣ History-taking 3️⃣ Examination 4️⃣ Diagnosis 5️⃣ Treatment
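To make the sequential setup concrete, below is a minimal sketch of how a single case could be walked through the five stages, with each stage's answer appended to the context seen by the next stage, so an early mistake propagates downstream. All names here (Case, llm_predict, run_case) are hypothetical illustrations, not the benchmark's actual API.

    from dataclasses import dataclass, field

    STAGES = ["specialty_referral", "history_taking", "examination", "diagnosis", "treatment"]

    @dataclass
    class Case:
        """Hypothetical container for one MedChain-style case."""
        patient_profile: str                              # personalized patient background
        stage_inputs: dict = field(default_factory=dict)  # per-stage prompts / observations
        predictions: dict = field(default_factory=dict)   # model outputs, filled stage by stage

    def llm_predict(prompt: str) -> str:
        """Stand-in for a call to whichever LLM or agent is being evaluated."""
        return f"<model answer for {prompt.splitlines()[-1][:40]}>"

    def run_case(case: Case) -> dict:
        context = case.patient_profile
        for stage in STAGES:
            prompt = f"{context}\n[{stage}] {case.stage_inputs.get(stage, '')}"
            answer = llm_predict(prompt)
            case.predictions[stage] = answer
            # Sequentiality: later stages see earlier answers, so a wrong referral
            # or missed history item can carry through to diagnosis and treatment.
            context += f"\n{stage}: {answer}"
        return case.predictions

Calling run_case(Case(patient_profile="65-year-old with chest pain ...")) would then return one answer per stage, which is the granularity at which MedChain scores a model.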

To address the challenges of MedChain, we propose MedChain-Agent, a multi-agent framework integrating:

  • Three specialized agents (General, Summarizing, Feedback) for collaborative decision-making.
  • MedCase-RAG, a retrieval-augmented module that dynamically expands a structured medical case database (12D feature vectors) for context-aware reasoning (a minimal retrieval sketch follows this list).
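As a rough illustration of the MedCase-RAG idea (the real module lives in main.py and is described in the paper), the sketch below stores cases as 12-dimensional feature vectors, retrieves the most similar ones by cosine similarity, and grows the database as new cases are added. The CaseDatabase class and the random placeholder vectors are assumptions for illustration only; they do not reproduce the module's actual feature encoding.

    import numpy as np

    class CaseDatabase:
        """Toy stand-in for a MedCase-RAG-style store of 12-D case vectors."""

        def __init__(self, dim: int = 12):
            self.dim = dim
            self.vectors = np.empty((0, dim))
            self.cases = []

        def add(self, case_text: str, features: np.ndarray) -> None:
            """Dynamically expand the database with a newly processed case."""
            assert features.shape == (self.dim,)
            self.vectors = np.vstack([self.vectors, features])
            self.cases.append(case_text)

        def retrieve(self, query: np.ndarray, k: int = 3) -> list:
            """Return the k stored cases most similar to the query vector."""
            if not self.cases:
                return []
            sims = self.vectors @ query / (
                np.linalg.norm(self.vectors, axis=1) * np.linalg.norm(query) + 1e-8
            )
            return [self.cases[i] for i in np.argsort(-sims)[:k]]

    # Placeholder usage: real feature vectors would come from a structured
    # encoding of the case (the paper's 12 dimensions), not random numbers.
    db = CaseDatabase()
    db.add("case A: pediatric fever ...", np.random.rand(12))
    db.add("case B: adult chest pain ...", np.random.rand(12))
    similar = db.retrieve(np.random.rand(12), k=1)

The retrieved cases would then be placed in the agent's prompt as additional context before it answers the current stage.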

Key Results:

  • MedChain-Agent outperforms state-of-the-art models (e.g., GPT-4o, Claude-3.5) with an average score of 0.5269 across tasks, showcasing superior adaptability in sequential CDM.
  • Ablation studies confirm the critical roles of feedback mechanisms and MedCase-RAG.
  • The benchmark exposes limitations in existing LLMs, with single-agent models scoring ≤0.4327 due to error propagation in sequential stages.

MedChain sets a new standard for evaluating AI in clinical workflows, highlighting the need for frameworks that mirror real-world complexity while enabling reliable, patient-centric decision-making. The dataset and code will be released publicly to foster progress in medical AI.

overview

📦Code

  1. You can find the workflow code for the five core tasks of MedChain in the task_framework directory; each task can be tested individually:

    cd task_framework
    python task1_triage.py         # Specialty Referral
    python task2_interrogation.py  # History-taking
    python task3_image.py          # Examination
    python task4_diagnosis.py      # Diagnosis
    python task5_treatment.py      # Treatment
  2. You can locate the core workflow code for MedChain-Agent, MedCase-RAG, and the feedback mechanism in the main.py file, and run tests to evaluate MedChain-Agent.

    python main.py
  3. Below are the comparative and ablation study results for MedChain-Agent.

comparison

For more details, please see our paper.

🔍 Insights

  1. Sequential clinical decision-making exposes critical gaps in current AI systems, with single-agent models achieving only 43.27% average accuracy (Claude-3.5), while MedChain-Agent improves performance to 52.69% through multi-agent collaboration and error mitigation.
  2. Structured medical knowledge retrieval (MedCase-RAG) drives significant performance gains, contributing an 8.11% improvement in clinical task accuracy by enabling dynamic case matching through 12-dimensional feature vectors.
  3. Iterative feedback mechanisms are pivotal for clinical reasoning, reducing error propagation and boosting average scores by 3.73% through continuous refinement across sequential stages (a minimal sketch of such a loop follows this list).
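To illustrate what such a feedback loop can look like, here is a minimal sketch in which a hypothetical feedback agent critiques a draft answer and the general agent revises it for a bounded number of rounds. Both agent functions are placeholders and do not reflect the actual prompts or stopping rule used in main.py.

    from typing import Optional

    def general_agent(prompt: str, critique: Optional[str] = None) -> str:
        """Placeholder: produce (or revise) an answer for the current stage."""
        if critique is None:
            return f"draft answer for: {prompt}"
        return f"revised answer addressing: {critique}"

    def feedback_agent(prompt: str, answer: str) -> Optional[str]:
        """Placeholder: return a critique, or None if the answer is acceptable."""
        return None if answer.startswith("revised") else "justify the choice against the patient history"

    def answer_with_feedback(prompt: str, max_rounds: int = 3) -> str:
        answer = general_agent(prompt)
        for _ in range(max_rounds):
            critique = feedback_agent(prompt, answer)
            if critique is None:          # feedback agent accepts the answer
                break
            answer = general_agent(prompt, critique)  # refine before moving to the next stage
        return answer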

🎈Acknowledgements

We are particularly indebted to the administrators of the iiyi website for their generosity in allowing us to utilize their data for our research purposes. We would like to acknowledge the assistance provided by Claude-3.5 in proofreading our manuscript for grammatical accuracy and in facilitating the creation of LaTeX tables.

📜Citation

If you find this work helpful for your project, please consider citing our paper.
