ChatGPT-HealthPrompt

"ChatGPT-HealthPrompt" (an acronym for "Prompt Engineering for Healthcare Decision Making using ChatGPT as the Large Language Model") serves as an evaluative framework to analyze the efficacy and potential hazards of OpenAI's GPT-3 model within the field of healthcare decision-making, especially in diagnostic contexts.

This paper has been accepted at the XI-ML workshop at ECAI 2023 (http://www.cslab.cc/xi-ml-2023/).

Link to the paper: https://arxiv.org/abs/2308.09731

You can run the system via the Colab notebook at the following link: [Link]. Please note that you will need to provide your GitHub token and OpenAI API key for the code to execute automatically. If you have any further questions, feel free to contact Athena at fatemeh.nazary@poliba.it.
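
If you run the notebook yourself, a setup cell along the lines of the following sketch is one way to provide the credentials; the variable names and the classic `openai` package usage here are illustrative assumptions, not the notebook's exact code.

import os
from getpass import getpass

# Illustrative setup sketch (assumed names, not the exact notebook code).
# Prompt for credentials interactively so they are never stored in the notebook.
os.environ["GITHUB_TOKEN"] = getpass("GitHub token: ")
os.environ["OPENAI_API_KEY"] = getpass("OpenAI API key: ")

import openai
openai.api_key = os.environ["OPENAI_API_KEY"]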

@article{DBLP:journals/corr/abs-2308-09731,
  author       = {Fatemeh Nazary and
                  Yashar Deldjoo and
                  Tommaso Di Noia},
  title        = {ChatGPT-HealthPrompt. Harnessing the Power of {XAI} in Prompt-Based
                  Healthcare Decision Support using ChatGPT},
  journal      = {CoRR},
  volume       = {abs/2308.09731},
  year         = {2023},
  url          = {https://doi.org/10.48550/arXiv.2308.09731},
  doi          = {10.48550/arXiv.2308.09731},
  eprinttype   = {arXiv}
}



📊 Research Summary: ChatGPT in Healthcare Decision-Making

🔍 Key Insights:

  1. 📈 ChatGPT initially lags in zero-shot scenarios but gains ground in few-shot scenarios.
  2. 🧠 Incorporating domain-specific knowledge, such as XGB predictions, substantially boosts performance.
  3. ⚠️ Higher rates of False Positives require careful model design and implementation.

🌟 Summary:

Our research dives deep into the role of OpenAI's ChatGPT in healthcare decision-making. While the model starts off trailing traditional ML approaches, it shows remarkable adaptability in few-shot learning scenarios. Domain-specific integration notably elevates its capabilities, at times outperforming classical ML models. However, caution is warranted, particularly given the increased rates of false positives and the variability in results.
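
As a rough illustration of insight 2 above, a prompt that injects a classical model's prediction as domain-specific knowledge could be assembled along these lines; the prompt wording, feature names, and helper function are assumptions for illustration, not the paper's exact templates.

# Illustrative sketch: a few-shot prompt that injects an XGBoost prediction
# as domain-specific context (assumed template, not the paper's exact prompt).
def build_prompt(patient_features: dict, xgb_prediction: str, few_shot_examples: list[str]) -> str:
    examples = "\n\n".join(few_shot_examples)  # labelled cases shown to the model
    features = ", ".join(f"{k}={v}" for k, v in patient_features.items())
    return (
        "You are a clinical decision-support assistant.\n\n"
        f"{examples}\n\n"
        f"Patient data: {features}\n"
        f"An XGBoost classifier trained on this task predicts: {xgb_prediction}\n"
        "Taking this prediction into account, answer 'yes' or 'no': is the patient at risk?"
    )

prompt = build_prompt(
    {"age": 63, "cholesterol": 250, "max_heart_rate": 140},
    xgb_prediction="at risk",
    few_shot_examples=["Patient data: age=45, cholesterol=180, max_heart_rate=170 -> no"],
)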

