Interpretable Machine Learning for COVID-19: An Empirical Study on Severity Prediction Task

Understanding how black-box models make predictions, and what they see in the pandemic.

https://arxiv.org/abs/2010.02006

Slides Available Here

Introduction

The pandemic is a race against time. We seek to answer the question: how can medical practitioners employ machine learning to win this race?

Instead of targeting a high-accuracy black-box model that is difficult to trust and deploy, we use model interpretation that incorporates medical practitioners' prior knowledge to promptly reveal the most important indicators for early diagnosis, and thus win the race against the pandemic.
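As a minimal sketch of this idea, the snippet below ranks clinical indicators by the feature importances of a gradient-boosted tree model. The feature names (Age, CRP, NTproBNP, Temperature) are taken from the indicators discussed in this README, but the patient data here is synthetic — the real cohort is not public:

```python
# Sketch: global feature-importance ranking with a gradient-boosted tree
# classifier, on synthetic "patients" whose severity is driven mainly by
# CRP and NTproBNP, loosely mirroring the indicators highlighted here.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
n = 200
features = ["Age", "CRP", "NTproBNP", "Temperature"]

X = np.column_stack([
    rng.normal(60, 15, n),      # Age (years)
    rng.gamma(2.0, 20.0, n),    # CRP (mg/L)
    rng.gamma(2.0, 300.0, n),   # NTproBNP (pg/mL)
    rng.normal(37.5, 0.8, n),   # Temperature (deg C)
])
# Synthetic label: severe if both inflammation markers are elevated.
y = ((X[:, 1] > 40) & (X[:, 2] > 600)).astype(int)

model = GradientBoostingClassifier(random_state=0).fit(X, y)
ranking = sorted(zip(features, model.feature_importances_),
                 key=lambda kv: -kv[1])
for name, score in ranking:
    print(f"{name:12s} {score:.3f}")
```

On data generated this way, CRP and NTproBNP dominate the ranking — the kind of output a practitioner can sanity-check against prior knowledge before trusting the model.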

Understanding high-accuracy black-box models

In this research, we try to understand why these black-box models make correct predictions. Is it possible to let black-box models speak, telling us how they reach their predictions? Will medical practitioners benefit from these models?

【Correct Predictions】

The neural network makes a correct prediction because it thinks the patient is old, has a high CRP (which indicates a severe viral infection), and a high NTproBNP.

The gradient-boosted trees make a similar correct prediction because they think the patient has a high CRP and NTproBNP, even though the patient shows few symptoms ( = 0).
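A per-patient attribution like the neural-network explanation above can be computed from input gradients. The sketch below does this for a tiny hand-written network whose weights and inputs are made up for illustration (not the paper's model); the saliency of each input is the derivative of the predicted severity probability with respect to that input:

```python
# Sketch: input-gradient saliency for a tiny one-hidden-layer network,
# showing which standardized indicators drove a "severe" prediction.
# All weights and the patient vector are illustrative assumptions.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

W1 = np.array([[0.8, 0.1], [1.2, 0.3], [0.9, -0.2]])  # 3 inputs -> 2 hidden
b1 = np.array([0.0, 0.0])
W2 = np.array([1.5, -0.5])                            # 2 hidden -> 1 output
b2 = -1.0

x = np.array([0.9, 1.4, 1.1])  # standardized [Age, CRP, NTproBNP]

# Forward pass: tanh hidden layer, sigmoid output probability.
h = np.tanh(x @ W1 + b1)
p = sigmoid(h @ W2 + b2)

# Backward pass: dp/dx, the saliency of each input feature.
dp_dz = p * (1 - p)          # sigmoid derivative at the output
dh_da = 1 - h ** 2           # tanh derivative at the hidden layer
saliency = W1 @ (W2 * dh_da) * dp_dz

for name, s in zip(["Age", "CRP", "NTproBNP"], saliency):
    print(f"{name:10s} {s:+.4f}")
```

A positive saliency means increasing that indicator pushes the prediction toward "severe"; the same gradient computation generalizes to deeper networks via backpropagation.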

【Wrong Predictions】

The decision tree unfortunately makes a wrong prediction: it thinks that even though the patient has a fever (38.4), the CRP and NTproBNP are not high enough for the case to be severe.
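A wrong prediction like this one can be inspected by printing the decision path the tree takes for that patient. The sketch below does so for a tree fitted on synthetic data; the feature names match the indicators above, but the thresholds the tree learns are illustrative, not the paper's model:

```python
# Sketch: tracing the decision path of a fitted tree for one patient
# (fever, but only moderate CRP and NTproBNP), on synthetic data.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

features = ["Temperature", "CRP", "NTproBNP"]
rng = np.random.default_rng(1)
X = np.column_stack([
    rng.normal(37.5, 0.8, 300),   # Temperature (deg C)
    rng.gamma(2.0, 20.0, 300),    # CRP (mg/L)
    rng.gamma(2.0, 300.0, 300),   # NTproBNP (pg/mL)
])
y = ((X[:, 1] > 40) & (X[:, 2] > 600)).astype(int)  # synthetic severity

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

patient = np.array([[38.4, 30.0, 400.0]])  # fever, moderate CRP/NTproBNP
node_ids = tree.decision_path(patient).indices
t = tree.tree_
for node in node_ids:
    if t.children_left[node] == t.children_right[node]:  # reached a leaf
        print(f"leaf: predicted class {tree.predict(patient)[0]}")
    else:
        name = features[t.feature[node]]
        op = "<=" if patient[0, t.feature[node]] <= t.threshold[node] else ">"
        print(f"{name} {op} {t.threshold[node]:.1f}")
```

Printing the path makes the failure mode explicit: the tree ignores the fever once the inflammation markers fall below its learned thresholds, which is exactly the kind of rule a clinician can review and challenge.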

Credits

The raw dataset comes from hospitals in China and includes 92 patients who contracted COVID-19. Our Research Ethics Committee waived written informed consent for this retrospective study, which evaluated de-identified data and involved no potential risk to patients. All patient data were anonymized before analysis.

@misc{wu2021interpretable,
      title={Interpretable Machine Learning for COVID-19: An Empirical Study on Severity Prediction Task}, 
      author={Han Wu and Wenjie Ruan and Jiangtao Wang and Dingchang Zheng and Bei Liu and Yayuan Gen and Shaolin Li and Jian Chen and Kunwei Li and Xiangfei Chai and Sumi Helal},
      year={2021},
      eprint={2010.02006},
      archivePrefix={arXiv},
      primaryClass={cs.LG}
}
