Authors: Harry Freitas da Cruz, Boris Pfahringer, Tom Martensen, Frederic Schneider, Alexander Meyer, Erwin Böttinger and Matthieu-P. Schapranow
Abstract
Notwithstanding advances in machine learning-based clinical prediction models, only a few such models are actually deployed in clinical settings, in part because of a lack of validation studies. In this paper, we present and discuss the validation results of a machine learning model for the prediction of acute kidney injury in cardiac surgery patients, initially developed on the MIMIC-III dataset and subsequently applied to an external cohort from an American research hospital. To help account for the observed performance differences, we applied interpretability methods based on feature importance, which allowed experts to scrutinize model behavior at both the global and the local level and thereby gain further insight into why the model did not behave as expected on the validation cohort. The knowledge gleaned during model derivation can be useful in guiding model updates during validation toward simpler and more generalizable models. We argue that practitioners should consider interpretability methods as an additional tool to help explain performance differences and inform model updates in validation studies.