TY - JOUR
T1 - Explaining Machine Learning Predictions of Decision Support Systems in Healthcare
AU - Erdeniz, Seda Polat
AU - Veeranki, Sai
AU - Schrempf, Michael
AU - Jauk, Stefanie
AU - Tran, Thi Ngoc Trang
AU - Felfernig, Alexander
AU - Kramer, Diether
AU - Leodolter, Werner
N1 - Publisher Copyright:
© 2022 The Author(s), published by De Gruyter.
PY - 2022/8/1
Y1 - 2022/8/1
AB - Artificial Intelligence (AI) methods, often based on Machine Learning (ML) algorithms, are increasingly applied in the healthcare domain to provide predictions to physicians and patients based on electronic health records (EHRs), such as histories of laboratory values, applied procedures, and diagnoses. The question “Why should I trust you?” encapsulates the problem with such black-box ML predictions. Explaining the reasons behind these predictions to physicians and patients is therefore crucial, as it allows them to decide whether a prediction is applicable. In this paper, we present and evaluate two model-agnostic prediction explanation methods for healthcare professionals (physicians and nurses): one based on global feature importance and one based on local feature importance. We evaluated user trust and reliance (UTR) in the explanations produced by each method in a user study based on real patients’ EHRs and the feedback of healthcare professionals. We observed that each method has strengths and weaknesses depending on the patient’s data, in particular on the amount of data available for the patient. When a patient’s record is small, global feature importance is sufficient; when it is large, a local feature importance method is more appropriate. As future work, we plan to develop a hybrid explanation method that automatically combines both approaches to achieve higher and more stable results in terms of user trust and reliance.
KW - Artificial Intelligence
KW - Decision Support Systems
KW - Explainable AI
KW - Healthcare
UR - http://www.scopus.com/inward/record.url?scp=85137662205&partnerID=8YFLogxK
U2 - 10.1515/cdbme-2022-1031
DO - 10.1515/cdbme-2022-1031
M3 - Article
AN - SCOPUS:85137662205
VL - 8
SP - 117
EP - 120
JO - Current Directions in Biomedical Engineering
JF - Current Directions in Biomedical Engineering
SN - 2364-5504
IS - 2
ER -