Why we do need Explainable AI for Healthcare

Open Access
Publication date 01-07-2022
Edition v1
Number of pages 11
Publisher ArXiv
Organisations
  • Faculty of Economics and Business (FEB) - Amsterdam Business School Research Institute (ABS-RI)
  • Faculty of Economics and Business (FEB)
  • Interfacultary Research - Institute for Logic, Language and Computation (ILLC)
Abstract
The recent spike in certified Artificial Intelligence (AI) tools for healthcare has renewed the debate around the adoption of this technology. One thread of this debate concerns Explainable AI and its promise to render AI devices more transparent and trustworthy. A few voices active in the medical AI space have raised concerns about the reliability of Explainable AI techniques, questioning their use and their inclusion in guidelines and standards. Revisiting such criticisms, this article offers a balanced and comprehensive perspective on the utility of Explainable AI, focusing on the specificity of clinical applications of AI and placing them in the context of healthcare interventions. Against its detractors, and despite valid concerns, we argue that the Explainable AI research program remains central to human-machine interaction and is ultimately our main tool against loss of control, a danger that cannot be averted by rigorous clinical validation alone.
Document type Preprint
Language English
Published at https://doi.org/10.48550/arXiv.2206.15363