
AI explainability in healthcare

AI explainability is one of the most discussed topics when it comes to applying AI in healthcare. Here, we summarise a recent article, published in BMC Medical Informatics and Decision Making, that provided a comprehensive assessment of the role of explainability in medical AI and the challenges to its adoption into clinical practice.

AI explainability

Artificial intelligence (AI) promises to alleviate current challenges associated with rising healthcare costs. In a healthcare setting, AI typically takes the form of clinical decision support systems (CDSS): systems that help clinicians with diagnosis and treatment decisions. Despite its potential, AI is not a universal solution. Some of the current challenges relate to the technical aspects of AI, while others relate to legal, medical and patient perspectives.

One of the most prominent concerns, particularly in a clinical setting, is explainability. Explainability is often described as a characteristic of an AI-driven system that allows a person to reconstruct why an algorithm arrived at its predictions. While research has indicated that AI algorithms can outperform humans in certain analytical tasks (e.g. pattern recognition in medical imaging), their lack of explainability has been criticised within the medical domain. Without thorough consideration of the role of explainability in medical AI, these technologies may sidestep key ethical and professional principles, which in turn could cause considerable harm.

Perspectives

The technological perspective

Explainability can either be an inherent characteristic of an algorithm or be approximated by other methods. The latter is particularly important for “black-box” models, such as artificial neural networks. While inherent explainability has a crucial advantage over these post-hoc methods, there remains a trade-off between performance and explainability, which poses a major challenge for developers. Nonetheless, explainability methods allow developers to scrutinise their models beyond mere performance and to identify errors before clinical validation, saving time and development costs.
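
To make the distinction concrete, here is a minimal sketch (our illustration, not taken from the article) that contrasts an inherently interpretable model with a post-hoc approximation for a black-box model, using scikit-learn and synthetic data as a stand-in for real clinical features:

```python
# Sketch only: synthetic data and hypothetical features, not the article's method.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic "patient" data: 5 hypothetical features, binary outcome.
X, y = make_classification(n_samples=1000, n_features=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Inherent explainability: each coefficient maps directly onto a feature.
lr = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("Logistic regression coefficients:", lr.coef_[0])

# Black-box model: a small neural network with no directly readable parameters.
nn = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=1000, random_state=0)
nn.fit(X_train, y_train)

# Post-hoc approximation: permutation importance estimates how much each
# feature contributes to the black-box model's predictions.
result = permutation_importance(nn, X_test, y_test, n_repeats=10, random_state=0)
print("Permutation importances:", result.importances_mean)
```

In the first case the explanation is the model itself; in the second, it is an approximation computed after the fact, which is exactly where the performance-versus-explainability trade-off arises.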

The legal perspective

The authors noted that, from a legal perspective, the question arises whether, and if so to what extent, explainability in AI is legally required. To exploit the opportunities of AI, the acquisition, storage, transfer, processing and analysis of data will have to comply with all applicable laws, regulations and further legal requirements. From a legal point of view, the team identified three core fields for explainability: informed consent, certification and approval as medical devices, and liability. Introducing AI technologies into healthcare has significant legal implications, so the tension between innovation and regulation needs careful orchestration.

The medical perspective

The first consideration is what distinguishes AI-based clinical decision support from established diagnostic tools. Clinical validation is currently the first widely discussed requirement for a medical AI system. A particularly important source of error in AI systems is bias; it is therefore important that the training data adequately represent the target population. Explainability also enables disagreements between an AI system and human experts to be resolved; the team noted that this is most likely to succeed in cases of AI bias rather than random error. Explainability may be a key driver for the uptake of AI-driven CDSS in clinical practice, as trust in these systems has not yet been established.
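
As a rough illustration of how such bias might be surfaced (our sketch, not from the article; the subgroup label and data are entirely hypothetical), comparing a model's accuracy across patient subgroups can indicate whether the training data under-represents part of the population:

```python
# Sketch only: synthetic data and a hypothetical subgroup label.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=6, random_state=1)
# Hypothetical subgroup label (e.g. an under-represented patient group).
group = np.random.default_rng(1).integers(0, 2, size=len(y))

X_train, X_test, y_train, y_test, g_train, g_test = train_test_split(
    X, y, group, random_state=1)

model = RandomForestClassifier(random_state=1).fit(X_train, y_train)
pred = model.predict(X_test)

# A large gap between subgroup accuracies would suggest the model serves
# parts of the population unequally, prompting a closer look at the data.
for g in (0, 1):
    mask = g_test == g
    print(f"Subgroup {g} accuracy: {accuracy_score(y_test[mask], pred[mask]):.3f}")
```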

The patient perspective

The question arises of whether the use of AI-powered decision aids is compatible with the inherent values of patient-centred care. Patient-centred care treats patients as active partners in the care process, and shared decision-making is a key component of this. Explainability can provide clinicians and patients with a personalised conversation aid that is based on the patient’s individual characteristics and risk factors.

Conclusion

Explainability is a multifaceted concept with far-reaching implications for various stakeholders. Medical AI poses several challenges for developers, medical professionals and legislators. The study emphasises the necessity of explainability for addressing these challenges and ensuring the ethical and legal implementation of AI. Explainability can help ensure that patients remain at the centre of care, and it can allow patients, together with their clinicians, to make informed and autonomous decisions about their health. Omitting explainability from clinical decision support systems may threaten core ethical values in medicine.


