A recent Comment, published in npj Digital Medicine, discusses the importance of developing a ‘delivery science’ to allow artificial intelligence (AI) to be introduced safely and effectively into healthcare.
How AI fits in a complex healthcare system
As AI has evolved, its potential applications have generated a lot of excitement in healthcare. The bulk of current AI methods use machine learning (ML) to develop prediction and classification models. However, efforts to integrate these models into real-world settings are still in their infancy. For the delivery of AI to be scalable, we must deepen our thinking around how AI fits into the intricacies of healthcare.
Libraries of ML models continue to stack up, with claims of valuable uses for clinical implementation. However, the appropriate uses for these models, and how they will be implemented, remain unclear. The authors summarise this nicely:
“Current efforts to use AI in healthcare often begin with ‘I have a ML model that can accurately predict or classify X’, but then get stuck at ‘how do I use it and for whom?’”
Experts must understand the complexities of delivering a model into clinical use before building it. The authors emphasise that focusing solely on a model’s predictive accuracy does not, by itself, improve care.
A delivery science
The authors propose a delivery science for AI in healthcare that rests on the following principles:
- Healthcare is delivered in a complex system, so AI must accommodate this complexity.
- AI is not an end product but an enabling component of broader solutions.
- Solutions enabled by AI are complex systems of people, processes and technologies.
Implementing AI in healthcare should not just be about deploying ML models; it should be about designing the best possible care delivery system for a given problem. The ML model is just one component of that delivery system.
The authors draw on their experience to highlight the importance of combining data science with other disciplines such as process improvement, design thinking and implementation science. The Figure below summarises the multidisciplinary process for embedding AI systems in healthcare. Understanding the complex system, and what AI will look like in practice, is important before the model is delivered.
The authors note that questions will arise about who is responsible for implementing such delivery systems. They highlight that experts must pay particular attention to implementing quality controls for the models. Like clinical laboratory equipment, these models will require regular recalibration and tuning. Importantly, experts will need to communicate the characteristics of the models appropriately to clinical users.
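To make the quality-control analogy concrete, the sketch below shows one possible drift check: comparing a model’s predicted probabilities against observed outcome rates and flagging the model for recalibration when the gap grows too large. This is an illustrative example, not a method from the Comment; the function names, the use of expected calibration error, and the threshold value are all assumptions.

```python
def expected_calibration_error(probs, labels, n_bins=10):
    """Mean absolute gap between predicted probability and observed outcome
    rate, weighted by the fraction of cases falling in each probability bin."""
    total = len(probs)
    ece = 0.0
    for b in range(n_bins):
        lo, hi = b / n_bins, (b + 1) / n_bins
        # Cases whose predicted probability falls in this bin
        # (the last bin also includes probability exactly 1.0).
        idx = [i for i, p in enumerate(probs)
               if lo <= p < hi or (b == n_bins - 1 and p == hi)]
        if idx:
            avg_prob = sum(probs[i] for i in idx) / len(idx)
            obs_rate = sum(labels[i] for i in idx) / len(idx)
            ece += (len(idx) / total) * abs(avg_prob - obs_rate)
    return ece


def needs_recalibration(probs, labels, threshold=0.05):
    """Quality-control flag: raise the alarm when calibration drift on a
    recent batch of cases exceeds a chosen threshold (0.05 is arbitrary)."""
    return expected_calibration_error(probs, labels) > threshold
```

In a deployed system a check like this would run periodically on recent cases with known outcomes, feeding an operational process that triggers the recalibration and communication steps the authors describe.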
It is clear that AI research needs to move beyond model development and into the real world. A delivery science of AI will need to address how these systems are designed, implemented and evaluated in order to understand how AI can improve healthcare.