
Why You Need to Know How Your Healthcare AI Actually Works


Image created by author © 2024 All Rights Reserved


You're planning to implement AI in your clinical setting to manage a specific condition, but do you actually know how the AI will achieve its intended purpose?


Healthcare is deploying AI at scale. The NHS has published plans and frameworks for the wider adoption and implementation of AI. This can have a hugely positive impact; AI can enhance efficiency, reduce provider workload, increase capacity, and improve patient outcomes. But for the benefits to be realised, the AI has to be designed and implemented with the right model—not just the right machine learning model, but the right operational and clinical model.


That’s where the problem lies. Many people do not understand how the AI does what it does, and therefore what it is actually aiming to achieve. That makes implementing it within the right operational and clinical model difficult.


So it’s important to consider some key questions and keep certain factors in mind.


What cases will the AI handle?


Take skin cancer as an example. “If you use this AI, it can handle 50% of your suspected skin cancer cases, lightening your workload and boosting your efficiency!” 


Sounds like an offer you can't refuse, but is it really? 


The claim is that AI will take care of half of your patients, in particular those that it thinks are unlikely to have cancer. You won’t need to see these cases again, thus reducing your workload.


But which cases will it deal with? Will it deal with the challenging or tricky patient cases you could use some help diagnosing? Probably not. 


More likely, it will deal with the simple stuff: the cases that are quick and easy to evaluate and that, while they account for 50% of your caseload by patient numbers, probably make up less than 20% of your actual workload in clinical time.
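To see why the two figures diverge, here is a minimal back-of-the-envelope sketch in Python; the per-case timings and the 50/50 split are purely illustrative assumptions, not figures from any real service.

# Illustrative assumptions only: not data from any real clinic.
simple_minutes = 5        # a quick, obviously low-risk lesion
complex_minutes = 25      # a diagnostically tricky case

total_cases = 100
simple_cases = 50         # the 50% of caseload the AI offers to take
complex_cases = total_cases - simple_cases

total_time = simple_cases * simple_minutes + complex_cases * complex_minutes
time_saved = simple_cases * simple_minutes

print(f"Share of caseload handled by the AI: {simple_cases / total_cases:.0%}")  # 50%
print(f"Share of workload (time) saved:      {time_saved / total_time:.0%}")     # ~17%

Under these assumed timings, the AI takes half of your cases but less than a fifth of your clinical time, which is why a headline figure of 50% can flatter the real benefit.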


Technically, the technology still reduces your workload, but all of a sudden the cost-benefit analysis is not quite what it first seemed.


If the AI cannot discharge patients, then who will?


You need to remember that, under current AI and healthcare regulations, a ‘machine’ is not able to make a clinical decision on its own. It is there to act as ‘clinical decision support’ and to help clinicians in their assessment.


AI alone cannot make the decision to discharge patients, especially with conditions such as cancer. This means that a human, i.e., a dermatologist in this example, must confirm each and every case that the AI identifies as unlikely to be skin cancer. But who decides which dermatologists provide the ‘second look’? 


And there is a further lingering question: are hospitals paying for each patient the AI manages, or for each patient the ‘second look’ dermatologist reviews?


While this might seem like a minor nuance, it leads to a broader question: are you paying for AI technology, or are you paying for the outsourcing of your non-complex clinical cases?


It may still be of value; outsourcing to the private sector is well embedded in the NHS and, in many cases, is a win-win for all parties.


But you need to know what service you are paying for to ensure it achieves your required aims.

 

Is the AI truly cost-effective?


What if the AI suggests discharging a patient because their lesion is deemed non-cancerous, but the dermatologist taking the ‘second look’ isn't quite convinced?


What happens then? Does the patient end up coming back to the hospital for assessment? And if so, is the hospital dermatologist aware that both the AI and a dermatologist have previously evaluated this patient’s lesion, with differing opinions?


You also need to consider the cost. The hospital pays for each patient the AI manages, yet some (many?) of those cases end up back in its hands, and it then bears the financial burden of diagnosing and managing them all over again.
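As a rough illustration of how the economics can shift (every figure below is an assumption made up for the arithmetic, not a real tariff or contract price), consider what a modest bounce-back rate does to the effective cost of each avoided review:

# Illustrative assumptions only: not real NHS tariffs or contract prices.
fee_per_ai_case = 40          # assumed fee per patient the AI service manages
cost_per_clinic_review = 150  # assumed cost of a face-to-face hospital review

ai_managed_cases = 50
bounce_back_rate = 0.20       # assumed share of AI-managed cases that return

returned_cases = ai_managed_cases * bounce_back_rate
total_spend = ai_managed_cases * fee_per_ai_case + returned_cases * cost_per_clinic_review
reviews_avoided = ai_managed_cases - returned_cases

print(f"Paid to the AI service:  £{ai_managed_cases * fee_per_ai_case:,.0f}")
print(f"Cost of returned cases:  £{returned_cases * cost_per_clinic_review:,.0f}")
print(f"Effective cost per review actually avoided: £{total_spend / reviews_avoided:,.0f}")

With these made-up numbers, each review you actually avoid costs roughly £88 rather than the headline £40 per case, and the effective cost climbs with every case that bounces back into the clinic.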


Is it still reducing your workload? What is the cost-benefit analysis now?


What if it makes a mistake?


Let’s ask one last, and possibly the most expensive, question: what happens if the AI and the ‘second look’ dermatologist both suggest discharging a patient, but the patient is later diagnosed with advanced melanoma and decides to sue? Who is liable?


Presently, the liability sits with the provider who outsourced the care to the AI technology, regardless of whether they have ever seen or reviewed the patient's case.


So if you are the hospital provider, you may be liable for the decisions of a clinician that you neither know, nor have appointed, nor have ever had a direct relationship with.



As with any technology, AI can be extremely beneficial, but don't just take that as gospel for all AI tech. 


If you are going to use AI, and the right technology will vary from one clinical condition to the next, you have to understand how it will realise the benefits you need. How will it increase your capacity? How will it enhance your efficiency? How will it support patient care? To answer these questions, you need to understand not just the AI model but also the clinical and operational model within which it will be deployed. Know exactly what you're paying for. Dig deep. Get down to brass tacks. Scrutinise the data, including your own data once the new AI model is in use.


There are plans to increase the deployment of AI across healthcare, and with good reason: technology can deal with greater volumes and offer greater accuracy at significantly faster speeds than humans.


As this deployment gathers pace, however, we need to start asking the right questions.

