Who could let another opportunity to go to Florida pass by? Apparently not me. Don’t ask; I’m a glutton for long flights from LA to FL, I suppose. But this one was too good to ignore. The CHIME Innovation Summit, hosted at Baptist Health in Jacksonville, was a veritable who’s-who of healthcare IT leaders, all coming together to discuss AI.
Some fantastic success stories were shared, along with a ton of questions about where we go from here.
Three Key Discussions:
Cross-System Partnership is Key
AI isn’t an IT initiative. It’s a new way of thinking, and it has implications across the health system. AI is an opportunity to rethink how work is done, and that work is primarily done by non-IT people – so alignment between IT leaders and the business on where to focus is the most important factor in prioritization.
Another way to look at it: no matter what label we put on them, digital projects are more about change management than they are about technology. Making sure the change is aligned to shared goals and outcomes, that communication is in place, and that people are motivated and trained to embrace it will have more impact than technology selection alone.
ROI Needs to be ‘Real’
Bold claims of ROI or headcount reduction need to be substantiated with real numbers or actions. Saying that a robot can fetch prescriptions or ice chips so that a nurse can focus on more important tasks sounds great, and puts a welcome focus on nursing efficiency – but if the robot’s price is prohibitive, it’s still a questionable idea.
One example discussed was a vendor who made huge claims about staff reductions, but when pushed to commit to the cost avoidance, and to a contract contingent on achieving it, they quickly backed off. If you can’t remove wages, then you haven’t really saved any money.
On the flip side, there is a lot of ‘optimism’ about the cost-saving potential of AI. One estimate was that 30% of hospital staff could eventually be reduced – all from non-clinical staff. Whether that’s realizable is yet to be determined, but there’s a lot of focus on the wages line item.
Fortune Favors the Bold – and the Large
Nobody has AI ‘figured out’ yet. The health systems achieving results are taking risks, moving quickly, and are not afraid to fail fast. We saw some amazing initiatives being accomplished by our hosts, but they have clearly invested in substantial organizational preparation to get here and have impressively strong alignment across the organization.
There is still a lot of risk to address and mitigate: multiple parties, data hosted in different places, and changing environments all make managing digital interventions complicated.
It will be interesting to see how this plays out in mid-sized and smaller systems that struggle for resources, both financially and from a staffing perspective. When you move from ‘fail fast’ to ‘can’t afford to fail at all,’ digital innovation – including AI – takes on a different lens.
Bonus Topic:
Action vs. Alignment?
OK, I threw this one in. But I’ve had a personal connection to exactly this situation.
Recently, a family member went for a routine mammogram screening. She opted for the new ‘AI’ version.
It turns out the AI ‘found’ something the radiologist didn’t identify, and the recommendation was to follow up with an ultrasound. The facility didn’t have ultrasound capacity available for WEEKS. So now imagine you’ve just told a woman she might have cancer, but that she has to wait several weeks to take the next step. (Note: I didn’t let that happen; I forced the issue and she went somewhere else.)
If you’re going to introduce technology that might create more follow-up appointments, does your system have the capacity for them? The alternative is to risk leakage and/or leave a very anxious patient with a miserable experience.
In the end, the ultrasound was negative, so the stress was for nothing.
I’ve told this story a few times now, and recently someone shared an alternate experience with me. A woman visiting the same health system had a radiologist see something concerning, but the AI said it was nothing.
So does it all even out? Turns out, nope. We aren’t willing to accept the liability of the AI being wrong, so the same ultrasound was ordered, with the same wait for availability, and the same stress created.
AI has the potential to do some amazing things in terms of care delivery, cost avoidance, and outcomes…but we need to take a full systems view of implementation. And if we’re going to experiment, our patients might not be the best first place to do it.
Everyone has an opinion on AI – what’s yours?