AI in Healthcare: Between Hype and Healing
- Anupam Kundu
It started at the end of January this year. One careless turn on the ski slopes in the Pyrenees, and soon I was staring at an MRI that looked like a storm: a chaotic swirl of bone, fluid, and mystery under my left knee.
Out of curiosity, I asked ChatGPT to interpret the image. It tried, but the output left me with more questions than answers. It wasn’t wrong, but it wasn’t helpful either. It told me something was “abnormal” - the kind of half-answer that makes you more anxious, not less.
Then came the MRI specialist’s report - precise, methodical, line by line, in French. I translated it, studied it, and suddenly, I understood my own body again. I could speak to the orthopedic surgeon using that insight, not as a passive patient, but as an informed participant in my own care.
That’s when it struck me: this is the real promise of AI in healthcare - not only to diagnose us, but to help us understand what is being diagnosed; not to replace trained professionals, but to make their language more accessible. The irony? Most of the time, it still doesn’t.
Healthcare’s eternal paradox
Medicine lives inside a paradox. It carries the most noble purpose - saving lives - yet it operates within crushing constraints: rising costs, regulatory pressure, and the constant demand from patients for clarity that the discipline itself can rarely provide.
Into this troubled ecosystem, we’ve dropped AI.
The promises of AI are irresistible: faster diagnoses, personalized treatments, predictive alerts that save lives. But the reality is uneven. Some of it truly works - right now, in real hospitals. Much of it doesn’t. And the winners aren’t always the ones we expect.
Where it works
Step into a modern radiology clinic and you’ll probably find AI already at work.
It doesn’t replace the radiologist; it assists them. Algorithms highlight subtle shadows in lung scans, flag suspicious lesions in mammograms, or point out edemas in MRIs that a tired human eye might miss after hours of reading. Think of it as a tireless colleague, quietly catching what fatigue might overlook.
In intensive care units, predictive models monitor patient vitals in real time, warning nurses of sepsis or sudden decline minutes before it happens. The nurse still makes the call - but that alert buys extra lead time, possibly the difference between a close call and a tragedy.
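For a flavor of how such an alert can work under the hood, here is a minimal sketch: a logistic-regression risk score over a few vitals with an alert cutoff. The features, synthetic data, and 70% threshold are illustrative assumptions, not any hospital’s or vendor’s actual model.

```python
# Illustrative ICU early-warning sketch: a logistic-regression risk score
# over a handful of vitals. Features, data, and threshold are invented -
# real sepsis models are trained and validated on large clinical datasets.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

# Synthetic training data: [heart_rate, resp_rate, temp_C, systolic_bp]
X = rng.normal(loc=[85, 18, 37.0, 115], scale=[15, 4, 0.8, 15], size=(500, 4))
# Toy labels: deterioration skews tachycardic, tachypneic, or hypotensive
y = (((X[:, 0] > 100) & (X[:, 1] > 22)) | (X[:, 3] < 95)).astype(int)

model = LogisticRegression().fit(X, y)

def check_patient(vitals, alert_threshold=0.7):
    """Score one reading; flag it for the nurse if risk exceeds the cutoff."""
    risk = model.predict_proba([vitals])[0, 1]
    if risk >= alert_threshold:
        print(f"ALERT: deterioration risk {risk:.0%} - escalate for review")
    else:
        print(f"Risk {risk:.0%} - continue routine monitoring")

check_patient([118, 26, 38.4, 88])  # tachycardic, tachypneic, hypotensive
check_patient([78, 16, 36.9, 120])  # unremarkable vitals
```

The design point is the division of labor: the model surfaces a probability early, and the clinician decides what to do with it.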
Beyond the clinical frontlines, many hospitals are now using AI to schedule surgeries, balance staff shifts, and manage beds more efficiently. Not glamorous — but if you’ve ever spent ten hours waiting in an ER, you know how much operational efficiency matters.
Where illusions persist
Then come the mirages. A few years ago, consumer apps promised to diagnose skin rashes from a phone photo - “dermatology for everyone.” Most vanished quietly after regulators and doctors lost faith in their accuracy.
But in the UK, a new generation of regulated tools is rewriting the story. The NHS is testing DERM, an AI system from Skin Analytics that helps specialists prioritize high-risk skin lesions. The difference is discipline: every result passes through clinical workflows, with human oversight under NICE’s “use-and-learn” framework.
It’s no longer “AI replaces the dermatologist.” It’s AI helps the dermatologist see the right patient sooner.
And that dream of personalized medicine - bespoke treatments built from your genome - remains largely aspirational. We’re getting there, but it’s not yet a daily clinical reality.
Even Big Tech has learned that “almost ready” isn’t ready enough.
In 2025, Microsoft researchers reported that their AI Diagnostic Orchestrator (MAI-DxO) could outperform physicians on controlled test cases from The New England Journal of Medicine. Yet instead of declaring victory, they hit pause - insisting that no model, however accurate, should enter clinics without explainability, liability clarity, and regulatory oversight.
That restraint is what real maturity looks like: power balanced by humility.
The investment momentum
Despite the setbacks, the money hasn’t stopped. 💰 The global AI-in-healthcare market now hovers around $25 billion and is projected to reach $180 billion by 2030 - roughly 35–40% compound annual growth.
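As a quick sanity check on that growth figure (assuming a 2024 baseline and a 2030 target), the implied compound annual growth rate works out like this:

```python
# Implied CAGR from ~$25B to ~$180B; assumes 2024 baseline, 2030 target.
base, target, years = 25, 180, 6
cagr = (target / base) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # ~39.0%, inside the 35-40% range
```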
Why the optimism? Because the returns are becoming visible:
🩻 Imaging and diagnostics - faster reads, fewer misses.
⚕️ Risk prediction and monitoring - earlier interventions, lower mortality.
🏥 Operational efficiency - shorter stays, better staff utilization.
🧾 Administrative automation - less paperwork, more patient time.
💊 Drug discovery - faster molecule design and safer trial simulations.
In 2024 alone, investors poured over $5 billion into AI-driven healthcare startups, and Big Tech and Big Pharma are doubling down as the evidence builds.
Pharma companies are now using AI not just to find molecules but to predict toxicity and safety long before human trials. Regulators, such as the FDA, have encouraged this approach, aiming to reduce animal testing and accelerate the time-to-market. The message is clear: innovate fast, but prove safety faster.
Europe’s quiet advantage
While the U.S. sprints ahead, Europe is building differently - slower, but sturdier.
The European Health Data Space (EHDS), which came into force in March 2025, establishes a framework for the secure and interoperable sharing of health data across EU countries.
Over the next few years, its rules for the secondary use of data - for purposes such as research and innovation - will be introduced in stages, with most becoming fully operational by the end of the decade. Once in place, these provisions will enable access to anonymized health data for research under strict ethical and security safeguards.
Layered on top is the EU AI Act (2024), the world’s first horizontal AI law. For high-risk domains like healthcare, it enforces transparency, human oversight, risk management, and traceability.
And when AI goes wrong? New “no-fault liability” rules are being explored — ensuring patients receive compensation without endless courtroom fights.
Europe’s approach may be slower, but it’s building something the U.S. still struggles with: trust. And in healthcare, everyone knows that trust is the ultimate currency.
Barriers to adoption: Trust and Fit
The biggest obstacles to AI in healthcare aren’t technical - they’re human.
Bias in training data and poor alignment with clinical workflows both erode trust. When algorithms under-represent women, older adults, or people of color, their recommendations become unreliable. When tools interrupt a clinician’s rhythm with alerts and dashboards that don’t fit real practice, even accurate models are abandoned.
True adoption depends on trust, and trust comes from fairness, transparency, and seamless integration into how care is actually delivered.
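One concrete practice that addresses the bias half of this problem is a subgroup audit: before deployment, break the model’s performance down by demographic group and look for gaps. A minimal sketch, with invented labels and predictions rather than real clinical data:

```python
# Minimal subgroup audit sketch: compare sensitivity (recall) across groups.
# Records are invented; real audits run on held-out clinical validation sets.
from sklearn.metrics import recall_score

# Hypothetical validation records: (group, true_label, model_prediction)
records = [
    ("female", 1, 0), ("female", 1, 1), ("female", 0, 0), ("female", 1, 0),
    ("male",   1, 1), ("male",   1, 1), ("male",   0, 0), ("male",   1, 1),
]

for group in sorted({g for g, _, _ in records}):
    y_true = [t for g, t, _ in records if g == group]
    y_pred = [p for g, _, p in records if g == group]
    print(f"{group}: sensitivity {recall_score(y_true, y_pred):.0%}")
# female: 33% vs male: 100% - a gap this large means the model misses
# disease far more often in one group, a red flag for the training data
# rather than for the clinicians using it.
```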
The AI Pathway Lens: Winning with practice, not hype
From the AI Pathway Framework — Vision → Strategy → Enablers — here’s how organizations can turn promise into practice:
Start small, aim high. Scan end-to-end workflows, then pick three to five use cases that matter - measurable in cost, time, or quality, and already backed by early evidence. Focus builds credibility.
Map the data. Know what you have - EHRs, PACS, claims, staffing data — and how to access it securely. No clarity, no project.
Design for safety and explainability. Align with the EU AI Act and MDR. Include human-in-the-loop checks, incident tracking, and versioned model registries.
Embed AI in real workflows. Integrate into existing systems — PACS, EHR inboxes, OR schedulers. Every extra click is a step toward failure.
Treat pilots as investments, not experiments. Define success thresholds upfront - say, 15–25% faster reads or 5–10% throughput gains. If they’re met, scale. If not, stop, learn, restart. (A minimal go/no-go sketch follows below.)
Plan for feedback loops. Every model drifts. Continuous monitoring and retraining keep “once good” from turning into “outdated and unsafe.”
That’s the AI Pathway, the road from project to practice.
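To make the “pilots as investments” step concrete, here is that go/no-go gate as a minimal sketch; the metric names and measured values are invented for illustration:

```python
# Minimal go/no-go gate for an AI pilot, using the example thresholds above.
# Metric names and measured values are illustrative assumptions.
THRESHOLDS = {
    "read_time_reduction": 0.15,  # at least 15% faster reads
    "throughput_gain": 0.05,      # at least 5% more cases per day
}

pilot_results = {
    "read_time_reduction": 0.22,  # measured: 22% faster reads
    "throughput_gain": 0.04,      # measured: 4% gain - below the bar
}

def gate(results, thresholds):
    """Scale only if every success threshold defined upfront is met."""
    misses = [m for m, bar in thresholds.items() if results.get(m, 0.0) < bar]
    if not misses:
        return "SCALE: all success thresholds met"
    return "STOP & LEARN: below threshold on " + ", ".join(misses)

print(gate(pilot_results, THRESHOLDS))  # -> STOP & LEARN: ... throughput_gain
```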

The Reality Check
So, is AI in healthcare working?
✅ Yes - in imaging, risk prediction, documentation, and operations.
❌ Not yet - in self-diagnosis, unsupervised treatment, or instant genomic personalization.
The truth lives between the extremes of hype and despair. AI isn’t a miracle cure or a dead end. It’s a slow, careful transformation, one validated use case at a time. And maybe, just maybe, that’s exactly the pace healthcare deserves.
According to Deloitte’s 2025 survey, over 80% of health-system executives now expect generative AI to materially reshape their operations within two years. The question isn’t if AI will work; it’s how responsibly it will be allowed to.
Technology matters. But governance, empathy, and patience matter more. The winners will be those who move slow enough to be trusted and fast enough to make a difference.