March 13, 2026
The expectation game: Why AI promises in healthcare matter more than you think
This blog is produced as a part of the Reshaping Work Fellowship Programme. The opinions and views expressed in this publication are those of the author. They do not purport to reflect the opinions or views of Reshaping Work or organisations that have supported the programme.
Expectations are not separate from the real world. They shape investment, they shape how professionals prepare, and they shape whether technologies ever actually reach patients. The gap between what AI promises and what it delivers isn't just a PR trick. It's a pattern that makes healthcare innovation harder than it needs to be. Finding solutions requires honesty about the present and thoughtfulness about the future.
In my previous piece, I argued that AI in healthcare doesn't replace work but reshapes it. I explored how this process is often messier, slower, and less transformative than you might think. Healthcare is a particularly stubborn system, and for good reason. When lives are on the line, caution is a feature, not a bug. Paraphrasing an interviewee: there's a reason the Hippocratic Oath starts with "first, do no harm."
After years of researching AI in hospitals, I still see few examples of mature AI genuinely reshaping medical work. The research literature largely agrees with that assessment: AI adoption in hospitals is still limited (here, here, here, and here). And yet, if you look at investment flows, government strategies (from the Netherlands to Norway), WHO reports, and industry whitepapers, AI seems to be everywhere.
You might call this a classic hype bubble. But I want to draw your attention to something more specific: the difficult position companies find themselves in when they need to promise enough, while still being realistic.
The balancing act
"Promissory organisations", companies whose product is a convincing vision of the future, are nothing new. In my (upcoming) research, I've noticed a two-sided pattern: generating excitement with big claims, and then scaling back when reality sets in. A study by Watson & Wozniak-O'Connor captures this tension beautifully. They found that healthcare algorithms are promised to change everything, yet also to be frictionless and respectful of professional autonomy.
Take vocal biomarkers, that is, features of the voice that promise to reveal health issues. Companies in this space claim more efficiency, reliability, and accuracy than existing methods. The phrase "voice is the new blood" captures the ambition: your voice becomes an objective health indicator. But behind the scenes, many of these same companies align themselves tightly with existing clinical standards. They train their algorithms on current diagnostic categories: the fuzzy, contested DSM classifications [1] used in mental health. Not because those categories are ideal, but because that's what regulators, insurers, and hospitals understand. Developers say they want to transform practice while designing to fit into the system.
Why the gap actually matters
The pressure to overpromise comes from structural causes. Healthcare AI startups, which characterise the field, face what investors call the "valley of death": a multi-year, capital-intensive journey from prototype to clinical deployment. And because investments in this sector are often driven by expectations rather than proven value, the imagined future of a technology can itself generate momentum – at least for a while. The problem is that the bigger the promise, the deeper the valley when reality doesn't match.
We saw this recently with the bankruptcy of Kintsugi, a widely celebrated vocal biomarker company. Its CEO explicitly stated that building AI for healthcare is financially unsustainable for startups. But the consequences go beyond failed startups. When clinical-grade AI proves too difficult to commercialise, developers often pivot to consumer health apps, which face fewer regulatory requirements. This shift means that less rigorous, less evidence-based technology, originally intended for clinical use, reaches a much broader public, often without the same safeguards.
On top of that, promises already change how people work, even before a technology arrives. Carboni's research shows that healthcare professionals begin anticipating and adapting to innovations well before those innovations are in use. They adjust workflows, expectations, and mental models based on what they've been told is coming. And Stevens found that professionals flexibly imagine a technology as moulding to their own needs. The cold shower comes when the real product arrives and can't deliver on that imagined flexibility.
Attempted recommendations
In the absence of a magic fix, three areas need improvement. First, investors and businesses could take a longer-term view. The question isn't how big the promise is, but whether the technology realistically fits how the system works. Working with the system, rather than trying to leapfrog it, might not make for exciting pitch decks, but it might sustain companies through the valley of death. For businesses, the lesson is that expectations, too, can backfire.
Second, more hands-on education for professionals matters. Research shows that direct experience with technology helps form realistic expectations.
Third, academics and researchers could do a better job of engaging constructively with expectations. Being critically optimistic is hard, but useful. Researchers could offer realistic and empirical accounts to governments, institutions, and the public to get a more grounded, honest perspective on AI.
This article was first published on Reshaping Work Insights on March 10, 2026.
© Image: Edit of Elena Koycheva's (@lenneek) photo at Unsplash.com