Health / Opinion
November 20, 2025

Labour politics and why AI hasn't 'fixed' healthcare

There’s a popular belief that AI will fix everything: that it’s unstoppable, fair, and won’t change what matters. In the Netherlands, former health minister Agema fell for this belief too when she proposed spending €800 million on AI to solve healthcare’s problems: ageing populations, staff shortages, and overworked employees.

In this article, I want to show why the expectation that AI will take over healthcare is misguided, and explain what really happens to healthcare workers when technology arrives: work is transformed and redistributed.

Who runs the world? Data.

During my time researching in the hospital, I saw how central the Electronic Health Record has become. Doctors spent most of their time on computers, and the data controlled clinical conversations.

Originally, standardised patient records came about because hospitals wanted more control over quality and costs. With AI both needing and creating more data, this has only intensified: more AI means more focus on collecting data.

Here’s an example: Electronic Health Records were supposed to reduce doctors’ paperwork. But studies found that doctors often passed this work on to nurses instead. The effects are mixed. Nursing work that was previously invisible now gets documented, but documentation takes time away from actually caring for patients. Data collection can help and hurt a profession at the same time; the difference is that when technology is forced on you, it’s hard to make it work in your favour. A whole new job was even created, the “medical scribe”: often low-paid workers who write down what doctors do. This shows the nature of labour politics: a doctor gets help, while a nurse is expected to adapt.

The little man in the box

I’m trained as a data scientist, and during my research I programmed algorithms myself. I started out excited, but soon faced a reality I’d forgotten: I had no idea what the data actually meant. Figuring it out takes endless plotting, asking questions, googling, and guessing.

People usually don’t see all the human work behind technology. Kempeneer & Heylen found something political here: salespeople and data scientists often can’t explain the technical details, but this very inexplicability gives them the power to sell the technology as a miracle solution, as we saw in the Netherlands and Denmark. Nobody really knows what’s happening inside the black box, yet we assume it’s objective.

Good research has shown what’s actually inside the black box. It’s not magic. We see data annotators struggling to keep the subjective and the objective separate, and data scientists trying to understand the messy reality of healthcare while negotiating with doctors to make things work. Algorithms aren’t objective or automatic; they’re full of human judgment and labour.

The promised land of AI

I’ve been observing the hospital for a while, and I still wonder where the algorithms are. I ask a nurse, who explains: the computer that runs the algorithm is separate from the others, it’s annoying to use, and the algorithm is mostly a “doctor’s toy,” so nurses don’t bother with it.

We often imagine that managers install a technology, the technology reorganises work, and change happens. While this might work elsewhere (spoiler: it doesn’t), it’s especially wrong in healthcare, with its strong professional cultures and high stakes. At the end of the day, a human is responsible for every decision.

Studies show that work doesn’t stop when algorithms arrive; it transforms. Doctors don’t just follow what AI tells them, and neither do nurses in newborn units or psychiatric wards. Medical facts aren’t pulled straight from an algorithm; they’re built in relation to the work professionals already do. If the technology doesn’t recognise this, professionals have to fill the gap themselves (we call this “repair work”). This changes not only work but also power dynamics. For example, Elish & Watkins studied a sepsis algorithm and found that nurses had to chase down doctors to remind them about flagged cases, speak up against doctors, and manually piece cases together to make sense of the algorithm’s output. Algorithms don’t just change work; they change what it means to be a nurse and what healthcare is supposed to provide.

Breaking the fairy tale

From the examples discussed, I hope to have convinced you of two points.

Firstly, technology doesn’t just appear. It comes from industry pressure, government decisions, social trends, and power dynamics. Agema’s €800 million investment is a perfect example. Believing deterministic promises stops us from seeing who creates technology and who is subjected to the changes it brings.

Secondly, technology isn’t neat or planned. It involves improvisation, pushback, and the whole organisation, not just managers. The invisible “data work” is done by medical scribes, documentation specialists, data workers, data scientists, nurses, and doctors. If we don’t recognise this work, we risk expecting too much from the system and giving too little credit to the humans who actually make it work.

This blog is produced as a part of the Reshaping Work Fellowship Programme. The opinions and views expressed in this publication are those of the author. They do not purport to reflect the opinions or views of Reshaping Work or organisations that have supported the programme.

This article was first published on Reshaping Work Insights on November 19, 2025.

© Image: Unsplash/Piron Guillaume
