
Article

From hype to practice: how healthcare is really using AI

Gisele Schout

March 11, 2026


During a joint session organized by AIC4NL and Reaktor, healthcare professionals, researchers, and technologists came together to discuss AI in everyday healthcare practice. The common thread: AI is no longer a distant promise, but adopting it requires realism, governance, and a clear vision of the role of humans.

AI in the consultation room: here longer than we think

Patients have been using Google or information platforms such as Thuisarts for years to look up their symptoms. Now they also ask ChatGPT or Claude. Healthcare staff do the same behind the scenes: quickly checking something, validating information. Jasper Wognum, CEO at AI Salon and one of the speakers of the day, described this as the starting point.

“AI isn’t something that still needs to arrive; it’s already here.”

What is changing is the nature of that AI. Jasper explained the difference between generative AI – reactive systems where you ask a question and receive an answer – and agentic AI, where systems independently plan and execute tasks while collaborating with other systems. That second step is far more impactful and requires greater care. 

At the same time, he added nuance to the hype. Large technology companies such as OpenAI and Anthropic are increasingly focusing on the healthcare sector, and media outlets often turn this into spectacular stories. But not everything presented as a breakthrough actually proves to be one in practice.

Risks healthcare must take seriously

AI is particularly strong at repetitive work: translations, summarizing documents, and generating reports. But there are limits. A computer cannot deliver a terminal diagnosis with the human sensitivity that a physician has. This was one of the risks Jasper mentioned, alongside bias in training data, privacy concerns, and the risk of de-skilling: if healthcare professionals outsource more and more tasks to AI, they may lose knowledge and skills themselves.

The ecological and financial costs of AI were also discussed. The computing power required by large models is significant and is often underrepresented in discussions about AI in healthcare.

The CAIRE Lab: from research to clinical practice

Alexander van Someren, Technical Lead at the CAIRE lab at LUMC, explained how his lab translates AI into clinical reality. The lab operates along four pillars: direct patient care, scientific research, education for the next generation of doctors, and valorization. This broad approach is intentional: AI in a hospital only works if it is embedded across multiple domains simultaneously, rather than being treated as a standalone project in a single department. 

Researcher on AI use: more widespread than expected, but rarely discussed

Physician-researcher Lodewijk Pet, who studies the quality and integrity of biomedical research, uses AI daily. He builds agents for specific tasks ranging from managing his calendar to analyzing large volumes of literature.

His research among PhD candidates shows an interesting pattern: AI was used more frequently than expected, yet it was hardly discussed.

There appears to be an unspoken norm where using AI for text correction is accepted, but having AI write an entire scientific paper is considered unethical.

His recommendation: adapt research methodology to what AI can do. Structure data collection so that small, local models can process the information efficiently, instead of trying to automate existing workflows afterward.

GPT-NL: a European answer to American dominance

Another speaker discussed GPT-NL, the Dutch language model that recently won the Privacy Award. The speaker explained why the Netherlands and Europe do not want to be fully dependent on American or Chinese technology: the issue is autonomy, transparency in training data, compliance with GDPR, and fairer treatment of copyright.

GPT-NL is the first language model in Europe that is fully GDPR-compliant. The model does not aim to imitate human intelligence, but to perform practical tasks in the workplace: searching internal documents, reducing administrative burdens, and acting as a connecting layer between data sources.

A concrete example: KPN uses the model to analyze enormous volumes of chat logs between customers and employees without sending sensitive data abroad. For exactly these kinds of applications, a secure Dutch model is essential – especially in healthcare.

AI Life Clinic: prevention as the starting point

The second half of the afternoon focused on workshops, including one from AI Life Clinic: a holistic AI companion designed to support citizens in improving their lifestyle.

The idea is to bring together the fragmented care surrounding individuals by placing the citizen at the center. General practitioners, community teams, mental health services, hospitals, and municipalities connect around the needs of the individual, allowing them to improve and personalize their specialized services.

Using obesity as a case study, participants explored the concept from different perspectives. The AI can recognize patterns before problems escalate, for example by detecting that someone has been moving less for an extended period and suggesting a small, low-threshold intervention, such as walking to an appointment or joining a free activity nearby.

The discussions revealed sharp tensions. Community teams see opportunities for dashboards that show neighborhood-wide trends. General practitioners see possibilities for improved triage.

But the citizen raises the crucial question: who decides what a “good life” is?

Health insurers warn about the risk that constant health monitoring could lead to hypochondria.

One notable suggestion: perhaps the AI companion should be less friendly, precisely to prevent people from forming a stronger bond with their app than with real people in their surroundings.

Conclusion: AI is already here; the question is how we steer it

The session made one thing crystal clear: the debate about AI in healthcare is no longer about whether it will arrive. It already has.

The challenge is embedding it responsibly, with attention to privacy, bias, de-skilling, and the preservation of human contact.

Whether it is a GDPR-compliant Dutch language model, an AI companion that helps prevent obesity, or an agent analyzing literature for a PhD student, the same lesson keeps emerging: technology only works when governance, ethics, and human considerations are designed into it from day one, not as an afterthought.

