Artificial intelligence is heralded by policymakers as a linchpin for modernizing healthcare. Yet frontline doctors, including Edvard Lekås, a senior physician at the surgical clinic in Växjö, Sweden, warn that current AI tools disrupt workflows, add to stress, and often fail to deliver the expected benefits. As governments double down on ambitious AI healthcare programs, a crucial debate is unfolding between optimism about innovation and caution about premature implementation.
Government Ambition and the Reality on the Ground
In Sweden and across Europe, recent policy goals emphasize an accelerated roll-out of AI across healthcare, viewing it as a remedy for staffing shortages, diagnostic bottlenecks, and rising system costs. Sweden’s government insists the country must excel at healthcare digitization, framing leadership in the field as a global imperative. This optimism has driven public and private investment into AI-driven triage, diagnostics, and administrative support.
But on wards and clinics, medical staff report a starkly different experience:
- Time-Consuming Systems: Doctors like Edvard Lekås recount that AI tools often create more work, not less, forcing staff to navigate multiple, poorly integrated digital systems and to deal with unreliable triage outcomes.
- Added Stress: Physicians worry these systems burden an already overstretched workforce. When AI tools malfunction or misjudge patient risk, especially for patients with complex medical histories, the result is extra administrative work and a heightened risk of safety lapses.
- Tool Maturity Concerns: Many doctors feel they are being made to test “works in progress,” rather than using technology truly ready for clinical application.
“I think it should be ready when it gets to us,” Lekås stated, echoing widespread clinician frustration with tools that shift the burden of technological immaturity onto doctors.

Analysis: Why the Disconnect?
Policy Versus Practice
Decision-makers are swayed by impressive pilot studies and the theoretical potential of AI. For instance:
- Pilot Successes: A Swedish study showed that AI-based triage increased labour productivity among primary care physicians, a result that bolsters the government’s case for wide AI adoption.
- Systemic Challenges: Larger, real-world implementations have exposed deep flaws. Many AI systems are poorly calibrated for diverse populations, fail to integrate with hospital records, and struggle with nuanced clinical judgment, leaving doctors with data-entry drudgery rather than liberation.
Doctors’ Accountability and Legal Risk
Physicians remain legally liable for errors made by AI systems, even when acting in good faith on their guidance. Recent research highlights a dangerous “accountability gap”: when an AI system makes a mistake, it is the doctor, rather than the technology provider or the hospital, who bears responsibility. This puts clinicians in an untenable ethical position, further eroding their trust and enthusiasm.
Public, Patient, and Political Pressures
Large-scale surveys show public ambivalence, too:
- Most patients want doctors’ experience and intuition to remain central, even when AI tools are available.
- Nearly half of Americans report discomfort with providers relying on AI rather than on their own clinical expertise.
Case Study – A Lesson From the UK
The Royal Bolton Hospital in the UK planned to trial an AI-based chest X-ray tool to speed up diagnosis. Despite months of committee reviews and urgent demand during the Covid-19 pandemic, the trial was cancelled when real-world events overtook the technology’s readiness. The case highlights the risk of prioritizing speed and hype over genuine clinical validation, and underscores the importance of robust oversight for AI in high-stakes settings such as healthcare.
Ethical and Practical Roadblocks
- Bias and Inequality: Studies have found that hospital algorithms can inadvertently deepen health disparities, sometimes directing higher-quality care toward certain populations while disadvantaging others.
- Transparency: “Black box” algorithms heighten doctors’ scepticism and weaken accountability, as clinicians are asked to trust decisions they cannot fully understand or verify.
- Integration Nightmares: Outdated, fragmented IT systems compound the problem, especially when AI tools fail to mesh with existing workflows.
What Would a Responsible Path Forward Look Like?
- Rigorous Validation: AI must be “ready” for practice—validated on diverse, real-world data before deployment.
- Shared Accountability: Clear legal and regulatory frameworks must assign responsibility, shifting some of the burden away from individual doctors.
- User-Informed Design: Clinician and patient voices should shape technology priorities, with broad participation in development and procurement decisions.
- Continuous Review: AI tools require constant post-market monitoring to catch failures, adapt to new data, and account for evolving clinical realities.
Conclusion
AI’s promise in healthcare is real but remains largely unfulfilled at the frontlines, where doctors like Edvard Lekås experience the pitfalls first-hand. Without deeper engagement with clinical needs and stronger safeguards for quality, integration, and accountability, government enthusiasm may continue to collide with justified medical scepticism.
For AI to truly serve patients and providers, the next generation of tools must prioritize readiness, transparency, and collaboration over mere novelty, ensuring that technology helps, rather than hinders, those who care for us.
