By Nordic Business Journal – December 1, 2025
In just three years since its public launch, OpenAI’s ChatGPT has become a ubiquitous presence in daily life—reaching an estimated 10% of the global population. For many users, the AI chatbot serves not only as a productivity tool but increasingly as an informal confidant, therapist, and emotional sounding board. Yet this rapid integration into personal mental health support raises urgent ethical and psychological concerns, even as Nordic governments simultaneously accelerate the deployment of AI in public safety systems.
The Rise of AI as Emotional Support—and Its Hidden Costs
Reports are mounting of users engaging in marathon conversations with ChatGPT—sometimes spanning dozens of exchanges across multiple sessions—seeking solace, advice, or validation. Disturbingly, some of these interactions have reportedly culminated in emotional distress when OpenAI updated its underlying models to generate more “engaging” and “human-like” responses. In certain cases, users experienced confusion, dependency, or even psychological destabilisation when the AI’s tone, memory, or conversational continuity shifted unexpectedly between updates.
Dr. Louise Lind, a licensed clinical psychologist and researcher at Stockholm University, cautions that while AI can offer accessible, low-barrier support—particularly in regions with strained mental health infrastructure—it must not be mistaken for professional care. “AI can be a useful complement, especially for initial emotional triage or companionship,” she explains. “But it lacks empathy, clinical judgment, and ethical accountability. Relying on it as a primary mental health resource may delay proper diagnosis or intervention.”
Dr. Lind and other experts are calling for urgent, peer-reviewed research into the long-term psychological effects of human-AI emotional dependency, particularly among vulnerable populations such as adolescents and individuals with pre-existing mental health conditions.

Parallel Tracks: AI in Public Safety Gains Legislative Momentum
While concerns grow over AI’s role in private emotional life, the Swedish government is advancing a separate but equally consequential AI initiative: granting law enforcement new powers for real-time facial recognition.
At a press conference last week, Justice Minister Gunnar Strömmer (Moderate Party) announced that the government’s legislative proposal to authorize police use of live facial recognition technology will now be submitted to the Legal Affairs Council (Lagrådet) for constitutional and legal review—a critical step before parliamentary debate.
The stated aim? To enhance the police’s capacity to respond swiftly to serious crimes, including terrorism, violent assaults, and organized criminal activity. “This technology can save significant time and resources,” Strömmer stressed. “It enables authorities to identify and apprehend suspects in real time, potentially preventing further harm.”
However, civil liberties advocates have voiced alarm. Sweden’s Data Protection Authority and organisations like Civil Rights Defenders warn that real-time facial recognition poses profound risks to privacy, freedom of assembly, and non-discrimination—particularly if deployed without strict safeguards, transparency, or independent oversight. Critics also point to documented biases in facial recognition algorithms, which disproportionately misidentify women and people of colour.
A Nordic Balancing Act
Sweden’s dual trajectory—expanding AI in both intimate and institutional spheres—reflects a broader Nordic dilemma: how to harness AI’s transformative potential while safeguarding democratic values and individual well-being.
Unlike the EU AI Act, which classifies real-time biometric identification in public spaces as “high-risk” and largely prohibits its use by law enforcement except in narrowly defined emergencies, Sweden’s proposal appears to carve out broader operational latitude. Meanwhile, mental health regulators have yet to issue formal guidelines on AI-based emotional support tools, leaving a regulatory vacuum that tech companies are rapidly filling.
As the region navigates this complex landscape, experts agree on one point: innovation must be matched by accountability. “We need proactive governance—not just reactive scandal management,” says Dr. Lind. “Whether it’s an AI therapist or a police surveillance system, the human impact must be central to every policy decision.”
The Swedish government expects the Legal Affairs Council to complete its review by early 2026. In the meantime, public debate is intensifying—underscoring a pivotal moment for the Nordic model of responsible digital governance.
