Swedish Employment Agency Halts Chinese AI Pilot Amid Security and Governance Concerns

Sweden’s Public Employment Service (Arbetsförmedlingen) has abruptly shut down an internally deployed AI language model of Chinese origin, raising serious questions about digital governance, national security, and leadership oversight in the public sector.

According to multiple sources cited by Aftonbladet, the agency had installed a powerful large language model (LLM) on its own servers for internal use. The system, reportedly in operation for some time, was terminated immediately after Director General Maria Hemström Hemmingsson became aware of its existence.

The shutdown underscores growing tensions between innovation ambitions and security obligations—particularly for government agencies handling sensitive personal data.

A Pilot Without Executive Oversight

The AI system was reportedly introduced as part of an internal pilot project within the agency’s IT department. Multiple sources describe the initiative as an attempt to position the Employment Service as technologically advanced and competitive in the rapidly evolving AI landscape. However, the project appears to have moved forward without the knowledge or explicit approval of the Director General, a governance lapse that has drawn sharp internal criticism.

“This is extremely thoughtless. It signals a fundamental lack of understanding of the security implications involved,” one informed source told Aftonbladet.

In a written statement, press manager Hans G. Larsson confirmed the existence of the pilot:

“Yes, the Swedish Public Employment Service has had an internal LLM service running in the form of an internal pilot. It is not in active operation and was closed down on 1 December at the direction of the Director General as soon as she was informed.”

The agency declined to comment further, citing an ongoing internal investigation.

The Model: Alibaba’s Qwen 3

According to sources, the AI system in question was Qwen 3, a large language model developed by Alibaba, one of China’s largest technology companies. In public documentation, Qwen is described as powerful, cost-effective, and particularly strong in multimodal capabilities, including image interpretation.

While the model itself is open-source, its training data, governance structure, and geopolitical context raised alarms internally.

Several employees reportedly reacted strongly to the choice of a Chinese-developed model—especially given that, when queried directly, the system itself allegedly warned against use by government authorities.

“It has been unclear what data this model was trained on. In theory, sensitive information about job seekers could be exposed or indirectly transferred outside Sweden,” said one source.

Although the model was reportedly run on the agency’s own servers, experts note that data leakage risks are not limited to external API calls. Model behaviour, updates, embedded dependencies, and staff interaction patterns all present potential vulnerabilities.


A Contradiction in Security Policy

The episode is particularly striking given Arbetsförmedlingen’s recent legal victory in excluding suppliers with Chinese ownership from a computer procurement process. The Administrative Court of Appeal in Stockholm ruled in favor of the agency, citing national security considerations.

That decision reinforced Sweden’s increasingly cautious stance toward Chinese technology in critical public infrastructure. Against this backdrop, deploying a Chinese-developed AI model—even internally—appears contradictory.

For observers, the contrast highlights a broader issue: procurement rules for hardware are often stricter and clearer than those governing software, AI models, and internal experimentation.

Links to Management Dismissals

The AI pilot also intersects with broader turmoil within the agency. Just days before the AI shutdown became public, three senior managers and one employee were dismissed following repeated whistleblower reports of potential irregularities.

All four individuals worked within the IT department, including IT Director Krister Dackland, the agency’s most senior technology executive.

Aftonbladet has previously reported that internal dissatisfaction surrounding an AI initiative designed to aggregate data on job seekers—and potentially their relatives—contributed to the management shake-up. According to sources, the deployment of the Chinese language model was also a factor in the actions taken against the dismissed individuals.

Analysis: Innovation Without Guardrails

For Nordic business leaders and policymakers, the case offers several key lessons:

  1. AI governance is now a board-level issue
    Experimental deployments of powerful AI systems can no longer be treated as isolated IT pilots. They require executive oversight, legal review, and security assessment from the outset.
  2. Geopolitics applies to software, not just hardware
    Open-source availability does not eliminate strategic risk. The origin, ecosystem, and long-term control of AI models matter—especially for public institutions.
  3. Speed-to-innovation must not outrun compliance
    The desire to “keep up” in the AI race can lead to shortcuts that ultimately damage institutional credibility and trust.
  4. Public trust is fragile
    Agencies handling sensitive personal data face a higher standard. Even theoretical risks can have real reputational consequences.

As governments and enterprises across the Nordics accelerate AI adoption, Arbetsförmedlingen’s experience serves as a cautionary tale: technological ambition without robust governance can quickly become a liability.
