Nearly 70% of Nordic companies now permit employees to develop and deploy AI agents—autonomous systems capable of executing tasks and making decisions with minimal human oversight. Yet a stark disconnect persists between corporate ambition and operational reality, according to a new global survey by EY.
The findings, drawn from 975 senior executives across 21 countries—including 120 from the Nordic region—reveal a troubling gap: while leadership teams express strong commitment to responsible AI, their actual governance capabilities remain underdeveloped. This misalignment threatens to undermine both the strategic benefits and regulatory compliance of AI adoption as the EU’s landmark AI Act approaches full enforcement in 2026.
High Principles, Weak Controls
On paper, Nordic firms lead in ethical intent. A striking 87% report having established clear principles for AI use. But when it comes to implementation, the numbers falter: only 64% have deployed real-time monitoring systems for AI activities, and just 69% maintain active steering committees tasked with ensuring compliance with those principles.
“There’s a significant gap between perceived governance and actual maturity,” warns Charlotta Kvarnström, Partner and Technology, Media & Telecom Advisor at EY. “Many leaders believe they’re in control—but their oversight mechanisms simply aren’t keeping pace with deployment.”

Critical Risk Areas Remain Unaddressed
EY’s Responsible AI Pulse Survey (Summer 2025) underscores the issue further: on average, Nordic companies apply robust controls in only three out of nine essential AI responsibility domains. The nine domains include data privacy, bias mitigation, transparency, human oversight, and incident response, areas that will soon become mandatory under the EU AI Act.
Kvarnström cautions that permissiveness around AI agent development is not inherently risky—if it’s backed by holistic, proactive governance. “Without that foundation, you’re inviting operational, reputational, and regulatory exposure,” she says.
The Rise of Autonomous AI Agents
Unlike conventional large language models that respond to prompts, AI agents operate autonomously within defined parameters. In practice, this means they can manage end-to-end workflows—such as processing invoices by extracting data, cross-referencing contracts, validating amounts, and flagging discrepancies—requiring human intervention only at final approval stages.
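The invoice workflow described above can be sketched as a simple agent-style pipeline with a human approval gate. This is a minimal illustration only: the vendors, amounts, tolerance, and helper names are hypothetical assumptions, not drawn from any specific product or from EY's survey.

```python
from dataclasses import dataclass

@dataclass
class Invoice:
    vendor: str
    amount: float

# Hypothetical contract registry: vendor -> agreed contract amount.
CONTRACTS = {"Acme AB": 1200.0, "Nordkraft AS": 850.0}

def process_invoice(invoice: Invoice, tolerance: float = 0.02) -> dict:
    """Agent-style pipeline: cross-reference, validate, flag.

    Returns a result dict; anything flagged is routed to a human
    reviewer, and even clean invoices wait for final approval.
    """
    contracted = CONTRACTS.get(invoice.vendor)
    if contracted is None:
        return {"status": "flagged", "reason": "unknown vendor"}
    # Validate the invoiced amount against the contract, within tolerance.
    if abs(invoice.amount - contracted) > tolerance * contracted:
        return {"status": "flagged", "reason": "amount mismatch"}
    # Clean invoices still require human sign-off at the final stage.
    return {"status": "pending_approval", "reason": None}
```

In a real deployment, the extraction step would typically involve an OCR or language-model service and the contract lookup a database query; the human approval gate at the end mirrors the oversight pattern the article describes.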
“This isn’t speculative anymore,” Kvarnström notes. “AI agents are already embedded in software development, customer service, and finance functions. New agents are being rolled out weekly across the region.”
Productivity Gains Outpace Financial Returns
The upside is real: one-third of Nordic firms report substantial improvements in productivity and operational efficiency, while 41% cite enhanced innovation capacity. Yet tangible business outcomes lag. Only 20% have seen measurable revenue growth from AI, and a mere 13% report realized cost savings.
The disconnect stems from a failure to reengineer core processes around AI—not merely layer it atop legacy workflows. “The theoretical savings exist,” Kvarnström explains, “but capturing them demands organizational redesign, updated roles, and cultural shifts.”
The key, she emphasizes, is integration, not parallel operation: AI agents must be woven into the fabric of business processes, not treated as auxiliary tools.
2026: The Governance Reckoning
The EU AI Act, set to take full effect in 2026, will impose stringent requirements for risk classification, documentation, transparency, and human oversight—particularly for high-risk AI systems. For Nordic companies, this represents a pivotal test of governance readiness.
“We’re already seeing the chasm widen between those who’ve moved from strategy to execution and those still stuck in aspiration,” says Ylva Bergström, Partner in Risk Transformation at EY. “When the Act comes into force, the market will quickly distinguish leaders from laggards.”
What the Future Holds
To close the governance gap, EY recommends that Nordic firms:
- Establish cross-functional AI governance councils with authority over deployment, monitoring, and escalation.
- Implement real-time auditing and logging for all AI agent activities.
- Conduct regular risk assessments across all nine responsibility domains.
- Align AI initiatives with specific business outcomes, not just technological novelty.
- Prepare for EU AI Act compliance now, especially in high-risk sectors like finance, healthcare, and critical infrastructure.
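The second recommendation, real-time auditing and logging of agent activity, can be approximated even in a minimal setup by wrapping every agent action so that each call appends an audit record. The field names below are illustrative assumptions, not a compliance standard; a production system would write to an append-only store rather than an in-memory list.

```python
import time
from typing import Any, Callable

def audited(action: Callable[..., Any], log: list) -> Callable[..., Any]:
    """Wrap an agent action so every invocation is logged.

    Each record captures what ran, with which inputs, when, and
    whether it succeeded -- logged even if the action raises.
    """
    def wrapper(*args: Any, **kwargs: Any) -> Any:
        record = {
            "action": action.__name__,
            "args": [repr(a) for a in args],
            "timestamp": time.time(),
        }
        try:
            result = action(*args, **kwargs)
            record["outcome"] = "ok"
            return result
        except Exception as exc:
            record["outcome"] = f"error: {exc}"
            raise
        finally:
            log.append(record)  # appended on both success and failure
    return wrapper
```

A steering committee reviewing such a log gets exactly the kind of real-time visibility the survey found missing in a third of Nordic firms.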
As AI agents become ubiquitous, the question is no longer whether to adopt them—but how responsibly to do so. For Nordic businesses, the window to build mature, accountable AI systems is narrowing. The 2026 deadline isn’t just regulatory—it’s strategic.
— Reporting by the Nordic Business Journal, based on EY’s Responsible AI Pulse Survey 2025.
