For the first time, rigorous empirical evidence confirms what many have long suspected: the very architecture of social media feeds — specifically, the algorithms that curate what we see — can significantly deepen political polarisation in society. Yet this influence is not inevitable. A groundbreaking study published in the journal Science demonstrates that small changes to how content is ordered can meaningfully shift users’ attitudes, offering a roadmap for more constructive digital discourse.
Conducted by researchers at Stanford University during the volatile lead-up to the 2024 U.S. presidential election, the study involved 1,256 active users of X (formerly Twitter). Without the platform’s knowledge or consent — a reflection of growing barriers to independent research on major social media platforms — the team developed a custom browser extension to manipulate the algorithmic ranking of posts in participants’ feeds in real time.
Over the course of one week, users were randomly assigned to one of three groups: in the first, posts containing hostile rhetoric toward political opponents or anti-democratic sentiments were promoted higher in the feed; in the second, such content was demoted; and a control group continued using the standard algorithm unchanged.
The results were striking. Those exposed to more divisive content reported significantly increased animosity toward political outgroups, along with higher levels of anger and sadness. Conversely, participants whose feeds minimised hostile posts showed warmer feelings toward their political opponents — and reported improved emotional well-being overall.
Perhaps most alarming: the magnitude of attitudinal shift observed after just seven days was comparable to the level of polarisation typically seen over three years in national U.S. surveys.
“This isn’t a law of nature,” emphasises Carl Heath, senior digital researcher at RISE and a leading voice on ethical technology in the Nordic region. “Social media doesn’t have to divide us. These effects stem from design choices — and design choices can be changed.”

Heath warns that the implications extend far beyond the United States. “A significant portion of Sweden’s political discourse now unfolds on platforms like X,” he notes. “With the 2026 general election approaching, we must recognise that algorithmic amplification of outrage and division could subtly but powerfully shape voter sentiment, trust in institutions, and even democratic stability.”
Indeed, while X, Meta, and other tech giants continue to optimise their algorithms for engagement — often prioritising emotionally charged or conflict-driven content — alternative platforms are emerging with fundamentally different philosophies. Services like Mastodon, Bluesky, Diaspora, and even BeReal (in its limited social capacity) prioritise chronological feeds, user control, transparency, and in some cases, open-source governance.
“These alternatives may lack the scale of mainstream platforms,” Heath acknowledges, “but they prove that humane, democracy-affirming design is not only possible — it already exists.”
The Stanford study arrives at a critical juncture. As the European Union finalises enforcement of the Digital Services Act (DSA), and Sweden intensifies its focus on digital resilience ahead of the 2026 elections, regulators, civil society, and citizens alike must ask: Should platforms that systematically amplify division enjoy unfettered access to our public square?
The research underscores a powerful truth: algorithms are not neutral. But they are also not destiny. With informed public pressure, regulatory oversight, and support for ethical alternatives, societies can reclaim social media as a space for dialogue — not division.
The Nordic Business Journal reached out to X for comment on the study and its methodology. At the time of publication, no response had been received.
