A European legal scholar warns the practice may breach fundamental rights—yet users must opt out one chat at a time.
The 30-Second Summary
If you have ever opened Meta AI inside a Messenger thread—even once—the company can legally vacuum up the entire back-catalogue of that conversation to refine its large language model. There is no batch “forget everything I ever wrote” switch. Instead, you must open each individual chat, scroll to settings, and click “Turn off Meta AI.” Miss even one thread, and everything you said before the AI was invoked remains fair game.
How We Got Here
Meta’s AI assistant debuted across Facebook, Instagram and WhatsApp in April 2024. The pitch was simple: a friendly bot that can summarise group chats, re-write messages, or generate memes on demand. What the launch announcement did not highlight was a single line buried in the privacy policy: “When Meta AI is used, we may use the associated conversation data to improve our products.” Translation: if any participant—not necessarily you—tags @MetaAI, the whole thread is ingested.
A Goldmine Disguised as a Feature
Jan Trzaskowski, professor of marketing law at Aalborg University, frames the move as the fourth pillar of Meta’s business model:
- Scale: 3 billion monthly users.
- Stickiness: AI summaries keep people inside Messenger longer.
- Targeting: richer profiles sharpen ad precision.
- Training data: real human dialogue—arguably the scarcest resource in AI—handed over by default.
“Messenger and WhatsApp are the largest corpora of private human conversation ever assembled,” Trzaskowski tells us. “Meta is monetising intimacy itself.”

The Legal Vacuum
European data-protection law says consent must be “freely given, specific, informed and unambiguous.” Trzaskowski argues that pre-ticked boxes buried in sub-menus fail that test. Yet no court has ruled on whether retroactive scraping of a private thread—sometimes years old—violates GDPR. Until a case reaches the Court of Justice of the EU, legality remains an open question.
Inside the Opt-Out Labyrinth
We tested the process on three devices. To disable training:
- Open each Messenger thread.
- Tap the chat name → Privacy & Safety → “Use Meta AI” → toggle off.
- Repeat for every group DM, marketplace chat, and archived thread since 2011.
An average user with 400 active chats would need roughly two hours. There is no “select all,” and WhatsApp remains unaffected—for now.
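The two-hour figure is back-of-envelope arithmetic. A minimal sketch, assuming roughly 18 seconds per thread to open the chat, navigate to the setting, and toggle it off (the per-thread timing is our assumption, not a figure from Meta):

```python
# Back-of-envelope estimate of the manual opt-out effort.
# Assumption (ours, not Meta's): ~18 seconds per thread to open it,
# find Privacy & Safety, and switch "Use Meta AI" off.

SECONDS_PER_THREAD = 18  # assumed average handling time per chat
active_chats = 400       # the "average user" figure from the text

total_seconds = active_chats * SECONDS_PER_THREAD
hours = total_seconds / 3600
print(f"{active_chats} chats x {SECONDS_PER_THREAD}s = {hours:.1f} hours")
# 400 threads * 18 s = 7,200 s = 2.0 hours
```

Even shaving the per-thread time to 10 seconds still leaves over an hour of tapping—and the total grows with every archived or marketplace thread counted in.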
What Meta Says
In a statement, Meta insists: “Meta AI is optional. We do not train on private messages unless someone in the chat chooses to share with the AI.” The company declined to clarify why opt-out is not global or retroactive.
What Could Change
- A GDPR complaint filed by NOYB in Vienna is seeking emergency orders to force a single, account-level opt-out.
- The EU’s forthcoming AI Act may categorise private chat histories as “high-risk” data, triggering stricter consent rules.
- Regulators in Ireland and Denmark have opened preliminary inquiries.
How to Protect Yourself Today
- Manually disable Meta AI in every Messenger thread (instructions above).
- For sensitive conversations, migrate to Signal or another end-to-end encrypted platform that does not train AI models on message content.
- File a data-access request under GDPR (Article 15) to see exactly what Meta has already retained.
The Bigger Picture
“Privacy used to be about who can see your data,” Trzaskowski reflects. “Now it’s about who can feed it to an algorithm that learns to mimic you.” Until the courts—or legislators—draw a line, each emoji, break-up text and late-night confession you’ve ever typed remains a potential training sentence in Meta’s next model.
