When AI Gives Deadly Diet Advice: A Cautionary Tale from the United States

A 60-year-old American man spent three weeks in hospital after following a dietary suggestion from ChatGPT that replaced table salt with a toxic chemical. The case is now a pointed example of how generative AI can cause real-world harm when used without medical oversight.

From Curiosity to Crisis
Concerned about the health effects of sodium chloride, the man asked ChatGPT for an alternative. The AI proposed sodium bromide—a compound used as a sedative in the early 20th century but abandoned due to its toxicity. He bought it online and used it instead of salt for several months.

The Symptoms
Over time he developed unrelenting thirst, paranoia, hallucinations, acne, and small skin growths known as cherry angiomas. His condition escalated into a psychotic episode. Doctors diagnosed bromism, a rare poisoning from chronic bromide exposure. Treatment with fluids, electrolytes, and antipsychotics eventually brought him back to health.

How the Mistake Happened

  • Faulty AI output: ChatGPT recommended a toxic chemical without warning about health risks or asking follow-up questions.
  • No professional filter: The man acted on the advice without consulting a doctor.
  • AI “hallucination”: The model treated outdated or out-of-context information as current fact, presenting a dangerous substitution as a plausible answer.

The Bigger Lessons for AI in Healthcare

  1. AI is not a doctor.
    Chatbots can summarise information but cannot replace the judgement, context, and responsibility of a trained clinician.
  2. Safety gaps are real.
    AI answers may skip toxicity warnings, miss contraindications, or fail to clarify user intent (see the sketch after this list).
  3. Hallucinations happen.
    Generative models can produce convincing but false content when faced with ambiguous or incomplete data.
  4. Regulation matters.
    Clear rules are needed on how AI can be used in health, and who is accountable when it causes harm.
  5. Human verification is essential.
    Any medical suggestion—especially from AI—should be checked with a qualified health professional.
  6. Providers bear responsibility too.
    OpenAI and others warn against using their tools for diagnosis, but disclaimers alone don’t prevent misuse.
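
To make the “safety gaps” point concrete, here is a minimal sketch of the kind of post-generation check a chatbot pipeline could run before showing an answer. The substance list, keywords, and function name are illustrative assumptions, not any vendor’s actual safeguard.

```python
# Hypothetical post-generation safety filter (illustrative only; not a real
# vendor API). Flags answers that suggest known hazardous substances when the
# question is about diet or health.

HAZARDOUS_SUBSTANCES = {
    "sodium bromide": "chronic ingestion causes bromism",
    "potassium bromide": "chronic ingestion causes bromism",
}

HEALTH_KEYWORDS = ("diet", "salt", "supplement", "dose", "medication")


def review_health_answer(question: str, answer: str) -> str:
    """Block answers that pair a health question with a known hazard."""
    in_health_context = any(k in question.lower() for k in HEALTH_KEYWORDS)
    if in_health_context:
        for substance, risk in HAZARDOUS_SUBSTANCES.items():
            if substance in answer.lower():
                return (f"Blocked: answer mentions {substance} ({risk}). "
                        "Please consult a qualified health professional.")
    return answer


if __name__ == "__main__":
    q = "What can I use instead of table salt in my diet?"
    a = "You could try sodium bromide as a chloride-free alternative."
    print(review_health_answer(q, a))
```

A real safeguard would need far more than keyword matching, but even this trivial gate would have interrupted the exchange described above.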

Implications for Nordic Healthcare Regulation

Nordic countries have some of the most advanced digital health systems in the world, with high trust in both public institutions and emerging technology. That trust is an asset—but also a risk if AI tools are adopted without clear guardrails.

  • Preventive regulation: Require certification before any AI system can provide patient-facing health information.
  • Professional gatekeeping: Allow AI tools into healthcare only through licensed providers, ensuring a human review step before advice reaches patients (see the sketch below).
  • Data quality and transparency: Mandate that AI vendors disclose data sources, update cycles, and known limitations.
  • Liability clarity: Define in law whether AI vendors, healthcare providers, or both bear responsibility if harm occurs.
  • Public education: Run national campaigns to remind citizens AI health advice is informational, not prescriptive.
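
As a rough illustration of the gatekeeping idea above, the sketch below queues AI drafts until a licensed clinician signs off; only approved text ever reaches the patient. The class and field names are hypothetical, not drawn from any real system.

```python
# Hypothetical human-in-the-loop review gate (illustrative only). AI-generated
# advice is held in a queue and released only after clinician approval.

from dataclasses import dataclass


@dataclass
class AdviceItem:
    patient_query: str
    ai_draft: str
    approved: bool = False
    reviewer: str | None = None  # requires Python 3.10+


class ReviewQueue:
    """Holds AI drafts until a licensed clinician approves them."""

    def __init__(self) -> None:
        self._pending: list[AdviceItem] = []

    def submit(self, item: AdviceItem) -> None:
        self._pending.append(item)

    def approve(self, item: AdviceItem, reviewer: str) -> str:
        item.approved = True
        item.reviewer = reviewer
        self._pending.remove(item)
        return item.ai_draft  # released to the patient only at this point


queue = ReviewQueue()
draft = AdviceItem(
    patient_query="Alternatives to table salt?",
    ai_draft="Consider herbs or a potassium-based substitute; check with "
             "your doctor first if you have kidney disease.",
)
queue.submit(draft)
print(queue.approve(draft, reviewer="Dr. Example"))
```

A production system would add authentication, audit logging, and rejection paths, but the core design choice is the same: the AI output is a draft, not a deliverable.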

Nordic takeaway: The US case shows what happens when powerful AI meets an unregulated environment. Nordic infrastructure and governance can prevent similar harm—but only if rules and safeguards are in place before consumer AI use bypasses professional channels.

Bottom line:
Generative AI can expand access to knowledge, but when it strays into health advice without professional review, the consequences can be severe. This incident underscores the need for strong safeguards, clear regulation, and a culture of verification—especially as AI becomes more embedded in daily life and business.
