Grok AI Misidentified White House Fainter as Novo Nordisk Executive

❌ False

What Was the Claim?

During a White House event in May 2025, a person fainted and required medical attention. Grok, the AI chatbot developed by Elon Musk's company xAI, was prompted to identify the individual from video footage and claimed the person was a Novo Nordisk pharmaceutical executive. The identification was entirely false, but it spread rapidly across social media, triggering conspiracy theories linking the fainting incident to the pharmaceutical industry.

How the Misidentification Spread

Screenshots of Grok's response were shared on X (formerly Twitter) and other platforms by users who accepted the AI's identification without verification. Some accounts speculated wildly about pharmaceutical conspiracies, suggesting the fainting was evidence of corporate malfeasance or deliberate poisoning. The false identification gained considerable traction before fact-checkers could intervene with accurate information about the person's actual identity.

Why Grok Made the Error

Grok, like other image recognition systems, is prone to misidentification errors. The system works by analyzing visual features and matching them against patterns learned from training data. When working with low-resolution video footage shot at a distance, facial recognition systems frequently produce false positives. In this case, the AI appeared to have confused the individual with someone else represented in its training data, leading to the confident but inaccurate pharmaceutical industry connection.
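The failure mode described above can be illustrated with a minimal sketch. The gallery names, vectors, and threshold values below are entirely hypothetical and are not drawn from any real system: the point is only that a nearest-neighbor matcher always returns *some* best match, so without a conservative similarity threshold a noisy, low-resolution probe image gets confidently assigned to the wrong identity.

```python
import math

# Hypothetical gallery of "known" face embeddings.
# Names and vectors are illustrative only, not real data.
GALLERY = {
    "pharma_executive": [0.9, 0.1, 0.3],
    "white_house_staffer": [0.2, 0.8, 0.4],
}

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def identify(probe, threshold):
    """Return (name, score) of the best gallery match,
    or (None, score) when similarity is below the threshold."""
    name, score = max(
        ((n, cosine(probe, v)) for n, v in GALLERY.items()),
        key=lambda t: t[1],
    )
    return (name, score) if score >= threshold else (None, score)

# A noisy probe that happens to land near the wrong gallery entry.
noisy_probe = [0.85, 0.2, 0.35]

# With no threshold, the system "confidently" names the wrong person.
print(identify(noisy_probe, threshold=0.0))

# With a strict threshold, it abstains instead of guessing.
print(identify(noisy_probe, threshold=0.995))
```

The second call shows the safer design: an identification system should abstain below a calibrated confidence level rather than always emit its nearest match, which is precisely the behavior a chatbot answering "who is this person?" lacks.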

The Actual Person's Identity

The White House later confirmed the identity of the fainting attendee. The person was a White House staffer, not a pharmaceutical company executive, and had experienced a sudden drop in blood pressure exacerbated by the warm event venue. There was nothing suspicious about the incident, and no pharmaceutical industry connection existed.

Broader Implications for AI Verification

This incident highlighted the dangers of relying on AI systems for critical identity verification tasks. PolitiFact and PublicProof both investigated the claim and confirmed Grok's identification was wrong. The incident demonstrates how AI errors can rapidly generate false conspiracy narratives when amplified by social media algorithms.