Technology & Science

Musk–Altman Feud Escalates as AI and Autopilot Death Counts Go Public

On 20–21 January 2026, Elon Musk warned “Don’t let your loved ones use ChatGPT,” citing nine alleged deaths, and Sam Altman shot back that Tesla Autopilot is linked to 50+ fatalities, sharpening their legal and public battle over AI safety.

Focusing Facts

  1. Musk’s 20 Jan 2026 X post referenced nine deaths (five suicides) allegedly linked to ChatGPT use.
  2. Altman’s response cited National Highway Traffic Safety Administration data tying Tesla Autopilot to more than 50 deaths and nearly 1,000 crashes as of a 2024 report.
  3. A federal judge last week allowed Musk’s lawsuit demanding US$79–134 billion from OpenAI and Microsoft to proceed to trial in 2026.

Context

Public tech spats rarely shift history, but this one may. In 1965 Ralph Nader’s “Unsafe at Any Speed” and the 1970–78 Ford Pinto fire scandal forced federal auto-safety law; likewise, the 1980s FAA grounding of early fly-by-wire jets rewired aviation oversight. Musk and Altman’s duel thrusts a similar liability question onto two emergent systems—autonomous driving and conversational AI—both operating at consumer scale without mature regulation. Their competing death tallies expose a vacuum: no consensus framework for attributing harm to algorithmic decisions. Over the next century, precedent set in the coming lawsuits and regulatory responses—whether courts treat code more like a product, a service, or speech—will shape accountability across robotics, biotech and other AI-driven domains. The billionaire drama is noise; the signal is the slow crystallisation of safety norms that could echo automobile standards and Federal Aviation Regulations for decades to come.

Perspectives

Outlets amplifying Musk’s warning about ChatGPT harm

e.g., MoneyControl. They stress Musk’s claim that ChatGPT has been linked to multiple suicides and caution readers that the chatbot may be dangerously unregulated. By foregrounding a rival CEO’s allegations while only briefly noting the contested evidence, the coverage risks sensationalising edge-case tragedies and indirectly promoting Musk’s own AI product, Grok.

Outlets spotlighting Altman’s counter-attack on Tesla and Grok

e.g., Hindustan Times, MoneyControl second pieceThey frame the story around Altman’s rebuttal that Tesla’s Autopilot has caused far more deaths than ChatGPT and paint Musk’s criticism as hypocritical. Focusing on Musk’s safety record diverts attention from the wrongful-death lawsuits against OpenAI and lets Altman appear as the reasonable actor without interrogating his company’s own liabilities.

Tech-industry analysis outlets emphasising the broader AI-safety tightrope

e.g., Forbes, TechRadar. They depict the feud as a window into the genuine engineering challenge of keeping both autonomous cars and large language models safe while scaling to millions of users. The ‘engineering dilemma’ framing can obscure corporate self-interest by implying that harms are inevitable trade-offs rather than partly the result of aggressive product launches and profit motives.
