Doomsday Clock Ticks Closer as AI Risks Span Nuclear to Robotic Failures

Cover image from motherjones.com, whose coverage was analyzed for this article
Experts warn AI could push the world toward nuclear disaster or produce 'Terminator'-style failures through glitches and low-quality automated content. Viral robot mishaps hint at the dangers. Coverage across the political spectrum urges caution on rapid adoption.
PoliticalOS
Wednesday, April 15, 2026 — Tech
AI has formally entered the highest-level existential risk assessments alongside nuclear weapons, while real-world robot deployments and automated content systems continue to expose gaps between promised capability and actual reliability. These are not abstract future problems; glitches, misclassifications and escalation pathways already appear in restaurants, bomb squads and social platforms. The single most important reality is that meaningful safety gains will require coordinated policy and engineering effort before deployment outruns control.
What outlets missed
Most coverage isolated one risk strand—nuclear symbolism, robot videos or platform moderation—while downplaying how the Bulletin explicitly links them through calls for simultaneous nuclear arms control renewal and binding international AI guidelines to prevent escalation. Outlets largely omitted that many early-stage robot demonstrations are designed to surface exactly these edge-case failures so engineers can iterate, a normal part of development that does not automatically signal imminent Terminator scenarios. The quantitative success of X’s prior 1.7-million-account spam purge in late 2025 and measurable drops in reply spam received almost no attention, leaving readers without a way to weigh collateral damage against platform improvements. Finally, few pieces noted the clock’s recent history of hovering near 90 seconds since 2023, framing the latest move as part of a sustained trend rather than an unprecedented leap.
AI Unreliability Raises Fresh Nuclear Fears as Doomsday Clock Sits at 85 Seconds to Midnight
The Bulletin of the Atomic Scientists moved the Doomsday Clock to 85 seconds before midnight earlier this year, the closest it has stood to symbolic global catastrophe since its creation in 1947. The decision reflected converging dangers from nuclear weapons, accelerating climate change, unchecked artificial intelligence, and the spread of disinformation. University of Chicago physics professor Daniel Holz, who chairs the Bulletin’s Science and Security Board, helped make the call. In a podcast interview with Mother Jones, Holz described the clock not as a counsel of despair but as an instrument of hope. “The whole point of this clock is to, yes, to alarm people, to inform people, but also to demonstrate we can turn back the hands of the clock,” he said. “And we’ve done it in the past, and we can hope to do it in the future. And we must.”
That measured optimism is increasingly tested by real-world evidence that artificial intelligence systems remain fundamentally brittle. In recent weeks, videos of malfunctioning humanoid robots have circulated widely online, often framed as comic relief. Yet AI safety specialists warn the incidents are early symptoms of a technology being rushed into the physical world before its limitations are understood. At a Haidilao hotpot restaurant in San Jose, California, a dancing robot abruptly veered off script, smashing tableware, scattering chopsticks, and sending plates crashing to the floor. Staff eventually dragged the flailing machine outside while customers watched in amusement. Similar failures have surfaced in China, where one advanced Unitree robot kicked its human handler in the groin during a demonstration and another struck a person with a sharp slap.
Roman Yampolskiy, a tenured computer scientist at the University of Louisville, told the New York Post that laughter at these clips misses the deeper warning. “I think these incidents are often treated as funny only because the immediate harm was limited and the context was theatrical,” he said. “People laugh at low-stakes failure. But from a safety perspective, they should also be taken seriously, because they reveal something important: systems that appear polished and entertaining can still behave unpredictably in the physical world.” Yampolskiy views the glitches as potential harbingers of far larger breakdowns once autonomous machines are deployed at scale in factories, warehouses, hospitals, and, eventually, military settings.
The pattern is especially troubling when placed beside the nuclear dimension that helped push the Doomsday Clock forward. AI is already being integrated into early-warning systems, intelligence analysis, and command-and-control infrastructure in several nuclear-armed states. The same unpredictability visible in a dancing robot could, in theory, produce false positives or cascading errors in systems that allow no margin for error. Holz and his colleagues on the Science and Security Board cited precisely this convergence of emerging technologies with existing nuclear risks as a central reason the clock now stands closer to catastrophe than during the height of the Cold War.
Even in purely digital environments, automated systems are demonstrating the same lack of reliability. Since early April, the platform X has conducted an aggressive purge of accounts it classifies as bots, suspending them at a reported rate of 208 per minute. The company, under the direction of product chief Nikita Bier, framed the campaign as essential housekeeping to restore platform integrity. Yet the sweep has also erased thousands of accounts belonging to real people who used pseudonymous “alt” profiles to follow, bookmark, or privately archive content, much of it adult material.
Justin Diego, a social media influencer with more than 600,000 followers on other platforms, lost a secret X account he had maintained since 2024 solely to track OnlyFans creators. The account never posted and broke no visible rules. Actor Tom Zohar posted that years of careful curation had vanished overnight. “Not a single rule was violated mind you, years of curation and accumulation gone in a flash for no reason,” he wrote. Other users reported similar losses, describing the purge as an overcorrection that punished ordinary people for behavior the platform had tolerated for years. X did not respond to requests for clarification on how many genuine users were affected or what technical methods were used to distinguish humans from bots.
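The collateral damage described above is a predictable failure mode of threshold-based classification. X has not disclosed its methods, but the following toy sketch (entirely hypothetical: the `bot_score` heuristics, threshold, and account fields are illustrative assumptions, not X's actual system) shows why a dormant human "alt" account can be statistically indistinguishable from a bot under crude activity signals.

```python
# Hypothetical rule-based bot scorer -- NOT X's real classifier.
# It illustrates how low-activity signals alone cannot separate
# spam bots from real people who only lurk and bookmark.

def bot_score(account: dict) -> float:
    """Return a score in [0, 1]; higher means more bot-like under these crude rules."""
    score = 0.0
    if account["posts"] == 0:          # never posts: looks automated OR just dormant
        score += 0.4
    if account["followers"] < 5:       # almost no audience
        score += 0.3
    if not account["profile_photo"]:   # default avatar
        score += 0.3
    return score

THRESHOLD = 0.6  # suspend anything scoring above this

spam_bot   = {"posts": 0, "followers": 0, "profile_photo": False}
lurker_alt = {"posts": 0, "followers": 2, "profile_photo": False}  # a real person

for name, acct in [("spam_bot", spam_bot), ("lurker_alt", lurker_alt)]:
    verdict = "suspend" if bot_score(acct) > THRESHOLD else "keep"
    print(name, verdict)
```

Both accounts score identically and both get suspended: the signals that flag the bot are exactly the signals a private, never-posting human account exhibits. Reducing such false positives requires richer features (login patterns, device fingerprints, behavioral timing), which is why purely activity-based purges tend to overcorrect.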
The episode illustrates a broader pattern. Whether in restaurant dining rooms or content-moderation algorithms, contemporary AI frequently fails at the boundary between expected and unexpected inputs. When those failures occur in consumer entertainment they produce viral slapstick. When they occur in nuclear decision-making or large-scale infrastructure they could prove catastrophic. Corporate incentives continue to reward speed over caution. Tech executives race to embed ever-more-powerful models into daily life and sensitive government contracts, while regulatory frameworks lag years behind.
Holz insists the Doomsday Clock’s message is ultimately optimistic because past generations have reversed seemingly inexorable trends toward disaster. Arms-control treaties, diplomatic engagement, and public pressure pulled the clock back during previous eras of nuclear tension. The question now is whether similar collective action can be mobilized before AI’s demonstrated unpredictability is fused with the world’s most destructive weapons. Recent robot mishaps and clumsy platform purges are not apocalyptic events in themselves. They are, however, concrete data points showing that the technology many billionaires tout as humanity’s salvation still cannot reliably tell the difference between a dance floor and a dining table, or between a spam bot and a private citizen exercising personal discretion.
As the hands of the Doomsday Clock remain closer to midnight than at any point in its 79-year history, the gap between industry hype and observable performance grows harder to ignore. The public is being asked to accept rapid deployment of systems whose failures, however amusing in isolation, point toward risks that are anything but funny. Turning back the clock, as Holz urges, will require far more than hopeful rhetoric. It will demand serious restraints on how, where, and how quickly artificial intelligence is allowed to proliferate, especially where nuclear command, critical infrastructure, and public safety intersect. The alternative is to keep moving the hands closer to midnight while pretending the glitches are just entertainment.