OpenAI Pushes AI Shields, Buys Talk Show Amid Leadership Scrutiny

Cover image via slate.com

OpenAI faces reported internal tensions and leadership drama while acquiring a Silicon Valley talk show and lobbying for legislation that would shield AI firms from liability for catastrophic harms such as mass casualties or billion-dollar disasters. Its chief scientist claims its models are approaching the proficiency of a human research intern. Together, these moves highlight OpenAI's expansion amid mounting regulatory scrutiny.

PoliticalOS

Friday, April 10, 2026 · Tech

4 min read

OpenAI is simultaneously claiming major strides toward autonomous research AI, buying influence in tech media, and lobbying for liability limits on catastrophic harms as it eyes an IPO. These actions occur against a backdrop of documented internal turbulence and a New Yorker profile that raises fundamental questions about Sam Altman's trustworthiness with world-altering technology. The single most important reality is that capability advances are outpacing both internal governance and external regulation, leaving critical questions about accountability unanswered.

What outlets missed

Most outlets isolated one thread—internal drama, the talk-show purchase, capability claims or the liability bill—without showing how they form a coherent expansion strategy under regulatory pressure. Few mentioned that SB 3444 requires published safety reports and non-reckless behavior for protection, that it sunsets upon federal alignment, or that OpenAI frames the bill as risk-reducing rather than purely defensive. Coverage of Pachocki's statements frequently misattributed sources and skipped his explicit caveats that full autonomy for alignment work is not expected this year. TBPN's projected $30 million revenue, sponsor list, and OpenAI's commitment to editorial independence received minimal attention, making the acquisition appear more whimsical than calculated. The New Yorker profile's basis in extensive documentation of alleged board concerns was often reduced to personality clashes or ignored in favor of sensational framing, while OpenAI's testimony emphasizing U.S. innovation leadership and harmonization went largely unquoted.


OpenAI Demands Legal Shield as Its AI Approaches Autonomous Research Power

OpenAI is backing legislation in Illinois that would largely protect the company from liability if its artificial intelligence systems contribute to mass casualties or billion-dollar disasters. At the same time the company is touting rapid advances that bring its models closer to functioning like independent researchers, its celebrity CEO Sam Altman continues to face questions about stability and control at the organization he has repeatedly reshaped in his own image. The moves come as OpenAI prepares for an initial public offering and spends millions acquiring a Silicon Valley talk show in what critics see as an expensive effort to shape public perception.

The legislation, Illinois Senate Bill 3444, would shield developers of so-called frontier AI models from lawsuits over “critical harms” including the deaths or serious injuries of one hundred or more people, damages exceeding one billion dollars, or the creation of chemical, biological, radiological or nuclear weapons. The protection applies as long as the company did not intentionally or recklessly cause the harm and has published safety, security and transparency reports. A frontier model is defined as any system trained with more than one hundred million dollars in computational costs, a threshold that easily captures OpenAI’s latest offerings as well as those from Google, Anthropic, Meta and Elon Musk’s xAI.

OpenAI spokesperson Jamie Radice said the company supports the approach because it focuses on “reducing the risk of serious harm from the most advanced AI systems while still allowing this technology to get into the hands of the people and businesses of Illinois.” The statement also nodded to the desire for uniform national standards rather than a “patchwork” of state rules. Yet several AI policy experts told Wired the bill is more sweeping than previous measures OpenAI has endorsed, effectively creating a liability shield for the very companies racing to build ever-more-powerful systems.

That race is accelerating, according to OpenAI’s own chief scientist. Jakub Pachocki told the “Unsupervised Learning” podcast this week that recent breakthroughs in coding, mathematics and physics mean the company is “definitely” on track to develop systems capable of working like research interns. Pachocki described the key metric as how long a model can operate autonomously on complex, multi-step technical tasks. OpenAI’s public goals include an “AI research intern” level of performance by the end of 2026 and a fully autonomous researcher by 2028. The company’s top executives increasingly speak of these systems taking on independent work with minimal human supervision.

Such claims arrive alongside persistent questions about whether the people steering OpenAI are equipped to manage technology of this magnitude. A recent New Yorker profile detailed the chaotic chapter in which Altman was briefly ousted as CEO only to be reinstated days later by a board that was itself largely replaced. Since returning, Altman has consolidated power and steered the organization away from its original nonprofit roots toward a more conventional profit-driven model. On The Vergecast this week, hosts David Pierce and Nilay Patel described Altman as “an exceedingly normal businessman” while debating whether AI’s potential consequences require something more than normal business leadership. The answer, they suggested, depends on how seriously one takes the warnings about transformative and potentially destabilizing technology.

Against this backdrop, OpenAI has paid millions of dollars to purchase “TBPN,” a streaming talk show focused on Silicon Valley culture and technology. Slate’s “What Next TBD” podcast examined the deal under the question of whether it represents sophisticated public relations ahead of the IPO or simply the latest example of free-spending by wealthy tech executives who answer to no one. The acquisition gives OpenAI direct influence over a platform that reaches exactly the audience most likely to discuss its products, its leadership and its growing political clout.

Critics from across the spectrum have noted the pattern. While OpenAI promotes utopian visions of scientific discovery and economic abundance, it simultaneously lobbies for legal protections that would leave taxpayers, victims and smaller competitors to absorb the costs if things go wrong. The Crooks and Liars website summarized the Illinois bill bluntly: the company appears to want exemption from liability for outcomes as catastrophic as setting off a nuclear incident, provided it can claim it did not mean to do so and had filed the required paperwork.

Illinois lawmakers are now considering whether to hand technology companies this preemptive immunity at the precise moment those companies boast about building machines that could soon operate with researcher-level independence. The bill’s passage would set a precedent other states might follow, further insulating an industry already criticized for moving faster than regulators or the public can comprehend.

For ordinary citizens the stakes are straightforward. The same corporations promising AI will solve humanity’s hardest problems are simultaneously arguing in statehouses that they should not be held responsible if their creations instead contribute to humanity’s hardest disasters. As OpenAI’s own executives describe systems capable of long stretches of unsupervised technical work, the company’s parallel push for legal absolution raises an uncomfortable question about how much confidence its leaders actually have in the technology they are rushing to market. The talk-show purchase, the boardroom drama, the legislative maneuvering and the accelerating capabilities all point to an organization determined to maintain control while limiting accountability. Whether that combination serves the public interest remains an open and increasingly urgent debate.
