OpenAI Pushes AI Shields, Buys Talk Show Amid Leadership Scrutiny

OpenAI faces reported internal tensions and leadership drama while acquiring a Silicon Valley talk show and lobbying for legislation that would shield AI firms from liability for catastrophic harms such as mass casualties or large-scale disasters. Its chief scientist says the company's models are nearing the proficiency of a human research intern. Together, the moves highlight OpenAI's expansion amid mounting regulatory scrutiny.
PoliticalOS
Friday, April 10, 2026 — Tech
OpenAI is simultaneously claiming major strides toward autonomous research AI, buying influence in tech media, and lobbying for liability limits on catastrophic harms as it eyes an IPO. These actions occur against a backdrop of documented internal turbulence and a New Yorker profile that raises fundamental questions about Sam Altman's trustworthiness with world-altering technology. The single most important reality is that capability advances are outpacing both internal governance and external regulation, leaving critical questions about accountability unanswered.
What outlets missed
Most outlets isolated one thread, whether internal drama, the talk-show purchase, capability claims, or the liability bill, without showing how they form a coherent expansion strategy under regulatory pressure. Few mentioned that SB 3444 conditions protection on published safety reports and non-reckless behavior, that it sunsets upon federal alignment, or that OpenAI frames the bill as risk-reducing rather than purely defensive. Coverage of Pachocki's statements frequently misattributed sources and skipped his explicit caveat that full autonomy for alignment work is not expected this year. The acquired talk show TBPN's projected $30 million revenue, its sponsor list, and OpenAI's commitment to editorial independence received minimal attention, making the acquisition appear more whimsical than calculated. The New Yorker profile's basis in extensive documentation of alleged board concerns was often reduced to personality clashes or ignored in favor of sensational framing, while OpenAI's testimony emphasizing U.S. innovation leadership and harmonization went largely unquoted.
OpenAI Seeks Legal Immunity for AI Catastrophes as Its Systems Approach Research Autonomy
OpenAI is accelerating toward artificial intelligence capable of independent research while simultaneously lobbying for sweeping legal protections that would shield the company from liability if those same systems contribute to mass death or economic ruin. The moves, coming as the company prepares for an initial public offering, underscore a pattern of rapid capability gains paired with aggressive efforts to minimize accountability.
On Thursday, OpenAI’s chief scientist Jakub Pachocki told the Unsupervised Learning podcast that recent advances in coding, mathematics, and physics have put the company on track to achieve AI systems that perform at the level of research interns. Pachocki described the key benchmark as the length of time an AI model can operate autonomously on complex, multi-step tasks. OpenAI has publicly set 2026 as the target for “AI research intern” capabilities and 2028 for fully autonomous AI researchers. The implications are profound: systems that require progressively less human oversight could soon tackle sophisticated technical work now performed by highly trained humans.
At the same moment, OpenAI is backing Illinois Senate Bill 3444, legislation that would grant frontier AI developers broad immunity for “critical harms” caused by their models. The bill defines frontier models as those trained with more than $100 million in computational resources, a threshold that comfortably includes OpenAI’s latest systems as well as those from Google, Anthropic, Meta, and xAI. Under the legislation, companies would be protected from liability for outcomes including the deaths or serious injuries of 100 or more people, at least $1 billion in property damage, or the use of AI to develop chemical, biological, radiological, or nuclear weapons. The only exceptions are cases in which the company acted intentionally or recklessly; otherwise, a firm qualifies for protection simply by publishing safety, security, and transparency reports.
AI policy experts described the measure as more extreme than previous industry-backed proposals. OpenAI spokesperson Jamie Radice said in a statement that the approach “focuses on what matters most: reducing the risk of serious harm from the most advanced AI systems while still allowing this technology to get into the hands of the people and businesses of Illinois.” The company also expressed hope that the bill would discourage a “patchwork of state-by-state rules” and encourage national standards. Critics see something simpler: an attempt to write the rules before the public fully grasps the dangers.
This legislative push arrives amid persistent questions about OpenAI’s leadership and corporate culture. A recent New Yorker profile detailed the chaotic episode in which Sam Altman was briefly fired as CEO in 2023 only to be reinstated days later after an employee revolt and pressure from major investors. Since returning, Altman has restructured the organization, converting it from a nonprofit-controlled entity into one increasingly oriented toward commercial returns. The same profile and subsequent discussions on outlets such as The Verge have raised a recurring question: Is a serial dealmaker with a history of stretching the truth the right person to steer technology that could reshape society at a fundamental level?
OpenAI’s recent purchase of a Silicon Valley streaming talk show has only heightened skepticism. The acquisition, reported as costing millions, is widely viewed inside tech circles as a PR maneuver to shape the narrative around the company at a time when it faces growing scrutiny. As Slate’s “What Next TBD” podcast asked, is this the behavior of a serious research organization or the indulgence of wealthy executives who believe their own hype? The purchase coincides with OpenAI’s IPO preparations, suggesting the company is focused as much on managing perception and valuation as on the safety implications of systems that are approaching intern-level autonomy.
The contrast is stark. On one hand, OpenAI’s own executives boast of progress toward AI that can reason, code, and investigate with minimal supervision. On the other, the company is pressing for legislation that would largely absolve it of legal responsibility should those systems, or others built on similar foundations, enable catastrophic outcomes. The Illinois bill would apply even if an AI model autonomously commits what would be a criminal offense for a human, provided the company published the required reports and did not explicitly intend the harm.
Such provisions worry safety advocates who argue that frontier AI labs are racing ahead without adequate external oversight. The public has already seen how today’s models can generate convincing disinformation, assist in cyberattacks, or automate aspects of biological weapon design. If OpenAI’s own timeline holds, far more capable systems are only months or a few short years away. Granting legal immunity now, critics contend, removes one of the few remaining incentives for rigorous safety practices.
OpenAI’s dual track of capability acceleration and liability limitation reflects a broader industry posture: extraordinary ambition paired with resistance to external constraints. As the technology edges closer to autonomous research capacity, the company’s willingness to accept commensurate responsibility appears to be moving in the opposite direction. Whether lawmakers in Illinois or Congress ultimately accept this bargain will help determine if the AI era is shaped by meaningful accountability or by the preferences of a small group of executives racing toward an initial public offering.