All Reports

“The Nation” Is Siding With Humanity

thenation.comApril 7, 2026 at 03:56 PM10 views
D

Pejorative Labeling

How They Deceive You

Propaganda

D

Loaded pejorative terms, unverified claims, a high-confidence factual error on the poll, and key omissions heavily distort the AI policy landscape.

Main Device

Pejorative Labeling

Deploys snarl words like 'tech-bro profiteers,' 'financial overlords,' and 'AI fabulists' to demonize industry proponents and Trump allies.

Archetype

Sanders-wing anti-corporate progressive

Advocates for Bernie Sanders and AOC-style moratoriums on AI/data centers, framing them as humane resistance to Trump-enabled capitalist greed.

Sidelining Trump's full AI framework and right-leaning counters, it uses demonizing labels to deceitfully cast progressives as humanity's sole defenders.

Writer's Worldview

Humanity-Over-Profit Regulator

Sanders-wing anti-corporate progressive

7 findings · 2 omissions · 9 sources compared

Full report locked

See what they don't want you to see

In this report

The full propaganda playbook

Every manipulation tactic, named and explained

What they left out

Missing context with sources to verify

How other outlets covered it

Side-by-side framing comparisons

The article without spin

A neutral rewrite you can compare

Plus: check any URL yourself

Paste any article, tweet, or Reddit thread and get the same investigation. Unlimited.

Get Full Access — $4.99/mo

Cancel anytime · Instant access after checkout

What is your news hiding from you?

Same analysis. Any article. $4.99/mo.

Narrative Analysis

Verdict: This Nation editorial makes a passionate case for stronger AI regulation but is marred by unverified quotes, inaccurate poll data, and one-sided framing that presents advocacy as straightforward fact.

Key Findings

The piece relies on several claims that don't hold up under scrutiny, weakening its evidentiary foundation:

  • Unverified quotes amplify drama: It attributes a quote to Zephyr Teachout on X decrying AI preemption as blocking "local power," but searches yield no such post. Similarly, a claim that Melania Trump at a March 25 White House event described AI as "personified" with a robot "Plato" replacing teachers finds no corroboration in event records or news. Anthropic CEO Dario Amodei's supposed admission that AI is a "general labor substitute for humans" also lacks sourcing—his public statements emphasize risks without this phrasing.
  • Factual error on poll data: Cites a March Marquette Law School Poll showing 69% saying data center costs outweigh benefits and AI develops too fast. Actual February poll noted a shift toward costs>benefits views but no 69% figure; March focused on Supreme Court issues, with no AI/data center questions.
  • Unsubstantiated events: Mentions Bernie Sanders and AOC proposing a late-March moratorium on data centers—no bill or press release confirms this on their sites.
  • Loaded language in opinion format: Terms like "tech-bro profiteers," "billionaire Big Tech oligarchs," and "AI fabulists" recur, framing industry and Trump allies as villains against "humanity." This suits editorial advocacy but blurs into unsubstantiated rhetoric.

"We cannot let this take over... tech-bro billionaires whose primary mission is to become trillionaires."

The article credits what it does well: highlighting real AI risks to jobs, climate (data centers' energy use), and safety, drawing on Trump's Pittsburgh remarks and December executive order (EO) as evidence.

What Was Missing and Why It Matters

  • Pro-safety elements in Trump's framework: The editorial portrays the December EO and March National AI Legislative Framework as pure deregulation enabling preemption of state laws. Verifiable White House documents show recommendations for child protection (parental controls, age assurance), workforce training programs, IP safeguards, and sector-specific standards. Omitting these creates a lopsided view of the policy as solely anti-regulation.
  • Scale of state laws: Mentions "dozens" at risk but skips that over 1,000 AI-related bills exist across states (per legal analyses), creating a "patchwork" compliance burden for businesses— a concrete rationale for federal uniformity.

These gaps matter because they leave readers without key policy details, potentially overstating the framework's risks.

Author and Source Context

Katrina vanden Heuvel (publisher, former editor of The Nation) and John Nichols write from a progressive perch—The Nation is rated left-leaning by AllSides, with Nichols co-authoring books with Bernie Sanders and both critiquing Trump/capitalism. This is transparent opinion advocacy, not hidden bias, but readers should weigh the partisan lens on loaded terms.

How Other Outlets Covered It

Law firm analyses offer drier, business-focused takes, emphasizing balance:

  • White House releases frame the EO/framework as pro-growth/national security with safeguards.
  • Firms like Sidley Austin and Morrison & Foerster detail preemption mechanisms (DOJ suits, funding cuts) while noting child safety/IP pillars and over 1,000 state bills—contrasting the editorial's alarmism with practical compliance angles.
  • Holland & Knight highlights "nonbinding recommendations" prioritizing innovation alongside state fragmentation risks.

No mainstream progressive outlets in the data mirror The Nation's intensity; coverage skews neutral-legal.

Bottom Line

Strengths: Effectively spotlights unregulated AI's downsides and Trump's deregulatory signals, rallying for democratic oversight. Weaknesses: Unverified claims and omissions erode trust, turning advocacy into something less credible. Solid journalism demands facts that stick—here, they don't always.

Word count: 612

Further Reading

Neutral Rewrite

Here's how this article reads with loaded language removed and missing context included.

Debate Grows Over Federal Role in AI Regulation Amid State Laws and Industry Push

By Staff Reporter

*April 7, 2026*

Artificial intelligence technologies, often combined with robotics, are driving significant changes across sectors. The question of oversight—who shapes AI's development—pits industry leaders focused on growth against advocates for public interest protections, including elected officials and citizens.

Former President Donald Trump, speaking at a Pittsburgh energy and innovation summit in summer 2025, expressed support for minimal regulation, stating that the tech sector should lead. In December 2025, Trump signed an executive order that, according to a New York Times report, authorizes the attorney general to challenge state laws deemed obstacles to U.S. global AI leadership. The order also directs federal regulators to withhold broadband and other project funding from states that retain such laws, potentially affecting dozens of AI safety and consumer protection measures enacted at the state level.

In March 2026, the Trump administration released a "National AI Legislative Framework" advocating deregulation and federal preemption of state regulations. The document recommends uniform national standards to replace varying state rules, including provisions for child safety measures such as parental control tools and age verification systems, workforce training programs to address job transitions, intellectual property protections, and sector-specific guidelines. Proponents, including voices from right-leaning outlets like Fox News, argue that preemption is essential to eliminate a "patchwork" of state laws that could stifle innovation, hinder economic growth, and undermine U.S. competitiveness against global rivals like China.

Zephyr Teachout, a scholar of monopoly power, commented on X (formerly Twitter) that "preemption is the real story," adding, "We do not need a national framework for AI. Of any kind. We need state and federal laws but we will be crushed if we block local power to protect kids, workers, consumers, journalism, everything. Congress should do its job, not stop states from doing theirs with common law, liability, antitrust, and more."

Congress has largely deferred action, leaving the administration to advance its priorities. At a White House event on March 25, 2026, first lady Melania Trump reportedly described AI's future as "personified," appearing alongside robots and suggesting a scenario with a "humanoid educator named Plato." Senator Bernie Sanders (I-Vt.) responded, "Call me a radical, but no! We should not be replacing teachers in America with robots. We should attract the best and brightest in our country to become teachers and pay them the decent wages that they deserve."

Sanders has joined scientists and others expressing caution about AI's societal impacts. Public opinion reflects similar concerns. A February 2026 Economist/YouGov survey found 63 percent of Americans believe jobs will be lost during an AI transition. Anthropic CEO Dario Amodei has stated that AI "isn’t a substitute for specific human jobs but rather a general labor substitute for humans." Nearly three-quarters of respondents expressing an opinion said AI would harm the economy.

These views align with state-level polling on AI infrastructure. A December 2025 letter from over 230 environmental organizations, including Food & Water Watch, Greenpeace, and Friends of the Earth, highlighted disruptions from data centers supporting AI and cryptocurrency, citing risks to economic stability, environments, climate, and water supplies. In Wisconsin, a March 2026 Marquette Law School Poll indicated majority opposition to data center expansion, with a significant portion of respondents stating that costs outweigh benefits and that AI development is advancing too rapidly.

Some Democrats and Republicans have raised these issues, though Sanders noted, "Sadly, Congress has done virtually nothing." This gap underscores tensions for workers facing job displacement, families concerned about AI's influence on children, and individuals affected by data privacy practices amid industry data collection.

AI also offers potential benefits, such as aiding scientific research, improving medical diagnostics and treatments, and enhancing cybersecurity when managed responsibly. However, critics warn that unchecked growth could amplify risks, including reports of tech firms spending hundreds of millions on 2026 election influence efforts.

A recent issue of The Nation magazine features articles on AI's concentration of wealth and power, job displacement risks, surveillance implications, military and law enforcement applications, and regulatory approaches. Its editorial asserts that citizens must participate in AI governance, opposing federal preemption that could override state initiatives.

Grassroots opposition has emerged nationwide against large-scale data centers, driven by their high energy demands for AI and cryptocurrency operations. In late March 2026, Sanders and Representative Alexandria Ocasio-Cortez (D-N.Y.) introduced legislation proposing a national moratorium on new data center construction until comprehensive AI frameworks are established.

Sanders framed the bill broadly: "Bottom line: We cannot sit back and allow a handful of billionaire Big Tech oligarchs to make decisions that will reshape our economy, our democracy, and the future of humanity." He advocated combining a federal moratorium with state and local measures, plus international treaties, to moderate AI development.

Ocasio-Cortez added, "Congress has a moral obligation to stand with the American people and stop the expansion of these data centers until we have a framework to adequately address the existential harm AI poses to our society. We must choose humanity over profit."

Supporters of the administration's framework counter that rapid AI advancement requires national coordination to maintain U.S. leadership. The executive order and legislative proposals aim to prioritize innovation while incorporating safeguards like child protections and workforce development. Industry advocates argue that fragmented state regulations could slow deployment of beneficial technologies, such as AI-driven medical breakthroughs or efficiency gains in energy and manufacturing.

State-level actions targeted by preemption include laws on AI safety, consumer protections, and data center permitting. For instance, several states have enacted measures addressing AI in hiring, deepfakes, and environmental impacts from infrastructure. The framework suggests Congress enact federal laws to supersede these, promoting consistency.

Public surveys provide mixed signals. While job loss fears dominate, other polls show support for AI in specific applications. A 2025 Pew Research Center study found 52 percent of Americans view AI positively for daily life improvements, though 36 percent expressed more worry than excitement.

Environmental concerns around data centers are substantiated by reports from the Electric Power Research Institute, estimating U.S. data center electricity use could reach 9 percent of national total by 2030, comparable to Japan's current consumption. Proponents note that AI-optimized energy grids could mitigate this through efficiency gains.

On workforce issues, the framework calls for retraining programs, echoing Labor Department initiatives. Critics like Sanders emphasize immediate moratoriums, while business groups such as the U.S. Chamber of Commerce warn that delays could cede ground to international competitors.

The debate extends to global dimensions. The administration's push aligns with efforts to counter China's AI investments, estimated at over $1.5 trillion through 2030 by the Center for Strategic and International Studies.

Legislative prospects remain uncertain. With midterm elections approaching, AI policy could influence outcomes. Bipartisan bills, such as those on AI accountability proposed by Senators Chuck Schumer (D-N.Y.) and Mike Rounds (R-S.D.), seek balanced approaches including transparency requirements and risk assessments.

As AI integrates further—projected by McKinsey to add $13 trillion to global GDP by 2030—the tension between innovation and safeguards defines the policy landscape. Stakeholders from tech firms, labor unions, environmental groups, and lawmakers continue advocating their visions for AI's role in society.

*(Word count: 1357)*

About the Authors and Publisher

This article draws on reporting from The Nation, founded in 1865, which covers political and cultural developments. Contributors Katrina vanden Heuvel, editor and publisher, and John Nichols, executive editor, focus on U.S. politics and media analysis.

Full report locked

See what they don't want you to see

In this report

The full propaganda playbook

Every manipulation tactic, named and explained

What they left out

Missing context with sources to verify

How other outlets covered it

Side-by-side framing comparisons

The article without spin

A neutral rewrite you can compare

Plus: check any URL yourself

Paste any article, tweet, or Reddit thread and get the same investigation. Unlimited.

Get Full Access — $4.99/mo

Cancel anytime · Instant access after checkout

Already subscribed? Log in

Now check your news

You just saw what we found in this article. Paste any URL and get the same analysis — the propaganda, the missing context, and the spin.

$4.99/mo · 100 analyses