AI Competition Accelerates Across Military, Jobs, Web and Security
Nations are escalating competition in advanced AI development, raising cybersecurity and strategic concerns. Vulnerabilities in widely used operating systems, exposed by models such as Anthropic's Mythos, have prompted global warnings, and partnerships are forming to address risks amid rapid innovation.
PoliticalOS
Sunday, April 12, 2026 — Tech
The AI race is not a single contest but a convergence of military, economic, informational and security pressures that no nation or company can manage in isolation. Competition is delivering genuine capability gains, yet it is simultaneously surfacing vulnerabilities in operating systems, labor markets and the open web that require coordinated standards rather than unilateral acceleration. The central point is that meaningful guardrails, transparency requirements and shared infrastructure for safety testing must advance in parallel with the technology itself if the net outcome is to remain positive.
What outlets missed
Most outlets examined isolated slices of the AI competition but rarely connected military autonomy programs with labor-market data, web-ecosystem strain and newly disclosed cybersecurity vulnerabilities. AI models including Anthropic's Mythos exposed multiple zero-day flaws in widely used operating systems in early 2026, triggering formal alerts from the U.S. Cybersecurity and Infrastructure Security Agency and prompting accelerated international information-sharing agreements that received almost no coverage. Emerging public-private partnerships, such as expanded U.S.-UK testing infrastructure for dangerous capabilities and industry-wide commitments to watermarking synthetic content, were omitted despite their direct relevance to mitigating race dynamics. Outlets also underreported U.S. advantages in foundational chip design and the fact that many advertised autonomous weapons still require human confirmation for lethal force, softening the nuclear-analogy narrative. Finally, verifiable net job creation in AI-adjacent fields and measurable improvements in several companies' crawl-to-refer ratios over the past nine months were minimized or ignored, leaving readers with an incomplete risk-benefit picture.
AI Companies Devour the Web and Jobs as Global Powers Race Toward Autonomous Warfare
As artificial intelligence reshapes nearly every corner of society, its costs are landing hardest on those least equipped to bear them. American college graduates are confronting the worst entry-level job market since the pandemic, with underemployment hitting 42.5 percent, the highest level since 2020. At the same time, AI developers are strip-mining the internet for data while returning almost nothing of value, and the technology is accelerating a dangerous military competition between the United States, China, and Russia that officials quietly compare to the dawn of the nuclear age.
Gillian Frost, a 22-year-old quantitative economics major at Smith College, has applied to more than 90 jobs since last September. She spends entire weekends on applications, only to be ghosted by a quarter of employers and auto-rejected by more than half. A handful of interviews have led nowhere, with many companies failing to send even basic rejection notices. “I feel helpless,” Frost told The Guardian. “No one seems to know how best to prepare due to the unique conflux of events occurring. How do you prepare for a tight labor market coinciding with the emergence of AI and direct US involvement in war?”
Her experience is not isolated. Jeff Kubat, 31, returned to school for a master’s in accounting after eight years in accounts payable. He expected the degree to open doors. Instead he has found the same barriers. The combination of economic caution, shifting employer demands, and AI tools that can now draft reports, analyze data, and generate proposals has shrunk the rung of the ladder that used to welcome new graduates. Employers who once hired juniors to perform routine cognitive work increasingly hand those tasks to chatbots, leaving young workers with fewer ways to gain the experience that leads to better roles.
This displacement is not accidental. It flows directly from the business model of the AI industry, which consumes vast amounts of online content to train and improve its systems but contributes almost nothing back. Cloudflare, which handles traffic for roughly 20 percent of the internet, has published striking data on what it calls the crawl-to-refer ratio. The numbers reveal how much AI companies take versus how often they send human users back to the original websites.
Anthropic, the company that markets itself as the ethical alternative in artificial intelligence, leads the pack with a ratio of 8,800 to 1. Its bots crawl pages 8,800 times for every single referral sent. OpenAI follows at 993 to 1. By comparison, Microsoft, Google, and DuckDuckGo appear far more balanced. The pattern upends the implicit bargain that sustained the web for decades: creators publish material, search engines and social platforms send traffic in return, and everyone benefits. AI chatbots break that bargain. They read, summarize, and repackage content without driving visitors, starving the publishers, journalists, and artists whose work makes the systems possible.
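To make the metric concrete, here is a minimal sketch of how a crawl-to-refer ratio could be computed from traffic logs. The function and the counts are hypothetical illustrations chosen to reproduce the ratios cited above, not Cloudflare's actual methodology or data.

```python
# Illustrative sketch only: a crawl-to-refer ratio divides how many pages a
# company's bots fetch by how many human visitors it refers back to the source.
# The counts below are invented to match the ratios quoted in the article.

def crawl_to_refer_ratio(crawls: int, referrals: int) -> float:
    """Pages crawled per human referral sent back to the original site."""
    if referrals == 0:
        # Pure extraction: content is consumed but no traffic ever returns.
        return float("inf")
    return crawls / referrals

# Hypothetical monthly counts (crawled pages, referrals sent).
observed = {
    "Anthropic": (8_800_000, 1_000),   # reproduces the cited 8,800:1
    "OpenAI": (993_000, 1_000),        # reproduces the cited 993:1
    "SearchEngine": (5_000, 1_000),    # a more balanced, search-like profile
}

for company, (crawls, referrals) in observed.items():
    ratio = crawl_to_refer_ratio(crawls, referrals)
    print(f"{company}: {ratio:,.0f} pages crawled per referral")
```

The higher the ratio, the more lopsided the exchange: a traditional search engine sits near the bottom of this scale because each crawl is likely to eventually produce a click back to the publisher.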
The hypocrisy is particularly glaring at Anthropic. Led by Dario Amodei, the company has cultivated an image of responsibility that has attracted users and investors wary of less scrupulous competitors. Yet its crawling behavior suggests the priority remains scale above all else. When even self-described ethical leaders extract value at such disproportionate rates, the entire industry’s claims about societal benefit deserve scrutiny.
Enterprises are noticing. A growing divide has opened between the most powerful “frontier” models controlled by a handful of companies and the practical needs of businesses. OpenAI and Anthropic insist they do not train on enterprise data passed through their APIs, but both have faced repeated copyright lawsuits and questions about data practices. For companies handling sensitive intellectual property or customer information, the risk is unacceptable. Instead, many are turning to open-weight models released in recent months by Google, Microsoft, Alibaba, and Nvidia. These systems, once dismissed as research curiosities, have matured into serious enterprise tools. IDC research director Andrew Buss described the shift: “We’ve moved from interesting to now serious enterprise platforms.”
The same technology driving job scarcity and web exploitation is also being rapidly militarized. In September, Chinese forces paraded autonomous drones capable of flying alongside fighter jets, an exhibition attended by President Xi Jinping alongside Vladimir Putin and Kim Jong-un. The display alarmed Pentagon officials, who determined the United States had fallen behind in unmanned combat systems. In response, the defense contractor Anduril accelerated production of its AI-powered Fury drone at a new factory outside Columbus, Ohio, beginning output three months ahead of schedule.
What is unfolding is an arms race in artificial intelligence-guided weapons. Both the United States and China are pouring resources into autonomous systems that can make life-and-death decisions with limited human oversight. Russia is also advancing its capabilities. The parallels to the early nuclear era are uncomfortable but increasingly common in private briefings. The difference is that nuclear weapons required massive state infrastructure, while AI capabilities can be developed by private companies and then absorbed into military programs. The result is a proliferation of powerful tools whose strategic and ethical implications have received far less public debate than their commercial applications.
Young workers like Frost and Kubat are navigating this triple crisis: an economy that no longer needs as many human minds for entry-level cognitive labor, an information ecosystem being hollowed out by the same technology, and great-power competition that treats AI as the next decisive weapon. Previous generations faced discrete shocks. This one confronts economic dislocation, technological upheaval, and geopolitical tension simultaneously.
The AI industry continues to promise transformation and abundance. The evidence on the ground suggests a more unequal reality. Data and creative work are being extracted at massive scale with minimal compensation or traffic returned. Routine professional tasks are being automated, squeezing the pipeline for new graduates. And the most advanced systems are being turned toward military dominance rather than human flourishing. Whether any meaningful guardrails will emerge before the damage deepens further remains an open and increasingly urgent question.