Signals Desk // ai-news — Verified Brief

The Rise of AI Populism: Sanders' Moratorium Bill Exposes a Rift in the Safety Movement

A bill from Sen. Sanders and Rep. Ocasio-Cortez to pause AI development, while unlikely to pass, reveals a deep rift in the AI safety movement, pitting traditional advocates focused on long-term existential risks against populist progressives concerned with immediate socioeconomic harms.

AI Safety · Policy & Regulation

A new bill from Sen. Bernie Sanders and Rep. Alexandria Ocasio-Cortez, the AI Data Center Moratorium Act, calls for a halt to the construction of new data centers until the federal government establishes comprehensive AI safety regulations. While the bill has virtually no chance of passing, it serves as a prism, refracting the growing tensions within the AI safety community: traditional advocates focused on long-term existential risks are diverging from populist progressives concerned with immediate socioeconomic problems.

A 'Progressive Policy Grab Bag'

The Sanders-AOC bill is ambitious. By pausing data center construction, it aims to force legislators to address three major issues: ensuring AI is “safe and effective,” redistributing the economic gains from AI, and preventing AI from driving up electricity prices. However, critics point out that the bill is more of a “grab bag of progressive policies,” a clumsy attempt to solve several distinct problems with a single, blunt instrument.

The bill’s language is extremely vague. For instance, it never defines what would make AI “safe” or “effective.” And while it attempts to keep AI development from migrating to less-regulated countries through broad export controls, that is a stopgap at best; the bill says nothing about how to reach the more durable international agreements a lasting pause would require. In essence, it reads more like a “messaging bill” designed to make a political statement than a well-considered piece of legislation.

Existential Risk vs. Immediate Interests: A Battle for the Narrative

The bill’s most significant aspect is what it reveals about the power structure within the emerging “anti-AI coalition.” The “catastrophic risks” that worry the traditional AI safety community play only a minor role. As analyst Anton Leicht notes, “Environmental and labor groups have more lobbying power and a larger voter base than catastrophic risk advocates, so their issues get prioritized in any trade-offs.”

This division is evident in the statements from Sanders and AOC. Sanders, a recent convert to “AI doom,” spoke at length about existential risk when the bill was released. While AOC also used the word “existential,” she was referring to pressing social issues like deepfakes and soaring electricity costs—not the “we could all die” interpretation. As Sanders prepares to pass the torch of left-wing populism to AOC, it’s a real question whether the focus on AI’s existential risk will be marginalized entirely.

Awkward Allies in a Populist Wave

The friction between traditional AI safety advocates and the new AI populists is becoming more public. Faiz Shakir, an adviser to Sanders, recently accused traditional AI safety proponents of being “too close to AI developers” and deliberately distinguished them from those with a “stronger case for a moratorium.” In primaries in North Carolina and California, safety advocates also found themselves at odds with progressives who were dismissive of existential risk.

This presents a dilemma for groups dedicated to mitigating the long-term risks of AI. In the current political climate, hoping for elite technocrats to implement robust solutions may be naive. Riding the populist wave might be the only way to bring the issue of existential risk into the public discourse. However, that would require building a durable coalition that cares about both catastrophic and short-term risks. For now, the foundation of such an alliance looks very unstable.

Industry Impact: The Long Road to a Safety Consensus

The Sanders-AOC bill marks a shift in the AI governance debate, moving it from the small circles of tech and policy elites into the broader, more complex arena of public politics. For AI professionals and companies, this means future regulatory pressure will no longer be a one-dimensional issue of technical safety. Instead, it will be a complex challenge blending demands for labor rights, environmental protection, and economic fairness.

The power to define AI “safety” is now being contested. A consensus that can unite the various factions appears to be nowhere in sight. How the industry navigates this increasingly fragmented debate will be a critical test for all involved.
