Schmidt Warns AI Arms Race Could Trigger Data Center Attacks

Former Google CEO Eric Schmidt has cautioned that the escalating AI arms race between the U.S. and China could lead to global conflict over data centers and critical resources. In a recent TED Talk, Schmidt presented a hypothetical scenario where nations might resort to sabotage or even physical attacks on rival AI infrastructure to prevent being overtaken in the race for superintelligence. As an alternative to a dangerous 'Manhattan Project' approach to AI development, Schmidt and colleagues propose a framework called Mutual Assured AI Malfunction (MAIM) to deter unilateral dominance.
The global race to achieve superintelligent AI systems is creating geopolitical tensions that could escalate from technological competition to actual conflict, according to alarming warnings from former Google CEO Eric Schmidt.

In his May 2025 TED Talk, Schmidt outlined how the AI arms race between the United States and China increasingly resembles the nuclear standoff of the Cold War era. He warned that if one nation begins to pull ahead in developing superintelligent systems, the trailing country might resort to increasingly desperate measures—including sabotage or even bombing data centers—to prevent being permanently overtaken.

Schmidt highlighted that China's open-source AI development approach poses a strategic risk to the U.S., which currently favors closed, proprietary AI models. "Because China shares its AI advances openly, the U.S. benefits from them but risks falling behind in a global open-source race," Schmidt explained. This dynamic could fuel a winner-take-all competition in which the first country to achieve superintelligence gains irreversible dominance through network effects.

Rather than pursuing a dangerous 'Manhattan Project' approach to AI development, Schmidt and co-authors Alexandr Wang and Dan Hendrycks proposed a framework called Mutual Assured AI Malfunction (MAIM) in their March 2025 paper. This cyber-centric deterrence model, inspired by Cold War principles, would establish that any aggressive bid for unilateral AI dominance triggers preventive countermeasures by rivals.

"What begins as a push for a superweapon and global control risks prompting hostile countermeasures and escalating tensions, thereby undermining the very stability the strategy purports to secure," Schmidt and his colleagues wrote. The stakes are existential—scenarios range from cyber sabotage targeting AI infrastructure to preemptive strikes resembling Cold War brinkmanship.

While some critics, including National Security Advisor Evelyn Green, argue that MAIM lacks enforceable mechanisms compared to nuclear non-proliferation treaties, Schmidt maintains that deterrence, combined with transparency and international cooperation, offers the best path forward in managing the unprecedented risks of superintelligent AI systems.

Source: Naturalnews.com