OpenAI Ex-Scientist Planned Bunker for Post-AGI World

Former OpenAI chief scientist Ilya Sutskever proposed building a doomsday bunker to protect researchers from potential dangers following the creation of artificial general intelligence (AGI). The revelation, detailed in Karen Hao's new book "Empire of AI," highlights Sutskever's deep concerns about AGI's existential risks, concerns that ultimately contributed to his departure from OpenAI and the founding of Safe Superintelligence Inc.

In the summer of 2023, during a meeting with OpenAI researchers, then-chief scientist Ilya Sutskever made a startling declaration: "We're definitely going to build a bunker before we release AGI." This revelation, first reported in Karen Hao's recently published book "Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI," offers a glimpse into the apocalyptic concerns harbored by one of AI's most influential figures.

Sutskever, who co-founded OpenAI and served as its chief scientist until May 2024, believed researchers would need protection once they achieved artificial general intelligence—AI systems with human-level cognitive capabilities. According to sources cited in Hao's book, Sutskever's bunker proposal had a dual purpose: to shield key scientists from the geopolitical chaos that might follow AGI's release and potentially serve as a staging ground to influence how superintelligent systems would evolve.

"There is a group of people—Ilya being one of them—who believe that building AGI will bring about a rapture," one researcher told Hao, referring not metaphorically but to a literal world-changing event. Sutskever reportedly assured colleagues that entering the bunker would be "optional," but his matter-of-fact approach to such extreme preparations alarmed many within the organization.

The bunker proposal emerged amid growing tensions at OpenAI over the company's direction. Sutskever and others worried the organization was prioritizing commercial expansion over safety protocols—concerns that eventually led to the failed attempt to oust CEO Sam Altman in November 2023. Following Altman's reinstatement, Sutskever left OpenAI in May 2024 to found Safe Superintelligence (SSI) with Daniel Gross and Daniel Levy.

SSI, which has raised $3 billion and reached a valuation of $32 billion as of April 2025, represents Sutskever's continued commitment to AI safety. Unlike OpenAI's diversified approach, SSI focuses exclusively on developing safe superintelligence, with Sutskever stating, "Our first product will be the safe superintelligence, and we won't do anything else until then."

The contrast between Sutskever's cautious approach and Altman's more optimistic outlook highlights an ideological divide within the AI community. While Sutskever prepared for potential catastrophe, Altman has suggested AGI will arrive with "surprisingly little societal impact." As the race toward superintelligence accelerates, these competing visions continue to shape how humanity approaches what could be its most consequential technological development.

Source: Naturalnews.com