Google DeepMind CEO Demis Hassabis believes artificial general intelligence (AGI) will likely arrive just before or after 2030, according to statements made during a major media appearance on May 21, 2025.
During the event, which included a surprise appearance by Google co-founder Sergey Brin, Hassabis discussed the timeline for achieving AGI—widely understood as AI that matches or surpasses most human capabilities. While Brin predicted AGI would arrive just before 2030, Hassabis suggested it might come slightly after, joking that Brin only has to call for the advance to happen, while Hassabis has to figure out how to actually deliver it.
Hassabis, who won a share of the 2024 Nobel Prize in Chemistry for his work on AlphaFold, acknowledged the incredible progress made in AI over the last few years. "We've been working on this for more than 20 years, and we've had a consistent view about AGI being a system that's capable of exhibiting all the cognitive capabilities humans can," he stated. "I think we're getting closer and closer, but we're still probably a handful of years away."
The DeepMind CEO predicted the industry will need a couple more big breakthroughs to reach AGI, noting that reasoning approaches recently unveiled by Google, OpenAI and others may represent part of one such breakthrough. These reasoning models don't respond to prompts immediately but instead perform additional computation before producing an answer—"Like most of us, we get some benefit by thinking before we speak," as Brin put it.
During the interview, Hassabis showcased Project Astra, a research prototype exploring breakthrough capabilities for Google products on the path to building a universal AI assistant. Some of Astra's capabilities have already been integrated into Gemini Live over the past year, including screen sharing and video understanding.
Hassabis also demonstrated Genie 2, an AI model that can generate, from a single static image, a 3D world that a human player or AI agent can explore. In one example, Genie 2 converted a photograph of a waterfall in California into an interactive 3D environment that could be navigated like a video game.
Another system highlighted was SIMA (Scalable Instructable Multiworld Agent), an AI agent capable of following natural-language instructions to carry out tasks in various video game settings. Hassabis noted that as Google's multimodal models grow more capable and develop a better understanding of the world and its physics, they are enabling significant new advances in robotics.
Hassabis emphasized that Google continues to double down on fundamental research, working to invent the next big breakthroughs necessary for AGI. The company is extending its Gemini 2.5 Pro model to become a "world model" that can make plans and imagine new experiences by understanding and simulating aspects of the world, similar to how the human brain functions.
When asked about ethical considerations, Hassabis stressed, "Nothing's changed about our principles. The fundamental thing about our principles has always been: we've got to thoughtfully weigh up the benefits, and they've got to substantially outweigh the risk of harm. So that's a high bar for anything that we might want to do. Of course, we've got to respect international law and human rights—that's all still in there."