In what security experts are calling a watershed moment for AI-powered cybersecurity, Google announced on July 15, 2025, that its 'Big Sleep' AI agent successfully prevented the exploitation of a critical SQLite vulnerability that was known only to threat actors.
The vulnerability, tracked as CVE-2025-6965 with a CVSS score of 7.2, is a memory corruption flaw affecting all SQLite versions prior to 3.50.2. According to SQLite project maintainers, "An attacker who can inject arbitrary SQL statements into an application might be able to cause an integer overflow resulting in read off the end of an array."
What makes this case remarkable is how Google's AI system not only detected the vulnerability but also predicted its imminent exploitation. "Through the combination of threat intelligence and Big Sleep, Google was able to actually predict that a vulnerability was imminently going to be used and we were able to cut it off beforehand," said Kent Walker, President of Global Affairs at Google and Alphabet.
Big Sleep represents the evolution of Google's AI security capabilities, developed through a collaboration between Google DeepMind and Google Project Zero. The system was first announced in 2024 as Project Naptime before evolving into its current form. In November 2024, it discovered its first real-world vulnerability, but this marks the first time it has actively prevented an exploit attempt.
The implications extend beyond Google's own security infrastructure. The company is now deploying Big Sleep to help improve the security of widely used open-source projects, potentially transforming how vulnerabilities are detected and mitigated across the internet. Security researchers note this represents a shift from reactive to proactive cybersecurity defense, where AI systems can identify threats before they materialize.
"These cybersecurity agents are a game changer, freeing up security teams to focus on high-complexity threats, dramatically scaling their impact and reach," Google stated in its announcement. The company has also published a white paper outlining its approach to building AI agents that operate with human oversight while safeguarding privacy and mitigating potential risks.