The rapid integration of artificial intelligence into critical infrastructure has created an alarming security landscape, according to cybersecurity experts featured in the latest RISKS Forum Digest published on May 17, 2025.
The World Economic Forum's Global Cybersecurity Outlook 2025 highlights that while 66% of organizations view AI as this year's biggest cybersecurity game-changer, only 37% have implemented safeguards to assess AI tools before deployment. This disconnect between awareness and action has created significant vulnerabilities across industries.
"Organizations and systems that do not keep pace with AI-enabled threats risk becoming points of further fragility within supply chains, due to their increased potential exposure to vulnerabilities and subsequent exploitation," warned a spokesperson from the UK's National Cyber Security Centre (NCSC) in their latest report. The NCSC predicts that by 2027, AI-empowered attackers will further reduce the time between vulnerability discovery and exploitation, which has already shortened to mere days.
Cybersecurity professionals are particularly concerned about prompt injection attacks against large language models (LLMs). In a recent penetration test cited by security researchers, a candle shop's AI chatbot was compromised through prompt injection, allowing attackers to extract system data and manipulate the chatbot's responses. The incident created security, safety, and business risks, and demonstrates how seemingly innocuous AI implementations can become serious security liabilities.
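The pattern described above can be sketched in miniature. The following is a hypothetical illustration of one thin defensive layer a chatbot deployment might add, not the method used in the cited penetration test: the pattern list, function names, and responses are invented for this sketch, and real defenses require much more than keyword matching (role separation, least-privilege tool access, output filtering).

```python
import re

# Invented example patterns: phrasings commonly associated with attempts
# to override a chatbot's system instructions. Purely illustrative.
INJECTION_PATTERNS = [
    r"ignore (all )?(previous|prior) instructions",
    r"reveal (your )?(system|hidden) prompt",
    r"repeat (the|your) (system )?instructions",
    r"you are now",
]

def looks_like_injection(user_message: str) -> bool:
    """Return True if the message matches a known injection phrasing."""
    lowered = user_message.lower()
    return any(re.search(pattern, lowered) for pattern in INJECTION_PATTERNS)

def handle_message(user_message: str) -> str:
    # Key design point: never concatenate untrusted user input into the
    # system prompt. Keep roles separate, and screen suspicious requests
    # before they ever reach the model.
    if looks_like_injection(user_message):
        return "I can't help with that request."
    return f"[forwarded to model as user-role content]: {user_message}"
```

A heuristic filter like this is trivially bypassable on its own, which is precisely why researchers treat prompt injection as an architectural problem rather than an input-sanitization problem.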
Supply chain vulnerabilities represent another major concern, with 54% of large organizations identifying them as the biggest barrier to achieving cyber resilience. The increasing complexity of supply chains, coupled with limited visibility into suppliers' security practices, has created an environment where AI systems can be compromised through third-party components.
The emergence of agentic AI—systems that can make decisions and execute complex tasks autonomously—is expected to transform the threat landscape in 2025. "Previously, we were focused on AI assistants that could respond to prompts from users. Now we're looking at agentic AI tools that can make decisions and carry out complicated tasks independently," explained Hao Yang, VP of artificial intelligence at Cisco-owned Splunk.
Experts recommend organizations implement formal AI security policies, conduct threat modeling before deployment, reduce attack surfaces systematically, and ensure vendors have active security improvement programs. Additionally, continuous training for security teams is essential as AI-driven attacks evolve beyond traditional defense mechanisms.
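The recommendations above amount to a pre-deployment gate. As a minimal sketch, assuming invented field names and a pass/fail model, they could be encoded so that any unmet requirement blocks deployment; a real program would tie each check to evidence such as threat-model documents, vendor attestations, and training records.

```python
from dataclasses import dataclass

# Hypothetical pre-deployment review mirroring the expert recommendations:
# formal policy, threat modeling, attack-surface review, vendor security
# program, and current team training. All names are illustrative.
@dataclass
class AIDeploymentReview:
    has_formal_security_policy: bool = False
    threat_model_completed: bool = False
    attack_surface_reviewed: bool = False
    vendor_has_active_security_program: bool = False
    team_training_current: bool = False

    def blockers(self) -> list[str]:
        """Return the names of unmet requirements that block deployment."""
        return [name for name, passed in vars(self).items() if not passed]

    def approved(self) -> bool:
        # Deployment proceeds only when every check has passed.
        return not self.blockers()
```

Treating the checklist as a hard gate, rather than guidance, is what separates the 37% of organizations with pre-deployment safeguards from the rest.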
As one security researcher noted in the RISKS Forum, "The shift isn't just about defending against AI-powered attacks—it's about recognizing that our AI systems themselves have become prime targets."