
OpenAI Thwarts 10 State-Backed AI Misuse Campaigns

OpenAI's June 2025 report details how the company disrupted 10 malicious campaigns that exploited ChatGPT for employment scams, influence operations, and spam activities in early 2025. State-sponsored actors from China, Russia, and Iran were behind many of these operations, using AI tools to enhance scams, cyber intrusions, and global influence campaigns. While generative AI hasn't created new threat categories, it has significantly lowered technical barriers for malicious actors and increased the efficiency of coordinated attacks.

OpenAI has released a comprehensive report titled "Disrupting malicious uses of AI: June 2025," revealing how the company identified and neutralized 10 malicious campaigns that exploited its AI systems during the first months of 2025.

The report details how actors from six countries (China, Russia, North Korea, Iran, Cambodia, and the Philippines), many of them state-sponsored, have been leveraging ChatGPT and other AI tools to conduct employment scams, influence operations, and spam campaigns. Four of these campaigns originated in China, focusing on social engineering, covert influence operations, and cyber threats.

In one campaign dubbed "Sneer Review," Chinese actors flooded social platforms with critical comments targeting a Taiwanese board game that included themes of resistance against the Chinese Communist Party. Another operation, "Helgoland Bite," involved Russian actors generating German-language content criticizing the US and NATO while attempting to influence the German 2025 election. North Korean actors were observed using AI to mass-produce fake resumes for remote tech roles, aiming to gain control over corporate devices issued during onboarding.

OpenAI's security teams used AI as a force multiplier for their investigations, enabling them to detect, disrupt, and expose abusive activity, including social engineering, cyber espionage, and deceptive employment schemes. The company's detection systems flagged unusual behavior in each campaign, leading to account terminations and intelligence sharing with partner platforms.
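
The report does not describe OpenAI's detection pipeline, but one signal consistent with the spam and influence campaigns above is an account producing bursts of near-duplicate text. The sketch below is a toy illustration of that idea; the function names, thresholds, and event format are all assumptions for this example, not OpenAI's actual method.

```python
# Illustrative sketch only: a toy heuristic for flagging accounts that
# generate bursts of near-duplicate text, one signal associated with
# coordinated spam and influence operations. Thresholds and the event
# format are assumptions, not OpenAI's actual detection pipeline.
from collections import defaultdict
from difflib import SequenceMatcher


def similarity(a: str, b: str) -> float:
    """Rough lexical similarity in [0, 1] using difflib."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()


def flag_suspicious_accounts(events, burst_size=20, window_secs=3600,
                             dup_threshold=0.9):
    """events: iterable of (account_id, unix_timestamp, text) tuples.

    Flags an account if it produces `burst_size` or more generations
    within `window_secs`, most of which are near-duplicates.
    """
    by_account = defaultdict(list)
    for account_id, ts, text in events:
        by_account[account_id].append((ts, text))

    flagged = set()
    for account_id, items in by_account.items():
        items.sort()  # order by timestamp
        for i in range(len(items) - burst_size + 1):
            window = items[i:i + burst_size]
            if window[-1][0] - window[0][0] > window_secs:
                continue  # spread out over too long a period; not a burst
            texts = [t for _, t in window]
            dups = sum(similarity(texts[0], t) >= dup_threshold
                       for t in texts[1:])
            if dups >= 0.8 * (len(texts) - 1):
                flagged.add(account_id)
                break
    return flagged
```

A production system would combine many such signals (account metadata, network indicators, prompt patterns) rather than relying on content similarity alone, but the burst-of-duplicates heuristic captures the basic shape of the coordinated activity the report describes.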

"We believe that sharing and transparency foster greater awareness and preparedness among all stakeholders, leading to stronger collective defense against ever-evolving adversaries," OpenAI stated in its report. While generative AI hasn't created entirely new categories of threats, it has significantly lowered technical barriers for bad actors and increased the efficiency of coordinated attacks.

Security experts emphasize that organizations must stay vigilant about how adversaries are adopting large language models in their operations. Engaging with the real-time intelligence shared by companies such as OpenAI, Google, Meta, and Anthropic helps build a stronger collective defense against these evolving threats.
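
In practice, consuming shared intelligence often means polling a feed of indicators programmatically. The sketch below queries a hypothetical JSON indicator feed; the endpoint URL and response schema are invented for illustration, since none of the vendors named above publish this exact API.

```python
# Hypothetical sketch: polling a shared threat-intelligence feed for new
# indicators. The endpoint URL and response schema are assumed for
# illustration; adapt them to whatever feed your vendors actually expose.
import requests

FEED_URL = "https://intel.example.com/v1/indicators"  # placeholder endpoint


def fetch_new_indicators(since_iso8601: str) -> list[dict]:
    """Return indicators published after the given ISO-8601 timestamp."""
    resp = requests.get(FEED_URL, params={"since": since_iso8601}, timeout=10)
    resp.raise_for_status()
    payload = resp.json()
    # Assumed schema: {"indicators": [{"type": ..., "value": ...}, ...]}
    return payload.get("indicators", [])


if __name__ == "__main__":
    for ind in fetch_new_indicators("2025-06-01T00:00:00Z"):
        print(ind.get("type"), ind.get("value"))
```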

Source: TechRepublic
