OpenAI Detects Rising Chinese Misuse of ChatGPT for Covert Ops

OpenAI reported on June 5, 2025, that a growing number of Chinese groups are exploiting its AI technology for covert operations, including generating polarizing social media content and supporting cyber activities. While these operations have expanded in scope and tactics, they remain relatively small in scale, with limited audience reach. The findings underscore ongoing concerns about generative AI's potential for misuse in producing human-like text, imagery, and audio for malicious purposes.
OpenAI has identified a growing trend of Chinese groups misusing its artificial intelligence technology for covert operations, according to a report released on June 5. The San Francisco-based company, recently valued at $300 billion following a record $40 billion funding round, has responded by banning multiple accounts linked to these activities.

In one notable case, dubbed "Sneer Review" by OpenAI investigators, operators generated social media posts on politically sensitive topics relevant to China. These included criticism of a Taiwan-centric video game, false accusations against a Pakistani activist, and content related to the closure of USAID. Some posts also criticized U.S. President Donald Trump's tariff policies. The operation was sophisticated enough to produce both posts and replies to those posts, fabricating the appearance of organic engagement.

A second operation involved China-linked threat actors using ChatGPT to support multiple phases of their cyber operations, including open-source research, script modification, troubleshooting of system configurations, and development of tools for password brute-forcing and social media automation. Particularly striking was the discovery that these actors used OpenAI's tools to draft internal documents, including performance reviews describing their own activities.

A third example revealed a China-origin influence operation that generated polarized social media content supporting both sides of divisive topics within U.S. political discourse. This approach appears designed to exploit existing political divisions rather than promote a specific ideological stance. Ben Nimmo, principal investigator on OpenAI's intelligence team, noted that while these operations demonstrate "a growing range of covert operations using a growing range of tactics," they were generally disrupted in early stages before reaching large audiences.

OpenAI's report also mentioned disrupting covert influence operations from other countries, including Russia and Iran, as well as various scams linked to Cambodia and North Korea. The company regularly monitors and reports on malicious activity on its platform as part of its commitment to responsible AI development.

China's foreign ministry has not responded to requests for comment on OpenAI's findings.
