Italy Targets DeepSeek in AI Hallucination Probe

Italy's antitrust watchdog AGCM has launched an investigation into Chinese AI startup DeepSeek for allegedly failing to adequately warn users about AI hallucination risks. The regulator claims DeepSeek did not provide sufficiently clear warnings about the possibility of its AI generating inaccurate or misleading information. This follows a February action where Italy's data protection authority blocked DeepSeek's chatbot over privacy concerns.
Italy's antitrust regulator AGCM announced on Monday that it has opened a formal investigation into Chinese artificial intelligence startup DeepSeek, marking a significant regulatory action against AI hallucinations.

The investigation centers on allegations that DeepSeek failed to provide users with "sufficiently clear, immediate and intelligible" warnings about the risk of hallucinations in its AI-generated content. AGCM defined these hallucinations as "situations in which, in response to a given input entered by a user, the AI model generates one or more outputs containing inaccurate, misleading or invented information."

This probe represents one of the first major regulatory actions specifically targeting AI hallucinations, a growing concern as generative AI systems become more widespread. The investigation comes amid increasing global scrutiny of AI transparency and misinformation risks, with regulators worldwide developing frameworks to address these challenges.

DeepSeek, founded in late 2023 by Chinese hedge fund manager Liang Wenfeng, has rapidly emerged as a significant player in the global AI landscape. The company gained international attention earlier this year when it released AI models that reportedly matched the capabilities of leading Western competitors at a fraction of the cost, triggering market volatility in the tech sector.

This isn't DeepSeek's first encounter with Italian regulators. In February 2025, Italy's data protection authority ordered the company to block access to its chatbot after DeepSeek failed to address privacy policy concerns. At that time, DeepSeek reportedly claimed it did not operate in Italy and that European regulations did not apply to its operations.

As the EU AI Act undergoes phased implementation throughout 2025, with transparency requirements for general-purpose AI systems taking effect on August 2, this case could set important precedents for how European regulators approach AI hallucination issues under the new framework.

Source: Reuters