AI Deepfakes Flood Social Media with Bogus Health Scams

A troubling wave of AI-generated videos is proliferating across platforms like TikTok, promoting unproven sexual treatments and supplements through sophisticated deepfake technology. These videos, which often feature AI-generated personas or celebrity impersonations, are part of what researchers describe as an 'AI dystopia' designed to manipulate consumers into purchasing questionable products. The Federal Trade Commission has responded with proposed regulations to combat this growing threat to consumer safety and online trust.
Social media platforms are being inundated with AI-generated videos peddling dubious health products, particularly unproven sexual treatments, creating what experts call a deception-filled online environment designed to manipulate vulnerable consumers.

Researchers at Cornell Tech have documented a surge of 'AI doctor' avatars on TikTok promoting questionable sexual remedies, with some videos garnering millions of views. These videos typically feature muscular shirtless men or AI-generated personas using euphemistic language to evade content moderation, directing users to purchase supplements with exaggerated or entirely fabricated claims.

More alarmingly, rapidly evolving AI tools have enabled the creation of deepfakes impersonating celebrities and public figures. Resemble AI, a Bay Area firm specializing in deepfake detection, has identified numerous videos where public figures like Anthony Fauci and actors Robert De Niro and Amanda Seyfried appear to endorse unproven treatments. These manipulated videos are often created by modifying existing content with AI-generated voices and sophisticated lip-syncing technology.

'As seen in this example, misleading AI-generated content is being used to market supplements with exaggerated or unverified claims, potentially putting consumers' health at risk,' said Zohaib Ahmed, Resemble AI's chief executive, highlighting the consumer safety concerns these scams present.

The Federal Trade Commission has responded to this growing threat by proposing new regulations specifically targeting AI-enabled impersonation fraud. In February 2024, the FTC published a supplemental notice of proposed rulemaking that would prohibit the impersonation of individuals in commerce and potentially extend liability to AI platforms that knowingly provide tools used in such scams.

'Fraudsters are using AI tools to impersonate individuals with eerie precision and at a much wider scale. With voice cloning and other AI-driven scams on the rise, protecting Americans from impersonator fraud is more critical than ever,' said FTC Chair Lina M. Khan.

The speed at which short-form AI videos can be generated presents unique challenges for content moderation: even when platforms remove questionable content, near-identical versions quickly reappear. Researchers say this creates a 'whack-a-mole' situation requiring more sophisticated detection tools and novel regulatory approaches to effectively combat the growing threat of AI-powered scams.

Source: NBC Right Now