YouTube, owned by Google, has taken down more than 1,000 scam ad videos that used deepfake technology, in which AI made it appear that celebrities such as Taylor Swift, Steve Harvey, and Joe Rogan were promoting Medicare scams. The videos, which had been viewed nearly 200 million times, were removed after a 404 Media investigation exposed the advertising ring behind them.
YouTube’s Response
Acknowledging the issue, YouTube said it has invested significantly in combating AI-generated celebrity scam ads. The platform says it takes the matter seriously and is actively working to prevent the spread of such deepfake content.
Scope of the Problem
The deepfake problem extends beyond YouTube: non-consensual deepfake pornography featuring Taylor Swift recently went viral on another platform, drawing more than 45 million views and 24,000 reposts before being taken down roughly 17 hours later. A report by 404 Media suggests the images may have originated in a Telegram group where users share AI-generated explicit images of women.
Cybersecurity Insight
According to cybersecurity firm Deeptrace, approximately 96% of deepfakes are pornographic, and the overwhelming majority depict women. This underscores the broader challenge of tackling the inappropriate use of AI-generated content online.
Ongoing Concerns
The incident highlights the ongoing challenge of combating deepfake content across online platforms. As the technology evolves, platforms continue to adapt and invest resources to stay ahead of deceptive and harmful practices.
Inputs from IANS