Report: AI-generated fake and manipulated content about Black celebrities is spreading on YouTube
The NBC News report coincided with a wave of media scrutiny concerning AI-generated pornographic images of pop star Taylor Swift
Written by Natalie Mathes
On January 30, NBC News reported that it had reviewed 12 YouTube accounts that have been using “a mix of artificial intelligence-generated and manipulated media to create fake content” about dozens of Black celebrities, including actor Denzel Washington and rapper Sean “Diddy” Combs, with some videos generating millions of views.
NBC News’ Kat Tenbarge reported that creators have apparently been using “manipulated media to depict the celebrities engaging in lewd acts and exaggerated displays of emotion in video thumbnails.” According to Tenbarge, the “median number of combined views for each channel was 21 million.”
Angelica Nwandu, the founder and CEO of The Shade Room, a media company that NBC News described as “one of the largest Black culture news sources online,” told NBC News that AI-generated and other misinformation about Black celebrities has increasingly become a problem over the past year:
“We’ve seen these pages that pop up on YouTube or TikTok, and they will have an AI-generated picture of Rihanna crying over A$AP [Rocky] going to jail, and it’s completely fake,” Nwandu said. “Our audience will DM and say ‘Why aren’t you posting this news?’ ‘Why aren’t you covering this story?’ Because they believe these pages.”
…
“They use the jargon of the culture, the slang of the culture, because Black people trust Black media,” Nwandu said. “There has been a long-standing distrust with mainstream media in the way that our stories are told.”
According to Tenbarge, the fake content is successful in part because it adds fictional elements to “real, shocking and scandalous” news related to Black celebrities.
The videos about Black celebrities often tie back to real, shocking and scandalous events in the news. By remixing real news with false information and allegations, the videos are able to quickly gain traction by appearing to provide new information about topics that are already attracting attention.
…
YouTube announced in November that it plans to enforce a new policy requiring labels for synthetic and manipulated media in videos. That system isn’t yet in place.
The report coincided with a wave of intense media scrutiny of AI-generated pornographic images of pop star Taylor Swift that spread across X (formerly Twitter), Instagram, and Facebook.
404 Media traced the sexually explicit AI-generated images of Swift back to 4chan and a Telegram channel that is “dedicated to making non-consensual AI generated sexual images of women.” Members of this group, they wrote, “generate similarly explicit images of dozens of female celebrities, not just Swift.” (Media Matters previously reported that users on 4chan have used Microsoft AI tools to develop and disseminate other disturbing or hateful images, including of Swift.)
The rising prevalence of AI-generated content across social media poses a number of challenges. AI researchers have observed that study participants tend to “overestimate” their own abilities to detect deepfaked audio and video. Meanwhile, developers of AI tools have admitted that, in some cases, they have difficulty discerning whether a creator used their software to generate misleading media. Platforms have also struggled to consistently moderate misinformation in general, often allowing it to spread widely as users replicate and reshare the content.
On the 404 Media Podcast, Samantha Cole and Emanuel Maiberg noted that Taylor Swift’s “visibility might help raise awareness” of the issue of AI tools being weaponized against people, and that such abuse “happens to minor celebrities, it happens to normal people, it happens to them every day — we talk to these people every day — and the White House does not mobilize itself for an Instagram influencer who was being deepfaked, or just a random person.”
As YouTube veteran Hank Green explained, “These models are just going to keep getting better,” and whatever “tricks” people are “using right now to figure out if an image is created by artificial intelligence, you will not be able to use those tricks in even one year.” He added, “We are not going to closely scrutinize each one of the hundreds of images that we come across every day.”