TikTok trolls are creating deepfakes and deceptively editing real users’ videos to promote “transracialism”
TikTok’s community guidelines prohibit deepfakes, inauthentic behavior, and hate speech
Written by Olivia Little & Abbie Richards
A network of inauthentic TikTok accounts claiming to be “transracial” is using stolen videos of young creators and deepfakes -- deceptive videos created or altered with artificial intelligence or machine learning -- to promote transphobic and racist sentiment on the platform. TikTok’s community guidelines prohibit “synthetic or manipulated content that misleads users,” as well as hate speech that dehumanizes a group on the basis of gender identity (examples of so-called “transracialism” are often implicit attacks against trans people).
“Transracialism” is the bogus idea that a person can transition to a different race, often used as a right-wing talking point to invalidate the identities of transgender people. This racist narrative reemerged in July 2021, after English internet personality and singer Oli London announced that they are “transracial,” identifying as Korean. Many of the deceptive transracial TikTok accounts have profile pictures of London or Rachel Dolezal, a white American woman who publicly self-identifies as Black. Others use the “transracial flag” as a profile picture or as imagery in videos.
These TikTok users blatantly mock trans identities as invalid and conflate trans people with “transracial” troll accounts -- even using the transgender pride flag in a profile picture of Dolezal.
This troll campaign is particularly malicious, as the “transracial” accounts regularly appear to steal and deceptively edit videos from real TikTok creators to make it look as if they are the ones claiming to be “transracial.” This can lead to the harassment or doxxing of innocent users. One user claimed that their video had been stolen, writing, “PLEASE BLOCK AND REPORT THIS ACCOUNT THEY STOLE MY VIDEO TO TRY AND IMPERSONATE ME.”
While the social media giant has removed several of these accounts after they’ve gone viral for impersonating other users to promote the bogus idea of transracialism, it has done little to prevent this behavior from expanding or to protect impersonated creators. A quick “transracial” search on TikTok returns these troll accounts as some of the first results on both the “videos” and “users” tabs.
The troll accounts have also uploaded deepfake “transracial testimony”: videos featuring AI-generated faces in which the person supposedly claims to identify as a different race. In one video, which garnered over 65,000 views before removal, a seemingly white man speaks with an Indian accent and says, “I am trapped in a white devil body, assigned this body at birth without my consent. I should be in India with my other Indian people. I am Indian. I identify as Indian and I wish people would be respectful of this.”
A simple reverse image search revealed that the speaker was generated with artificial intelligence technology. The same figure appears in a video shared by the Lonsdale Institute in which he says, “Hi. This is Jack, I am an artificial intelligence teaching assistant.” The same AI-generated face also appears in a number of commercials on YouTube.
While the account posting this video, @transraciallovee, has been removed, the videos have since been uploaded on a new account with the username “transracialgirl.”
In total, Media Matters found four deepfake transracial videos, which appeared to feature a white person claiming to identify as Indian; a German man claiming to identify as Swedish; a Black woman claiming to identify as white; and a white man claiming to identify as Korean. In the last video, the most-liked comment declares: “This is legendary bait.”
TikTok’s negligence has led to the harassment of innocent users, as well as the rise of inauthentic videos perpetuating a racist and transphobic narrative. While discussions surrounding deepfake technology frequently focus on its potential to wreak geopolitical havoc, its ability to perpetuate bigotry is under-considered. It’s crucial to understand that while AI can be a powerful media technology, it is also a harmful tool in the hands of far-right trolls -- and social media platforms like TikTok must protect their users from such hateful and bigoted smear campaigns.