The Oversight Board is right: Meta's manipulated media policy is woefully insufficient
Oversight Board Co-Chair Michael McConnell: “Platforms must keep pace with these changes, especially in light of global elections during which certain actors seek to mislead the public”
Written by Natalie Mathes
On February 5, the Oversight Board upheld Meta’s decision not to remove a video of President Joe Biden that was misleadingly edited to suggest that he inappropriately touched his granddaughter, agreeing that the video didn’t violate the company's existing rules against manipulated media. The board also urged the company to expand these rules to address media that has been deceptively edited to “show people doing things they didn’t do,” citing concerns about those who seek to “mislead the public” in the 2024 elections.
The board wrote in its decision:
… the Board is concerned about the Manipulated Media policy in its current form, finding it to be incoherent, lacking in persuasive justification and inappropriately focused on how content has been created, rather than on which specific harms it aims to prevent (for example, to electoral processes). Meta should reconsider this policy quickly, given the number of elections in 2024.
Meta’s existing manipulated media policy requires political campaigns to disclose their use of artificial intelligence in advertisements. It also prohibits “videos that have been edited or synthesized” if a video “would likely mislead an average person to believe” that “a subject of the video said words that they did not say” and “is the product of artificial intelligence or machine learning.”
The video of Biden and his granddaughter, which does not involve misrepresented speech, demonstrates the inadequacies of these policies. Per The Washington Post:
Because the video doesn’t alter Biden’s speech, the Oversight Board agreed it didn’t violate Meta’s rules. The board also said it was obvious the video had been edited.
But the video raises issues with Meta’s existing policies, which the Oversight Board said were focused on how content is created, rather than its potential harms — including voter suppression. It called on Meta to extend its manipulated media policy to address altered audio as well as videos that show people doing things they didn’t do.
The Oversight Board also recommended that the company not remove manipulated media if it doesn’t violate any other rules but attach a label alerting users that the content has been altered.
In its October response to the board’s request for public comment on the case, Media Matters highlighted numerous instances in which Meta failed to adequately enforce its manipulated media policies during the 2020 and 2022 election cycles.
Those instances included a digitally altered video purporting to show Biden appearing at a dinner with a man in blackface, a video altered to make then-House Speaker Nancy Pelosi (D-CA) look and sound as though she was drunk and slurring her words during a press conference, and a manipulated, misleading video that supposedly shows Biden fumbling as he presented a veteran with a Medal of Honor. Each of these videos circulated widely across Facebook and was either inconsistently labeled or not labeled at all.
The spread of deceptive and manipulated media remains a challenge for Meta. In December 2023, the company profited from advertisements that used “behind-the-scenes footage from a short film shot in Lebanon” to spread the debunked conspiracy theory that Palestinians injured during the Israeli military’s assault on Gaza are “crisis actors” who have faked their injuries. In January, NBC News reported that “explicit, AI-generated Taylor Swift images” had proliferated on Instagram and Facebook. According to the report, a search for “Taylor Swift AI” on Instagram and Facebook returned “sexually suggestive and explicit deepfakes of Swift” days after the images first surfaced on X. In February, Media Matters reported that Facebook and Instagram users have been promoting a misogynistic campaign called “#dignifAI,” initially launched by 4chan users. The campaign involves manipulating images of women with AI to make them appear more modestly dressed, and then posting the original and manipulated images side-by-side.
On February 6, Meta global affairs president Nick Clegg announced in a blog post that “in the coming months” the company would begin to label “images that users post to Facebook, Instagram and Threads” when it detects “industry standard indicators that they are AI-generated.”
Meta already applies “Imagined with AI” labels to images created with its own AI feature; the planned change would extend those labels to images generated with other companies’ AI tools.
Clegg also explained that Meta “can’t yet detect” video or audio content generated using AI tools, but that it’s “adding a feature for people to disclose when they share AI-generated video or audio” so the company can then “add a label to it.”
In the meantime, the Oversight Board is correct to be “concerned about [Meta’s] Manipulated Media policy in its current form ... given the number of elections in 2024.”
Given the company’s inadequate policies against election misinformation, and its history of failing to consistently enforce the policies it does have, that concern is warranted. Right-wing figures and social media users continue to spread and amplify election misinformation on Meta’s platforms, among others, and some of those figures are using manipulated media to do so. It’s crucial that the company heed the board’s call to quickly expand its manipulated media policy.