Twitter’s inconsistent enforcement undermines its policy reforms
After struggling to wipe a doctored video of Nancy Pelosi from the site, Twitter needs to rethink its approach to misinformation
Written by Parker Molloy
Every so often, Twitter rolls out a policy that sounds genuinely good, winning praise from the platform’s fans and critics alike. Its decisions to restrict the spread of false and potentially harmful COVID-19 information, crack down on disinformation about voting, label manipulated media, and ban political advertising ahead of the 2020 election were common-sense moves cheered by many.
But the problem with Twitter’s policies has always been that the company seems unable or unwilling to actually enforce them with any consistency.
Over the weekend, a video showing House Speaker Nancy Pelosi (D-CA) slurring her words during a press conference spread across social media. The video, posted on July 30 by a Facebook user named Will Allen, was captioned, “This is unbelievable, she is blowed out of her mind, I bet this gets taken down!” The video had been slowed down and altered to make it look and sound as though Pelosi was drunk. The original unedited video — from a May press conference — debunks the implication of the edited version. (And as it so happens, Pelosi doesn’t even drink.) A similarly altered video of Pelosi went viral in May 2019.
After CNN reached out to Twitter and other platforms about the latest doctored video, the platforms removed copies of it. Case closed, right? Not quite.
Though Twitter removed the clip in at least one instance, others remain live on the platform, highlighting the confusing and inconsistent nature of its content moderation.
One version of the video on the platform had Twitter’s “manipulated media” tag attached to it, which redirected to a page containing an explanation of what had been manipulated about the video along with links to fact-checking articles. Other versions, which were shared by multiple users associated with the QAnon conspiracy theory, remain on Twitter, some with and some without the manipulated-media tag.
Twitter’s manipulated-media policy was announced in February, but it remains fairly opaque. The blog post containing the announcement says that moderators weigh three questions when deciding how to handle an account sharing edited and misleading videos: Has the media been significantly edited? Was it shared in a deceptive way? And is it likely to impact public safety or cause serious harm? Even then, figuring out what is actually supposed to happen when an account violates the policy remains unnecessarily labyrinthine, as the chart in Twitter’s blog post demonstrates.
For instance, if a manipulated video isn’t shared in a deceptive manner but is likely to impact public safety, Twitter’s policy states that the content is “likely” to be labeled, but “may” be removed. If a video is edited and shared in a deceptive manner, but doesn’t threaten public safety, it’s “likely” to be labeled, but not removed. If the video is edited, shared in a misleading manner, and threatens public safety, it’s “very likely” to be removed. Twitter’s reliance on words like “may,” “likely,” and “very likely” in its explanation of how it will respond to various examples of rule breaking is confusing for users and likely for a fair number of the company’s own employees, as well.
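To see how little the chart actually commits to, it helps to write its logic out. Below is a rough sketch in Python of the decision matrix as described above; the function name and outcome strings are illustrative, not Twitter’s actual implementation, and they preserve the policy’s own “may,” “likely,” and “very likely” hedges rather than resolving them.

```python
# A rough model (not Twitter's code) of the manipulated-media decision chart.
# The three booleans correspond to the three questions moderators reportedly
# weigh; the returned strings echo the hedged outcomes from the blog post.

def moderation_outcome(edited: bool, shared_deceptively: bool, harms_safety: bool) -> str:
    """Map the three policy questions to the chart's hedged outcomes."""
    if not edited:
        return "no action"  # the policy only covers manipulated media
    if shared_deceptively and harms_safety:
        return "label likely; removal very likely"
    if shared_deceptively:  # deceptive sharing, but no safety threat
        return "label likely; no removal"
    if harms_safety:        # not shared deceptively, but a safety risk
        return "label likely; removal possible ('may')"
    return "label possible"

# Example: an edited clip shared misleadingly but posing no safety threat
print(moderation_outcome(edited=True, shared_deceptively=True, harms_safety=False))
# -> "label likely; no removal"
```

Even in this toy form, every branch but one ends in a probability rather than an outcome, which is exactly the ambiguity users and moderators are left to navigate.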
Meanwhile, Twitter’s rollout of a COVID-19 misinformation policy has unintentionally ensnared an unexpected group: journalists.
In a March blog post about Twitter’s “continuity strategy during COVID-19” written by Twitter’s legal, policy and trust and safety lead Vijaya Gadde and customers lead Matt Derella, the company announced plans to increase its use of automation and machine learning in content moderation.
“While we work to ensure our systems are consistent, they can sometimes lack the context that our teams bring, and this may result in us making mistakes,” added Gadde and Derella, setting reasonable expectations for users.
“For content that requires additional context, such as misleading information around COVID-19, our teams will continue to review those reports manually,” the blog post continued, listing examples of what constitutes a violation of Twitter’s policy on COVID-19 misinformation.
But enforcement has been messy. After President Donald Trump falsely claimed during a Fox News interview that children were “almost immune” to the novel coronavirus that causes COVID-19, Trump’s campaign account tweeted out the video featuring his statement. Twitter flagged the clip as a violation of its rules on COVID-19 misinformation (no, children aren’t “almost immune”; that is factually incorrect) and locked the account until the campaign deleted the tweet containing the video.
The problem came when journalists, fact-checkers, and others monitoring conservative media — including Media Matters’ own Bobby Lewis and Lis Power — posted the clip to share that the president had once again spread dangerous misinformation on national TV. Twitter locked their accounts as well for “violating” its policy.
But even that misguided enforcement was spotty. In a separate interview with Fox Business’ Lou Dobbs, Trump claimed that young people are “virtually immune” to COVID-19, a nearly identical statement to the one Twitter pulled. As of now, a clip of that moment from Media Matters’ Jason Campbell remains online. It’s unclear why a clip of Trump saying that children are “almost immune” violated Twitter’s COVID-19 policies while a clip of him saying that young people are “virtually immune” made the cut.
At a certain point, Twitter needs to make enforcement of its policies a priority over simply rolling out new ones that leave well-meaning people unsure of whether they’re in violation of the platform’s latest rules. If tweeting videos that aired on widely watched TV shows along with fact checks violates policy, that’s something journalists and media watchdogs need to know. Otherwise, this scattershot enforcement will only serve to embolden bad actors.
Fact-checkers need to be able to show source material when debunking misinformation, and it’s concerning that Twitter no longer seems to allow this, at least in certain cases.
It’s hard enough to fight back against misinformation when you’re able to provide the actual claim that is being fact-checked. If you’re unable to show the primary source of the claim, you’re forced to ask audiences to take you at your word. When such sources are available, they should absolutely be included in fact checks.
On the other hand, it is reasonable to consider whether including that primary source content has the potential to further misinform audiences by expanding its reach. Decisions about how to handle false information have to be nuanced. Unfortunately, it seems as though Twitter hasn’t taken that need for nuance into account in its COVID-19 misinformation policies.
You can’t fight false information from a public figure as influential as the president by simply deleting it. His message will still make its way to audiences regardless, whether through Twitter or other channels. Instead of reducing the amount of misinformation in the world, this policy as it’s currently being enforced increases it by silencing people trying to correct the record. Twitter would benefit from conversations with journalists, fact-checkers, and media watchdog groups about how to best fight misinformation. In the meantime, spotty enforcement of confusing policies makes Twitter an unnecessarily chaotic platform.