In December 2016, Facebook, in an attempt to curb the spread of fake news in the wake of the U.S. presidential election, announced it would start flagging fake news stories, labeling them with warnings that read “Disputed by 3rd Party Fact-Checkers.” The labels were meant to let users know the news they were consuming might not be accurate, but the move has since backfired, a Massachusetts Institute of Technology study has found.
Putting warning labels on some fake news stories makes social media users more likely to believe other stories, which have not been flagged but could still be fake, the MIT study found. “Putting a warning on some content is going to make you think, to some extent, that all of the other content without the warning might have been checked and verified,” David Rand, co-author of the study and professor at MIT Sloan School of Management, said in a statement. “There’s no way the fact-checkers can keep up with the stream of misinformation, so even if the warnings do really reduce belief in the tagged stories, you still have a problem ….”
Researchers conducted experiments with 6,739 U.S. residents, exposing them to a mixture of true and false news stories and then asking how likely they were to share the stories on Facebook. First, Rand stresses that participants did not decide whether to share a story based on ideology; they were no more likely to share a story whose content matched their political beliefs than one that challenged them.
The participants were divided into three groups: the first group saw no warning labels on false stories; the second saw warning labels (“FALSE”) on some false stories; the third saw “FALSE” and “TRUE” labels on some fake and accurate news stories, respectively. Researchers observed a dip in the sharing of false stories that they attributed to the presence of a warning label: participants considered sharing 29.8% of false stories when no labels were attached, but only 16.1% of false stories that carried the “FALSE” label. But researchers also witnessed what Rand calls the “implied truth effect”: in the groups exposed to warning labels, participants were more willing to share false stories that carried no warning label.
In this case, it means users who weren’t explicitly told a story was fake assumed that everything else they came across was accurate. That inference is perfectly rational on users’ part, Rand says, which is what makes these warning labels “problematic.”
Rand proposes a solution: in the third group, where participants saw both “FALSE” and “TRUE” labels, they were less likely to share fake news, considering sharing only 13.7% of the false stories labeled “FALSE” and 26.9% of the false stories that carried no label.
“If, in addition to putting warnings on things fact-checkers find to be false, you also put verification panels on things fact-checkers find to be true, then that solves the problem because there’s no longer any ambiguity,” Rand says. “If you see a story without a label, you know it simply hasn’t been checked.”