On Thursday, Facebook announced a new policy to curb the spread of false information about Covid-19. In a blog post, the company said it would begin showing notifications to users who may have interacted with “harmful” posts about the disease.
The new policy, likely to be rolled out in the coming weeks, applies only to misinformation that can lead to “imminent physical harm,” such as claims about cures or assertions that social distancing is ineffective. Until now, Facebook has simply been deleting such posts from the platform.
According to the new policy, users who liked, shared, commented on, or reacted with an emoji to false posts about Covid-19 before those posts were deleted will be directed to a “myth busters” page maintained by the World Health Organization (WHO). Facebook will not tailor a specific message to the misinformation each user saw; instead, it will show them a blanket message that reads, “Help friends and family avoid false information about COVID-19,” above a link to the WHO website.
“We want to connect people who may have interacted with harmful misinformation about the virus with the truth from authoritative sources in case they see or hear these claims again,” said Guy Rosen, Facebook’s vice-president for Integrity, in the blog post.
Currently, Facebook deletes only Covid-19 misinformation related to cures, treatments, the availability of essential services, or the location and severity of the outbreak. Other kinds of misinformation, such as conspiracy theories, don’t get deleted; instead, Facebook relies on its third-party fact-checking system to determine whether such posts are true or false.
“If a piece of content contains harmful misinformation that could lead to imminent physical harm, then we’ll take it down. We’ve taken down hundreds of thousands of pieces of misinformation related to Covid-19, including theories like drinking bleach cures the virus or that physical distancing is ineffective at preventing the disease from spreading. For other misinformation, once it is rated false by fact-checkers, we reduce its distribution, apply warning labels with more context and find duplicates,” said Facebook CEO Mark Zuckerberg in an online post.
The move comes soon after a study by the online activist group Avaaz found Facebook was failing to act on false posts, especially those in languages other than English. Avaaz also highlighted Facebook’s delays in applying warning labels to coronavirus misinformation.
“We’ve been calling on Facebook to take this step for three years now,” Fadi Quran, Avaaz’s campaign director, told The Guardian. “It’s a courageous step by Facebook. At the same time, it’s not enough.”
According to Quran and his group, Facebook should make its notifications clearer about exactly which misinformation a user may have seen. The group also wants the notifications shown to any user who saw the misinformation in their news feed, regardless of whether they engaged with the post by liking, sharing, or reacting to it.
“We think that correcting the record retroactively … will make people more resilient to misinformation in the future, and it will disincentivize malicious users,” added Quran.