Facebook Will Fact‑Check Content in Regional Languages To Spot Misinformation During Assembly Elections
The platform has previously come under fire for boosting content favorable to the ruling BJP during and after elections.
India is gearing up for assembly elections in five major states, which has prompted Facebook to announce strategies to curb misinformation, voter suppression, and hate speech during the voting period. The social media platform has earned a notorious reputation for its role in elections and politics, especially in India, with many commentators pointing out how its imprudent policies allowed hate speech and false information to grow unchecked, skewing the political and social narrative.
“Facebook recognizes its role in creating an informed society. Thus, it will also be reminding people to exercise their right to vote. Facebook has come up with Election Day reminders in order to give voters accurate information and encourage voters,” the company said in a blog post sharing its plans.
The 2021 election calendar is packed: the first phases of staggered elections were held in West Bengal and Assam last week; Puducherry, Tamil Nadu, and Kerala go to the polls on April 6; and the remaining phases in West Bengal and Assam will wrap up the assembly poll cycle by May. Facebook’s election mantra seems to be aimed at weeding out hate speech and curbing any nefarious mechanism that influences voter decisions.
For instance, it says it will remove claims that discourage voting, such as “you will contract Covid-19 if you vote” or “do not go to vote as everyone will get the disease.” Content implying the buying or selling of votes, whether in cash or kind, will also be explicitly removed. Other overarching strategies include public education campaigns and digital literacy training, along with more investment in technology that detects and reduces policy-violating content and in ways to identify new words associated with hate speech.
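Facebook hasn’t detailed how it identifies new hate-speech vocabulary, but one common approach in content moderation is to flag unseen terms that sit close to a known seed lexicon in embedding space. The Python sketch below is a minimal, hypothetical illustration of that general idea; the terms, vectors, and threshold are all invented for demonstration and are not Facebook’s.

```python
# Hypothetical sketch: flag candidate new hate-speech terms by their
# embedding similarity to a known seed lexicon. The vectors are toy
# numbers; a real system would use learned multilingual embeddings.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

# Toy 3-dimensional "embeddings" for illustration only.
embeddings = {
    "known_slur": [0.9, 0.1, 0.2],      # term already in the seed lexicon
    "new_variant": [0.85, 0.15, 0.25],  # unseen spelling of the same slur
    "harmless_word": [0.1, 0.9, 0.3],
}
seed_lexicon = {"known_slur"}
THRESHOLD = 0.95  # similarity above which a new term is sent for review

def candidate_new_terms(vocab):
    """Return unseen words that sit close to any seed term in embedding space."""
    flagged = []
    for word, vec in vocab.items():
        if word in seed_lexicon:
            continue
        if any(cosine(vec, vocab[s]) >= THRESHOLD for s in seed_lexicon):
            flagged.append(word)
    return flagged

print(candidate_new_terms(embeddings))  # ['new_variant']
```

In practice, flagged candidates would go to human reviewers rather than being acted on automatically, since near-neighbors in embedding space can be benign.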
Facebook has also partnered with eight fact-checking organizations that will provide analysis and context for content in Bengali, Tamil, Malayalam, and Assamese. Both Facebook and Twitter have also signed a voluntary code of ethics with the Election Commission to help ensure free and fair polls.
Facebook’s plans are ambitious in their undertaking to make the platform a constructive, misinformation-free zone, but it would be a mistake to view these policies in silos, without factoring in the weight of a sketchy history. Writing in The Atlantic about the 2016 U.S. election that voted Donald Trump to power, Alexis Madrigal pointed out the enormous political clout the platform wields: “Facebook’s enormous distribution power for political information, rapacious partisanship reinforced by distinct media information spheres, the increasing scourge of ‘viral’ hoaxes and other kinds of misinformation that could propagate through those networks.”
Facebook, and its digital neighbor Twitter, have been breeding grounds for election misinformation and voter manipulation in the very recent past: in the 2019 general elections, several reports pointed out the presence of hate speech and false information that may have influenced voter turnout and decisions. Since then, the platform has been blamed for interfering in, and even destabilizing, India’s electoral democracy. As early as 2014, Jonathan Zittrain had written in The New Republic of Facebook’s potential to depress voter turnout.
What seems to be the problem? Part of it is logistical: “Posts and videos in more than a dozen languages regularly flummox Facebook’s automated screening software and its human moderators, both of which are built largely around English. Many problematic posts come directly from candidates, political parties, and the media. And on WhatsApp, where messages are encrypted, the company has little visibility into what is being shared,” Vindu Goel and Sheera Frenkel wrote in The New York Times while recounting the deluge of false information in 2019. In April 2019, Facebook was reportedly removing more than one million abusive accounts in the run-up to the Lok Sabha elections.
But on a larger level, Facebook has been faulted for not doing enough to curb hate speech in rising democracies such as India, and even for favoring right-wing narratives. A Wall Street Journal report last year triggered a major debate: it suggested that Facebook was going easy on hate speech by BJP members and allowing mis- and disinformation to thrive unfettered. Several reports and articles have since concurred with that analysis: an Article 14 report pointed out troves of highly organized pages and groups on Facebook that exist solely to boost majoritarian narratives and spread widescale propaganda. In 2018, Amit Shah, then the BJP’s president and now Home Minister, gave insights into how the party’s social media machinery operates: “We are capable of delivering any message we want to the public, whether sweet or sour, true or fake,” he claimed. Unfettered hate speech poses a very real threat; especially amid growing religious friction, it carries the potential to translate into real-world violence.
Partnering with India’s Election Commission and third-party organizations to check inflammatory content during the present state elections is arguably a much-needed first step. So is checking content in regional languages to reach voters in the five states. But these steps are imperative in the current political reality, one which extends election campaigning into the digital space and makes the line between fact and fiction woefully thin. The context and history of Facebook’s relationship with electoral politics serve two purposes: they keep us wary of digital blindspots, and they make a strong case for demanding more robust safety nets and policies to counter election misinformation and hate speech.
“We recognize that there are certain types of content, such as hate speech, that could lead to imminent, offline harm. … To decrease the risk of problematic content going viral in these states and potentially inciting violence ahead of or during the election, we will significantly reduce the distribution of content that our proactive detection technology identifies as likely hate speech or violence and incitement,” Facebook added in its blog post.
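The quote describes demotion rather than deletion: content a detection model scores as likely hate speech stays up but loses reach. As a rough, hypothetical sketch of that mechanism in Python (the scores, threshold, and multiplier below are invented for illustration, not Facebook’s actual values):

```python
# Hypothetical sketch of "reduce the distribution" rather than remove:
# posts scored as likely hate speech keep existing but have their
# ranking weight sharply demoted. All numbers here are invented.
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    hate_score: float  # assumed output of a proactive detection model, 0..1

LIKELY_HATE = 0.8   # assumed threshold for "likely hate speech"
DEMOTION = 0.1      # assumed multiplier applied to a demoted post's reach

def distribution_weight(post: Post) -> float:
    """Return the ranking weight: full reach by default, sharply
    reduced when the detection model flags the post as likely hate speech."""
    return DEMOTION if post.hate_score >= LIKELY_HATE else 1.0

feed = [Post("election update", 0.05), Post("incitement to violence", 0.93)]
for post in feed:
    print(post.text, "->", distribution_weight(post))
# election update -> 1.0
# incitement to violence -> 0.1
```

The trade-off in such a design is that demoted content remains visible to anyone who seeks it out; only its algorithmic amplification is cut.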
The current policies could act as a litmus test of whether the platform does justice to the idea of ‘free and fair’ elections. CEO Mark Zuckerberg has reiterated that he doesn’t want Facebook to be an arbiter of truth; but whether or not he wants it to be, Facebook has unmistakably emerged as a non-neutral force in politics, one that could tilt, push, and unbalance the scales altogether. The golden rule seems to be: be wary of what you see, more so now than ever.
Saumya Kalia is an Associate Editor at The Swaddle. Her journalism and writing explore issues of social justice, digital sub-cultures, media ecosystem, literature, and memory as they cut across socio-cultural periods. You can reach her at @Saumya_Kalia.