The Great Social Cost of Facebook Prioritizing Profit Over Minimum Public Good
Facebook has an ethical crisis. In an interview aired yesterday, a Facebook whistleblower confirmed the company lied about taking action against hate speech and misinformation, prioritizing profits over the public good. The revelation came after an investigative series over the last few weeks revealed the extent of what Facebook's leadership knew about the platform's effects on societal polarization, misinformation, and teen mental health. The clincher: the company lied about taking action.
“There were conflicts of interest between what was good for the public and what was good for Facebook,” Frances Haugen, a former Facebook employee, said. “And Facebook, over and over again, chose to optimize for its interests, like making more money.” In other words, the company is aware of how its business model contributes to user harm. The silence and insouciance of giant tech firms, which wield immense persuasive power at both the national and individual level, are cause for concern.
The “Facebook Files,” a collection of Wall Street Journal reports, includes leaked internal documents that detail Facebook's awareness and inaction. “Time and again, the documents show, Facebook’s researchers have identified the platform’s ill effects. Time and again, despite congressional hearings, its pledges, and numerous media exposés, the company didn’t fix them,” WSJ noted. For instance, one report showed the top leadership was aware of Instagram’s harmful impact on young users’ mental health. Yet the company continued work on a kids’ version of the app. It was only after the reports were released that the company decided to pause those plans.
The Facebook Files reveal the systemic harms big tech companies like Facebook inflict on society. So far, Facebook has denied directly causing polarization. “… to suggest we encourage bad content and do nothing is just not true,” spokesman Andy Stone told the Wall Street Journal.
And yet, it is “one of the greatest exposés of Big Tech yet produced,” as a blog noted. The company knows “if they change the algorithm to be safer, people will spend less time on the site, they’ll click on fewer ads, they’ll make less money,” Haugen explained. The formula is unsurprising. For every big tech company, the algorithm is supreme. Hate speech, misinformation, and sensational content fuel engagement, which in turn fuels the platforms. Everyone knows how the algorithm works and what it prefers, yet the companies hesitate to change it and fail to take accountability for its social cost.
The reports confirm what we knew all along: the company prioritizes profits over the public good.
Left unchecked, the impact is unraveling at a dangerous pace in societies like India. Some sobering facts: most Indian users rely on social media platforms like Facebook to consume their news. India was the world’s biggest source of Covid-19 misinformation, a trend platforms like Facebook amplified. The platform’s algorithm has interfered with Indian elections; in the 2019 general elections, several reports pointed out the presence of hate speech and false information that may have influenced voter turnout and decisions. Since then, experts have called out the platform for destabilizing India’s electoral democracy.
Elections, voting, and democracy are the bigger picture. On a micro-scale, the impact is violent. The algorithms favor polarizing content and fail to check misinformation — activities that spill into real-life bloodshed. An Article14 investigation found Hindutva groups posted incendiary content on Facebook in the days leading up to the Delhi riots in February 2020, a series of events that culminated in the deaths of dozens of people and the displacement of Muslim communities.
In July, the Supreme Court rightly noted that Facebook’s role must not be dismissed while investigating the Delhi riots case. Facebook, the bench observed, cannot lose sight of how it has simultaneously become a “platform for disruptive messages, voices, and ideologies.” So even if the platform insists it is not an arbiter of truth and will not interfere with free speech, the reality is vastly different.
Hate speech travels at light speed on social media — a fatal trend in the context of growing polarization. Communal posts about the farmers’ protest, for instance, frame its leaders as “anti-nation” Sikh separatists, demonizing both the people and the cause. The result is the unfortunate killing of farmers at Lakhimpur Kheri yesterday, where a car ran over protesting farmers.
The idea of technology exceeding governance is haunting. In The Atlantic, journalist Adrienne LaFrance noted how Facebook has all the makings of veritable nationhood: “people, a philosophy of governance, currency (Facebook is developing its own currency), and land (via the metaverse).” It is sobering, then, that the governing philosophy of “this authoritarian state” leans into for-profit systems — a posture that threatens any functioning democracy.
Will this change how people interact with the platform? According to Imran Ahmed, head of the Center for Countering Digital Hate, transparency about algorithms is critical. “If users knew for sure what the algorithm was doing, that there is transparency, and that governments, regulators and watchdogs can independently confirm whether Facebook’s algorithms are pushing misinformation, social media firms would find it impossible to carry on doing business as they are,” he told The Guardian.
Transparency is key here. Before a company can even decide to be fair, it must determine what type of fairness it cares about. According to a report by philosophers at Northeastern University, the first step is identifying the scope of justice, and the next is measuring that value.
For now, the social cost of the social network is inching towards a moral bankruptcy previously unseen and unfathomable.