Why Aren't Videos, Memes Mocking Palestinian Deaths a Violation of Social Media Terms?
The videos employ humor to disguise hate speech – and evade content moderation policies.
Several reports have shown that Palestinian voices – and pro-Palestinian perspectives – are being censored online. Last week, Meta removed @eye.on.palestine – an account with more than 6 million followers that had been consistently documenting the ground reality in Gaza. Many individuals and activists have pointed out that Instagram’s algorithm, too, is restricting the reach of content pertaining to Gaza. But even as Palestinians’ lived reality is being censored, videos mocking their suffering have emerged as a new trend online.
Reports showed Israeli civilians flaunting their comfortable living conditions and poking fun at the suffering of Palestinians, and volunteer soldiers posting dance videos on TikTok – even as the humanitarian crisis intensifies around them. Many Indian social media accounts, too, have begun participating in this trend. In contrast, videos from Gaza show overworked medical personnel struggling to save lives, and images of Palestinian parents holding their lifeless children. It is well documented that Palestinians are facing shortages of food, electricity, internet, shelter, and even water. Yet many Israeli parents are dressing their children up for social media, directing them to act out skits implying that Palestinians are faking these hardships.
Some of these videos have gone viral: on TikTok and Instagram, for instance, users with several thousand followers have posted videos emphasizing their access to electricity and water – by repeatedly switching their electrical appliances on and off, and by using excessive water to clean their surroundings – in a bid to highlight how they’re “better off” than Palestinians. Some are covering their faces with powder – pretending it’s debris – and blackening their teeth to “play” the part of Palestinians caught in the conflict. Yet another influencer cradled a fruit with a sad face drawn on it, mocking Palestinian women fleeing for their lives with their infants in their arms.
“This is an example of the anti-Palestinian racist conspiracy theory of ‘pallywood,’ where some zionists argue that Palestinians manipulate media through ‘faking suffering' – unsurprisingly, it's also very similar to anti-Semitic tropes too. Very telling about Israel supporters,” an X (formerly Twitter) user explained. These videos are horrifyingly reminiscent of the Nazi propaganda posters used to turn the German public against the country’s Jewish population.
Framing reported, verified facts as disingenuous or fake is, by definition, misinformation. Yet, the videos continue to go viral. Meta’s terms state: “We define misinformation as content with a claim that is determined to be false by an authoritative third party.” TikTok, for its part, defines misinformation as “content that is inaccurate or false.” There is a case to be made, then, that the memes and videos implying Palestinians are faking their plight violate TikTok’s terms. As for Meta, it’s unclear who constitutes an “authoritative third party” – but there is evidence to suggest that such authorities usually echo government narratives. “Governments around the world have chosen authoritarianism, and platforms have contributed to their repression by making deals with oppressive heads of state; opening doors to dictators; and censoring key activists, journalists, and other changemakers throughout the Middle East and North Africa, sometimes at the behest of other governments,” stated one open letter decrying the censorship of activist accounts from the Middle East and North Africa.
Similarly, over the last few weeks, several users have complained that posts simply mentioning the word “Palestine” were taken down for violating “community guidelines.” Meta’s own statement on its efforts to tackle misinformation during this crisis, however, says that “content containing praise for Hamas, which is designated by Meta as a Dangerous Organization, or violent and graphic content, for example, is not allowed on our platforms.” This, coupled with the fact that a bug on Instagram auto-translated Palestinian users’ bios to insert the word “terrorist” into them, speaks to the conflation of Palestine – and Palestinians – with militant groups.
Meta also claimed that reduced reach on Instagram stories was a bug that “affected accounts equally around the globe – not only people trying to post about what’s happening in Israel and Gaza – and it had nothing to do with the subject matter of the content.” This, however, runs contrary to reports detailing pro-Palestinian content being shadowbanned – prompting activists and users to resort to symbolism and other tactics to circumvent the shadowbanning.
There’s a precedent for content being disproportionately flagged as violating community guidelines. One mixed-methods study found that the groups whose content is frequently flagged are conservatives, trans people, and Black people. However, “conservative participants’ removals often involved harmful content removed according to site guidelines to create safe spaces with accurate information, while transgender and Black participants’ removals often involved content related to expressing their marginalized identities that was removed despite following site policies or fell into content moderation gray areas,” the study noted. The open letter from journalists, activists, and human rights organizations calling upon social media companies to stop censoring marginalized voices in the Middle East and North Africa added, “Arbitrary and non-transparent account suspension and removal of political and dissenting speech has become so frequent and systematic that they cannot be dismissed as isolated incidents or the result of transitory errors in automated decision-making,” citing examples from Tunisia, Egypt, Syria, and Palestine, where activist accounts documenting war crimes and expressing dissent were suspended or blocked.
In the current crisis, no comparable takedowns have been reported of pro-Israel content – or even of content mocking verified facts about the humanitarian crisis in Gaza.
The disproportionate erasure of voices supporting Palestine enables a dangerous narrative about them. For one, it lends credence to the notion that information about civilian casualties is fake; it also constructs the idea that there are no credible reports of those casualties at all.
This is a strategy with precedent, too. In 1933, Joseph Goebbels, who headed the Reich Ministry of Public Enlightenment and Propaganda, wrote: “The essence of propaganda consists in winning people over to an idea so sincerely, so vitally, that in the end they succumb to it utterly and can never again escape from it.” Goebbels is believed to be responsible for the messaging that diluted the inhumanity of the Holocaust, purely through the strategic presentation of biased information and the censorship of any ideas and perspectives that could eclipse Nazi propaganda. An eerily similar pattern seems to be playing out today – except on social media, where the message reaches millions globally.
In other words, memes and videos mocking Palestinians constitute propaganda – and dangerous propaganda at that. This prompts the question: are social media platforms’ terms of service adequate? Currently, the emphasis is on curbing misinformation: fake news reports and unverified claims fall under this ambit. The terms also prohibit hate speech, with a focus on symbols, words, and other tangible markers of violent language. The memes and videos in question, however, circumvent these guidelines – they neither explicitly contain fake news, nor do they feature symbols, language, or words that could constitute hate speech. They’re skits and suggestive images. On their own, they don’t immediately violate any of the guidelines – despite the fact that the purpose they serve is disinformation and hate speech.
Experts have pointed this out repeatedly. This has happened before: hate speech against vulnerable groups has bypassed content moderation policies and community guidelines on social media, either due to translation and language shortcomings or through the use of rhetorical devices.
Earlier, the Rohingya community sued Meta for just that: accusing the company of failing to police content that led to violence against the community. An investigation by Reuters into the matter found: “For years, Facebook – which reported net income of $15.9 billion in 2017 – devoted scant resources to combat hate speech… To this day, the company continues to rely heavily on users reporting hate speech in part because its systems struggle to interpret Burmese text. Even now, Facebook doesn’t have a single employee in the country of some 50 million people. Instead, it monitors hate speech from abroad. This is mainly done through a secretive operation in Kuala Lumpur that’s outsourced.” In the world of social media, then, might – in the form of more users with better internet access – determines what’s right, often pushing majoritarian narratives and brushing marginalized experiences under the rug.
In Sweden, a genre of memes depicted poor migrants begging for money – in articulations far more restrained than those commonly associated with racist hate speech. There’s a reason for that: the memes were framed as satire.
And yet, veiled mockery and the discrediting of established facts through satire continue to serve as forms of information and rhetorical manipulation, often aimed at the marginalized or powerless party to a conflict. In the particular instance of anti-Palestinian humor, there’s an argument to be made for updating the definition and ambit of hate speech. “A common trait among hateful memes is the (strategic) blending of hate speech with humor, which downplays prejudice, obscures the underlying hatred, and may ultimately lead to the normalization of hostile beliefs,” notes one study. Humor, in particular, makes hate speech trickier to identify – because stereotyping and other implicit forms of expression are so indirect, the content is left open to interpretation. “In that way, hate speech can, but does not have to be unlawful to be harmful… As a result, such forms of hate speech may not always be categorized as prohibited content and may be inconsistently regulated by platforms,” the study adds, about what it calls “humorous hate speech.” Another paper, on the use of racist stereotypes and digital Blackface in Spain, notes that Internet humor often works by pretending to be one thing while implying another, which means “the limits of humor on social media remain an unresolved issue and a challenge for platform governance.”
Meanwhile, Palestinians still have to “audition for empathy and compassion,” as Hala Alyan, a Palestinian writer, recently noted. “A slaughter isn’t a slaughter if those being slaughtered are at fault, if they’ve been quietly and effectively dehumanized – in the media, through policy – for years. If nobody is a civilian, nobody can be a victim.”
This is a dystopian reality. “Bombing whole neighborhoods, killing children, cutting electricity and water. Then, lying and playing the victim! And now, mocking the humanitarian crisis,” an Internet user lamented. With people posting incredibly jarring memes about Palestinian children, the question remains: is the algorithm designed to favor power, no matter how it’s expressed?
Note: This article has been updated to include more information and context about social media content moderation policies.
Devrupa Rakshit is an Associate Editor at The Swaddle. She is a lawyer by education, a poet by accident, a painter by shaukh, and autistic by birth. You can find her on Instagram @devruparakshit.