YouTube Shorts Is Pushing Transphobic Content, Creators Allege
YouTube Shorts has recently come under the scanner, with users alleging that its recommendation feed tends to push transphobic and misogynistic content. “A solid half of the videos recommended to me are intensely transphobic. I hit ‘don’t recommend me this channel,’ but it doesn’t seem to work,” a Reddit user claimed.
Shorts is YouTube’s latest addition, offering users an endless scroll of short-form videos of 60 seconds or less, much like its competitor, TikTok. However, YouTube Shorts is not alone in fuelling transphobic rhetoric.
In the past, reports have highlighted how social media platforms have become a breeding ground for hate, where misogyny and transphobia abound. Despite several guidelines rolled out by these platforms, content moderation has fallen short of curbing hate and misinformation in online spaces. Instead, implicit biases in underlying algorithms have fuelled a growing culture of hate, rendering these platforms unsafe for many users, particularly the LGBTQIA+ community.
“I tried out YouTube Shorts recently and I counted 25 transphobic shorts in the span of half an hour of scrolling. Not a single pro-trans one,” a Twitter user wrote.
Hank Green, a well-known vlogger, also tweeted about YouTube Shorts’ skewed content promotion, saying, “TikTok definitely remains better than Shorts at not randomly showing me transphobic/misogynistic content just because I like science stuff and video game clips. It’s like ‘we’ve noticed you like physics, might I interest you in some men’s rights?'”
A report pointed out how TikTok’s algorithm is based on one’s preferences and past views, whereas users alleged that YouTube Shorts delivers hateful recommendations even to those who have not consumed such content in the past. This, then, points to the biases encoded in the underlying algorithms, which end up creating online echo chambers by giving users more of the same content. Medianama reported how YouTube’s algorithmically driven recommendations, which account for around 70% of the total time users spend on the platform, often lead people to extremism or nudge viewers towards conservative content.
“I think the misperception is that algorithms are neutral… But the reality is, the disinformation that’s being spread and the way [these algorithms are] funneling people to more extreme content is very deliberate,” Kelsey Campbell, founder of the LGBTQ data analytics organization Gayta Science, said at an event earlier this year.
TikTok has also been criticized for sending users down rabbit holes that range from transphobia to far-right extremism. Research conducted by Media Matters found that interacting with anti-trans content led to TikTok’s ‘For You’ page being inundated not only with homophobic, transphobic, and misogynistic content, but also with racist and white supremacist narratives, as well as anti-vaccine, antisemitic, and ableist content. “Transphobia is deeply intertwined with other kinds of far-right extremism, and TikTok’s algorithm only reinforces this connection,” the researchers noted, adding that TikTok allowed exposure to hateful content “in a fraction of the time it takes to see such content on YouTube.”
Such content spurs hate and harassment online. Trans creators on TikTok pointed out to Insider that while the platform had initially allowed them to find a sense of community, it had also exposed them to widespread transphobia, perpetuated by the platform’s very design. Daniel Sinclair, an independent researcher, told Insider that TikTok’s algorithm-driven feed controls and directs what people see much more than on other platforms.
LGBTQIA+ safety has thus become a major concern on the biggest social media platforms, as highlighted by a 2022 report by GLAAD, an LGBTQ media advocacy group. This is despite the various content moderation guidelines that have been put in place by these platforms. For example, YouTube’s Community Guidelines do not allow “pornography, incitement to violence, harassment, or hate speech.”
However, content moderation that hinges on removing posts and downranking has also raised concerns about violating free speech, especially since it seems to have little effect on the prevalence of harassment online. Attempts to moderate content have also had a disproportionate impact on marginalized groups, including members of the LGBTQIA+ community who have faced censorship online, noted the Electronic Frontier Foundation, a non-profit digital rights group.
Research has highlighted how revenue models also play a role in content moderation. “Social media platforms running on advertising revenue are more likely to conduct content moderation but with lax community standards in order to retain a larger group of consumers, compared to platforms with subscription revenue,” marketing professors at the Wharton School of the University of Pennsylvania noted.
Implicit biases, therefore, govern not only the content that users are presented with, but also moderation practices — as seen in the case of shadowbanning, or the practice of hiding a user’s content without their knowledge. A Forbes report highlighted how unconscious biases of developers seep into the algorithms being created, which are then “unable to discern nuances of human appearance, lifestyle, culture, sexuality or social behavior.”
The lack of diversity among AI developers, combined with datasets drawn from an inequitable history, further hardwires this bias into our technology. As Jess Reia, a professor at the University of Virginia, said, “Social media platforms reproduce inequalities that already exist in society.”