
The Tech Industry’s Sexism, Racism Is Making Artificial Intelligence Less Intelligent

By Alina Gufran

Sep 27, 2019


Artificial intelligence and algorithms dictate our lives — from booking movie tickets online, to paying bills via apps, to birthday reminders, heart-rate tracking and something as simple as ordering groceries, almost every human process can be imitated by AI today. The very role of AI is to augment human tasks and potential; when the AI and algorithms we use daily begin to reflect human biases and stereotypes, we’re in trouble.

For example, in 2015, Google was criticized for a facial recognition algorithm in Google Photos that auto-tagged pictures of black people as gorillas — an instance of racism that exemplifies the lack of cultural sensitivity inherent in artificial intelligence today, furthered by a lack of diversity among the people who create it.

As recently as 2011-2012, artificial intelligence built into smartphones wasn’t programmed to help during a crisis. Personal assistants such as Siri didn’t recognize words like ‘rape,’ ‘assault’ and ‘sexual abuse.’ Siri and other digital assistants, such as Amazon’s Alexa, also tend to treat verbal abuse casually, often countering with flirty responses. In a UNESCO report released earlier in 2019, researchers found that when Alexa was told, “You’re a slut,” the AI would respond with, “Well, thanks for the feedback.”

The default setting of these digital assistants is a woman’s voice, which perpetuates the notion of gendered subservience already abundant in society. If a woman in deep crisis or trauma reaches out to the Internet via the personal assistant on her phone for answers, a passive response from an AI doesn’t just minimize the problem, it refuses to even recognize it — especially when 99.1% of sexual assault and rape cases in India already go unreported, according to Livemint. An AI’s passivity in response to verbal abuse perpetuates the culture of silence that allows sexual assault to go unchecked. The personas of Siri, Alexa and other personal assistants, and how they respond to a woman in crisis, thus become a direct result of society’s inherent disregard for the experiences of women.

Tech companies can either perpetuate these stereotypes or actively work against them. Therein lies the role of big tech and ethical corporate responsibility. “I don’t think you can ascribe ethical consciousness or accountability to technology. Unfortunately, the fact that most people don’t understand how AI-driven technology works is something that a lot of companies use to absolve themselves of responsibility (‘we didn’t do it, the algorithm did it’). It comes down to who is structuring the data, building algorithms, models,” Arun Kale, founder of alternative culture magazine Helter Skelter, who has also worked in India’s AI space for several years, said.

“Their ethical biases carry forward into the technology and services they build, and they are often amplified at scale. There need to be structures in place that pay attention to ethical considerations, which doesn’t seem to be happening enough at tech companies,” Kale said. “Ethics aren’t just ‘nice to consider’ or a bottleneck in your race to take over the world: there are very real implications on society and culture and individual security and privacy that must be considered. To be fair to the creators, some ethical questions that arise from the use of AI are very new to humankind: they’ve possibly never been considered before. They’re figuring it out as they go along. Also a lot of them treat ‘data’ as a kind of unshakable truth, but there needs to be more human intervention and curation in how we build and deploy AI.”

The problem with algorithmic bias is that it can travel at the speed of light. Tech companies and the products they create have the power to capture our attention, time, energy and money; to disseminate information; and to recognize, assemble and facilitate cultural trends. The features and add-ons they come up with are built to drive engagement: to keep us clicking, tapping, scrolling, reading, saving and favoriting. They hold enormous cultural and social power to change mindsets and influence our decisions, from buying groceries to choosing presidential candidates. The inherent bias in a system like this is deeply harmful because it perpetuates faulty notions of real people who are already marginalized.




A clear example of this is when Maggie Delano logged onto Glow, a period-tracking app, and found her options limited to exploring fertility and sex with heterosexual partners. As a member of the queer community not looking to get pregnant, she immediately felt excluded from the larger narrative, a daily fight for the LGBTQ community. Glow’s founding team, consisting entirely of men, didn’t consider the possibility of a user different from them. Another example is Facebook’s interface, which allows users to customize their gender however they like, but only after they’ve created a profile. The initial steps of creating a profile force a user to choose between the male and female binary; only once the profile exists can they select ‘Custom’ and pick from options like ‘trans woman’ or ‘non-binary.’ Clearly, Facebook’s system can support several genders; it simply forces people through a sign-up process that doesn’t work for everybody. Whether it’s entitlement, sheer laziness or just adherence to a system tech is accustomed to, this momentary exclusion shines a light on how comfortable big tech is with leaving intersectionality out of its core narrative.

“AI is completely entrenched in society today, in ways most of us don’t necessarily even realize. 15 years ago, tech used to be the sole province of ‘nerds’ or computer experts but now it’s a part of everyone’s daily lives. AI is a ‘sexy’ thing to talk about: everyone has heard of terms like ‘algorithm,’ ‘big data,’ etc. But very few people actually understand what those terms mean or how AI actually works: what it can do and what its limitations are,” Kale further elaborates. “I feel like creators are still exploring how AI can and should be used. But since it’s expensive to build and work with this technology, the only people who can really use it at scale, use it to generate or drive profits. And on the Internet, it comes down to advertising and sales. In consumer-facing services, AI is mostly used to sell you stuff, to advertise more effectively and in a very targeted manner. We’re in the future, and creators are still trying to find out where the line in the sand is in terms of invasion of privacy and ethical debates,” he concludes. 

Examples of racist, sexist, irresponsible AI, and its ongoing creation, highlight an environment of low accountability, which further alienates already vulnerable users. An AI researcher with a company in Mumbai, who has requested anonymity, argues, “AI should be used by people who’re trained. They’re [powerful] tools that can do a lot of damage in the hands of the wrong person.”

The effect of unregulated tech on developing countries is starker and more worrisome. With a lack of education and access to information, people from lower socio-economic strata risk ending up as lab rats, especially women. “Although smartphone penetration is higher than ever in India, much of this population is still a novice when it comes to engaging with the sort of technology that is available today. The public either doesn’t think about how the tech works or doesn’t care, and this is true in urban as well as rural areas. Many people who are aware at some level of things like privacy issues are okay with sacrificing their privacy for the convenience that these devices bring. It seems like people are okay with being experimented on as long as they get shinier toys to play with,” Kale elaborates. The corollary to this, and a more prevalent reality, is that of content moderators and the emotional and mental toll of sifting through all sorts of horrible things online to create a curated experience for the rest of us. Unfortunately, market forces will always work against ethics, which is why it becomes imperative to have AI that supports human goals but is constrained by collective human values.

One can begin by considering the matter of accountability with big tech companies like Apple, Microsoft, Google and Facebook, which are mandated to release diversity reports that tend to be littered with generalities. For example, the diversity reports Apple released in 2015 and 2017 showed the company’s workforce was 8-9% black. While that may not be a number to be proud of, most of those black employees work at Apple retail stores, explaining new MacBooks and ringing up iPads; the majority are not on the teams actually designing the product or taking calls on how to market it.

The lack of transparency is also present in the actual AI systems being designed. For example, the UNESCO report showed that Amazon’s AI recruiting software was predominantly trained on men’s resumes, and automatically downgraded resumes containing the word “women’s,” as in “women’s chess club captain.” Image-recognition software is also informed by gender roles, such as a woman cooking or a man playing sports, which it then amplifies. The report further states that in text-based machine learning, software “adopted sexist views of women’s career options, associating their roles with homemaking, as derived from articles on Google News.” Another 2015 study showed women were less likely than men to be shown advertisements for high-paying jobs on Google, which works to further widen the gender pay gap.
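The mechanism behind "sexist views derived from Google News" is that word embeddings absorb the statistical associations of their training text. The toy sketch below, with made-up three-number vectors (real embeddings have hundreds of dimensions and are learned, not hand-written), shows how a simple cosine-similarity comparison can reveal that a word like “homemaker” sits closer to “she” than to “he”:

```python
import math

# Toy word vectors, invented purely for illustration; real embeddings
# (e.g. word2vec trained on Google News) are learned from billions of words.
vectors = {
    "he":        [0.9, 0.1, 0.0],
    "she":       [0.1, 0.9, 0.0],
    "homemaker": [0.2, 0.8, 0.1],
    "engineer":  [0.8, 0.2, 0.1],
}

def cosine(a, b):
    """Cosine similarity: how closely two word vectors point the same way."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

def gender_lean(word):
    """Positive means closer to 'she'; negative means closer to 'he'."""
    return cosine(vectors[word], vectors["she"]) - cosine(vectors[word], vectors["he"])

print(gender_lean("homemaker"))  # positive: the model has learned 'homemaker' leans female
print(gender_lean("engineer"))   # negative: 'engineer' leans male
```

A downstream system (a resume screener, an ad targeter) that consumes such vectors inherits these leanings without anyone explicitly coding them in.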

This bias plays out not only in the obscure, algorithm-driven backends of technology but in real life too. As recently as 2015, Louise Selby, a pediatrician in Cambridge, England, joined PureGym, a British chain. Every time she tried to swipe her membership card to access the women’s locker room, she was denied. The third-party software PureGym used to manage membership data across all ninety of its locations relied on members’ titles to determine which locker room they could access, and the title ‘Doctor’ was coded as male: a very clear example of implicit bias in current technology. Leaving subjective, value-laden decision-making to machines trained on human imprints does little more than reflect our biases back at us, all the while making it seem like objective computation.
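The PureGym failure can be boiled down to a few lines. The sketch below is a hypothetical reconstruction (the actual third-party code is not public, and all names here are invented): gender is inferred from a title table in which ‘Dr’ is hard-coded as male, instead of being read from the member’s own record:

```python
# Hypothetical reconstruction of the PureGym-style bug, for illustration only.
# The biased assumption lives in one innocent-looking table entry.
TITLE_TO_GENDER = {
    "Mr":   "male",
    "Mrs":  "female",
    "Miss": "female",
    "Ms":   "female",
    "Dr":   "male",   # the implicit bias: all doctors assumed to be men
}

def locker_room_for(title):
    # Access control keyed on an attribute *inferred* from the title,
    # rather than on the gender the member actually registered with.
    return "men's" if TITLE_TO_GENDER.get(title) == "male" else "women's"

print(locker_room_for("Dr"))  # "men's": Dr. Louise Selby's card is rejected at the women's door
print(locker_room_for("Ms"))  # "women's"
```

The fix is equally small: key access on the member’s recorded gender and drop the title lookup entirely, which is why such bugs are better described as design assumptions than as coding errors.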

These are all glaring examples of unchecked technology in an industry that isn’t being regulated as well as it should be; an industry that’s willing to chase “delight” and “disruption” but hasn’t thought enough about who it’s serving with its products and who it’s alienating.

Another way AI alienates and manipulates users and perpetuates existing stereotypes is through ads. Companies use persuasion architecture to capture our attention and, eventually, our wallets. We’ve all experienced moments where we talk about socks with a friend and, minutes later, a series of ads for socks pops up on our social media feeds. Here, gender comes into play: whether it’s butt acne or fairness creams targeted at a woman who searches for skin products, or detox and appetite-suppressing teas served as ads to a woman scrolling through fitness videos on Instagram. All the data we upload online, our entire digital footprint, is increasingly as valuable as our offline data, and is often sold to the highest bidder. AI can learn from this information and target new customers whose behavior matches a corresponding history.

This also occurs with the dissemination of information: watching Trump videos on YouTube leads to videos of more white nationalism, and watching Hillary Clinton videos leads to left-wing conspiracy videos. The way the Internet works, if you can entice people into thinking you can show them something more hardcore, they’ll stay on the website and go further down the rabbit hole. This risks creating an echo chamber in which one isn’t presented with hard facts but with whatever an algorithm believes is relevant, based on one’s browsing patterns. In a world where politics and daily culture are assimilated online, and people turn to search engines and social media to make sense of the world, being part of an echo chamber can be detrimental to forming an educated, individual opinion. Persuasion architecture works in the same vein as the basic manipulation of candy placed at eye level on grocery store check-out counters, except now the manipulation can be inferred, targeted and deployed at individuals one by one, tailored to their weaknesses and sent to their private phone screens.
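The rabbit-hole dynamic follows from the objective itself. Below is a toy sketch of an engagement-maximizing recommender, with an invented catalogue and made-up scores; real systems use learned models, but the objective — pick whatever is predicted to keep the user watching longest — is the same, and when more extreme content holds attention longer, the objective alone pulls recommendations toward it:

```python
# Toy engagement-maximizing recommender. Catalogue, tags and predicted
# watch times are invented for illustration.
catalogue = {
    "politics_mainstream": {"intensity": 1, "predicted_minutes": 4},
    "politics_partisan":   {"intensity": 2, "predicted_minutes": 7},
    "politics_extreme":    {"intensity": 3, "predicted_minutes": 11},
}

def recommend(history):
    # Greedily pick the unwatched video with the highest predicted watch
    # time. In this catalogue, intensity and watch time rise together,
    # so "maximize engagement" quietly becomes "escalate intensity."
    unwatched = {k: v for k, v in catalogue.items() if k not in history}
    return max(unwatched, key=lambda k: unwatched[k]["predicted_minutes"])

print(recommend(["politics_mainstream"]))  # "politics_extreme"
```

Nothing in the code mentions ideology; the escalation is an emergent property of optimizing a single engagement metric over content where extremity and watch time correlate.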




Ad-financed platforms like Facebook and Instagram boast that their services are free, which means we’re the product being sold. Algorithms don’t know the difference between selling you shoes and selling you politics, and in a world where information has become selectively disseminated and manufactured, that’s a slippery slope. The AI researcher and industry insider explains further: “It’s also important to bear in mind that at the end of the day, companies do exist to turn a profit. If Facebook realizes they can grow significantly by having less biased models or less personalized feedback, they’d move in that direction. It’s a tough look in the mirror for us. Responsibility has to be completely shared, and the onus lies on the user to build an unbiased picture,” he concludes.

Take the 2016 example of Microsoft’s chatbot Tay, which turned into a racist, fascist, hate-spewing bot over the course of a day because of its access to the data set present on Twitter and its interactions with people online. Soon after Tay launched, people barraged it with racist, misogynistic and homophobic comments, and Tay began to mimic those sentiments back at users. Clearly, Microsoft did not account for the live filtering Tay would need in order to put coherent opinions out to the public. For a chatbot that learns through real-time interactions with people, it’s hard to pinpoint exactly where accountability lies. Does it lie with Microsoft, for putting a fragile AI system online without understanding the possible ramifications, and without programming in safeguards against the more vitriolic, extreme opinions, or at least a way to filter through them and reach its own sound conclusions? Does it lie with the people who chose to engage with it? Or both?

Either way, “there have to be stringent checks at several levels that need to be a part of the process of building this technology. At present, this may not be a priority for a lot of companies (or even individuals or startups, etc.) in the tech space. ‘Fail fast’ seems to be the way a lot of companies work, but I feel like ethics and biases need to be considered more deliberately. A lot of creators and tech specialists understand how bias works at a technical, machine-learning level, but not in terms of the real impact it can have on society. I feel like that can and will only happen when there are laws and ethical guidelines in place to enforce this. There are, of course, many experts and organizations thinking of these issues across the world at the moment but it needs to be taken more seriously at scale and much faster than it seems to be happening now,” Kale concludes. 

When you work in technology and don’t look like Elon Musk, Mark Zuckerberg or Sundar Pichai, your credibility comes under unnecessary scrutiny. Research shows that when women coders hide their gender, their code is accepted 4% more often. Consider Google software engineer James Damore, who asserted “that the distribution of preferences and abilities of men and women differ in part due to biological causes and that these differences may explain why we don’t see equal representation of women in tech and leadership,” and went on to imply that women’s “stronger interest in people” and “neuroticism” might make them less naturally suited to being coders at Google. Cases like his corroborate why women coders feel the need to hide their gender. Having a near-equal gender balance in the tech workforce would be the first, most obvious step in ensuring that collective needs and expectations are reflected in the actual coding and design process. Bias within the technology industry itself is the starting point of the problem, which is why diversity would be one of the first important steps in establishing accountability and addressing implicit bias, both in AI systems and in the people who create and deploy them.

Public spaces often don’t serve everyone equally, and that failure has now carried over to technology. Machines can’t operate on a singular notion of “knowing,” since that leads to exclusionary and discriminatory practices in the virtual world, too. Having machines learn from data sets that are diverse enough will positively affect decisions like who gets hired, fired or admitted into which college, and auditing existing software for the “coded gaze” helps us correct each other’s blind spots. Tech companies in the West, which set the tone for tech companies here, hire for what they term a “culture fit.” But if their culture is curated and perpetuated by a certain kind of person, often heterosexual white males, where does that leave the rest of us? We are being influenced by the industry without having any clear representation or stake in it.


Written By Alina Gufran

Alina Gufran is a screenwriter, filmmaker and features writer based out of Mumbai. Third-culture kid.

