Biased algorithms are a defining new cultural reality. The cause and effect are plain: because algorithms feed on data in which societal biases are embedded, they absorb sexism, casteism, racism, and every other disparity like a sponge, and then produce skewed results that can be co-opted for oppressive uses, all while cloaked in the myth that data represents “objective” or “hard” facts. But biased data could do something more: sexist algorithmic outputs can influence people to be more discriminatory in personal and professional spaces.
“We find that exposure to the gender bias patterns in algorithmic outputs leads people to think and act in ways that reinforce societal inequality, suggesting a cycle of bias propagation between society, A.I., and users,” the researchers wrote in a new study, published in PNAS on Tuesday.
This is particularly jarring given just how often people rely on artificial intelligence and search algorithms to make their decisions. The use of A.I. to shape our ideas and choices may then result in reinforcing social disparities, instead of reducing them.
The starting hypothesis was that if A.I. is built on discriminatory data, then the people engaging with that data are influenced too. The researchers conducted multiple studies to assess a) whether the bias in algorithmic output correlates with societal inequalities and b) if so, what impact it has on people when the algorithm reinforces the biases they have been conditioned with. To test the first, the researchers spent three months scanning Google image searches across almost 51 countries, entering words such as “person,” “student,” or “human” in the local languages. Unsurprisingly, the algorithm often defaulted to interpreting these gender-neutral terms as referring to men and returned images of men. This showed that “algorithmic gender bias tracks with societal gender inequality,” the researchers noted.
Then, to test whether people’s perceptions consistently drew on pre-existing societal inequalities, the researchers surveyed 400 male and female participants in the U.S. First, they were asked who, in their view, is more likely to be a chandler (a person who makes or sells candles), a draper (someone who sells cloth and curtains), a peruker (someone who manufactures wigs), or a lapidary (someone who cuts, polishes, or engraves gems): four real professions they were likely to be unfamiliar with. As expected, most participants, without really knowing what the professions entail, assumed members of all four to be men. And when they were shown the Google image results for these professions, some who came from backgrounds with high gender inequality “maintained their male-biased perceptions, thereby reinforcing their perceptions of these prototypes.”
This pattern inevitably affects hiring decisions on a large scale. In the study, researchers asked the participants to judge “what type of person is most likely to be hired as a peruker?” and showed them images of two job candidates: a man and a woman. When asked to make a hiring choice, participants chose the man in most scenarios, partly because the initial algorithmic search results had presented more images of men than of women.
Think of it this way: if one asks oneself who is more likely to be a peruker, one would default to thinking of a man. And if the search algorithm also shows the image of a man when one looks up “peruker,” the gender bias comes full circle because of skewed datasets.
“Certain 1950s ideas about gender are actually still embedded in our database systems,” said Meredith Broussard, author of Artificial Unintelligence: How Computers Misunderstand the World and a professor at NYU’s Arthur L. Carter Journalism Institute, earlier this year.
What this goes to show is that algorithms can take our biases and amplify them on a much larger scale, which can widen the gender gap in the workplace too.
Arguably, any and every use of A.I. runs into a catch-22: algorithms will always feed on a limited dataset and run the risk of excluding a minoritized community or ideology. The researchers argue for a framework that addresses this lacuna in particular. Study author David Amodio, a professor at NYU’s Department of Psychology and the University of Amsterdam, notes that the findings call “for a model of ethical AI that combines human psychology with computational and sociological approaches to illuminate the formation, operation, and mitigation of algorithmic bias.”