New Research Links Social Bias to How People Recognize Faces
The findings have implications for the use of facial recognition technology for surveillance and law enforcement.
The face is a portal into someone’s thoughts, feelings, and intentions. How people perceive others’ faces, however, depends on their prior understanding of the person’s personality, new research notes. That is, it is not only a face’s visual cues, such as the nose or chin, that determine how someone perceives a face.
The research, published in Cognition, found that if someone thinks two individuals have similar personalities, they are likely to think the two have similar faces, too.
“Our findings show that the perception of facial identity is driven not only by facial features, such as the eyes and chin but also distorted by the social knowledge we have learned about others,” Jonathan Freeman, an associate professor of psychology at New York University and the paper’s senior author, noted. He added that people tend to bias facial recognition towards alternate identities “even though those identities lack any physical resemblance.”
To show this, Freeman, along with two other researchers, conducted a series of experiments testing perceptions of famous individuals as well as lesser-known ones. They employed a technique called “reverse correlation,” which let them visualize the mental image of a face in a participant’s head. If, say, someone thought Russian President Vladimir Putin had a similar personality to pop musician Justin Bieber, that participant’s mental images of the two faces were also more similar.
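Reverse correlation works, roughly, by showing a participant many pairs of noisy variants of a base face and asking which better resembles a target identity; averaging the noise from the chosen images reconstructs an estimate of the participant’s mental image of that face. A minimal sketch of the idea, with a simulated participant standing in for a real one (the resolution, trial count, and simulated observer are all illustrative, not taken from the study):

```python
import numpy as np

# Minimal sketch of a reverse-correlation ("classification image") task.
# Everything here (image size, trial count, the simulated observer) is
# illustrative, not the study's actual procedure or data.

rng = np.random.default_rng(0)
SIZE = 64
base_face = rng.random((SIZE, SIZE))  # stand-in for a neutral base face image

# The participant's (unobservable) mental image of the target identity.
mental_image = base_face + 0.3 * rng.standard_normal((SIZE, SIZE))

def resemblance(stimulus, template):
    """How strongly a noisy stimulus resembles the mental image (higher = closer)."""
    return -np.sum((stimulus - template) ** 2)

chosen_noise = []
for _ in range(1000):  # many two-alternative trials
    noise = rng.standard_normal((SIZE, SIZE))
    option_a, option_b = base_face + noise, base_face - noise
    # The participant picks whichever noisy face better matches their mental image.
    if resemblance(option_a, mental_image) > resemblance(option_b, mental_image):
        chosen_noise.append(noise)
    else:
        chosen_noise.append(-noise)

# Averaging the chosen noise recovers a "classification image": an estimate
# of the face the participant holds in their head for that identity.
classification_image = base_face + np.mean(chosen_noise, axis=0)
```

Comparing the reconstructed images elicited for two identities then yields a measure of how similar the faces look in a participant’s mind, independent of the identities’ actual photographs.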
The finding sheds light on how facial recognition unfolds in the brain, suggesting that prior social knowledge about a person plays an active role in how we visualize their face. “If the perception of others’ faces is systematically warped by our prior understanding of their personality, as our findings show, it could affect the ways we behave and interact with them,” Freeman notes.
Since social knowledge, or perceived personality, shapes face recognition, it’s important to interrogate how this process unfolds. Freeman, in another study conducted last year, pointed out that “we form spontaneous judgments of other people that can be largely outside awareness.”
The findings help explain biases and prejudices against marginalized communities. If people from a particular region, caste, or racial community are believed to have similar personalities or ideologies, others may think they look alike. This tendency is quite common. A study from 2011 showed that people find it difficult to distinguish between individuals of other races, a phenomenon called the “other-race effect.” “It could be because people of other races are generally perceived to have fewer unique personal attributes and, therefore, to have more in common with one another,” The Guardian noted.
Despite how common this is, the behavioral tendency is harmful. There is a cost to thinking all Black faces, or faces of people from Northeast India, or faces of people from a minority religious community like Muslims, look the same.
A separate study conducted last month expands on this. It noted that judgments based on face type can amplify racial biases by linking facial features to stereotypes: Black faces were judged “more aggressive,” a bias that altered people’s behavior toward them.
“We all form first impressions when we encounter a new person. People should be aware that these impressions may be based on something as shallow as facial features and do not indicate future behavior. We should be aware of our own biases,” Heather Kleider-Offutt, an associate professor of psychology and neuroscience and the study’s lead author, said.
This behavioral bias naturally has implications for the use of facial recognition technology (FRT), a system used to surveil people based on how they look. FRT’s use is ramping up particularly in India. The Lucknow government is employing FRT to alert the police if a woman is in distress. Law enforcement agencies used FRT to track down protesters present at New Delhi’s Red Fort on January 26th, 2021, the site of violence between police officials and farmers. The technology was also used to track protesters against the Citizenship Amendment Act. In a seemingly innocuous setting, the Central Board of Secondary Education (CBSE) used FRT to match students logging in to their board exams against the admit card photos on record, without students’ consent. The Indian government is also mulling its use to track Covid-19 vaccination.
Experts have raised concerns over the accuracy of these systems, pointing out that they encode inherent biases. The Association for Computing Machinery called for a suspension of facial recognition technology, citing “clear bias based on ethnic, racial, gender, and other human characteristics,” Nature reported last year. In 2019, a tool wrongly flagged a Brown University student as a suspect in the Sri Lanka bombings, and the student went on to receive death threats. In a National Institute of Standards and Technology (NIST) report, researchers found FRT falsely identified Black and Asian faces 10 to 100 times more often than it did white faces. Most facial recognition algorithms exhibit this bias, they noted.
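The “10 to 100 times” figure boils down to simple arithmetic on per-group error rates: the false match rate (how often the system declares two photos of different people a match) is computed separately for each demographic group, and the rates are compared. A toy illustration with invented numbers, not NIST’s data:

```python
# Invented counts for illustration only; not NIST's data.
# A "false match" = the system declares two photos of different people a match.
trials = {
    "group_a": {"false_matches": 200, "comparisons": 100_000},
    "group_b": {"false_matches": 2, "comparisons": 100_000},
}

rates = {g: t["false_matches"] / t["comparisons"] for g, t in trials.items()}
print(rates)  # per-group false match rates: 0.002 vs 0.00002

# The disparity the report describes is the ratio between group rates.
print(rates["group_a"] / rates["group_b"])  # 100.0
```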
Experts fear enforcement agencies can employ the technology in a discriminatory manner, further harming marginalized populations. The Internet Freedom Foundation noted last year that “the impact on marginalized communities gains special importance for us locally due to the wide inequality and diversity present in our [Indian] society.”
In the end, these technologies become ways of furthering bias, rather than challenging it. “Algorithms used in artificial intelligence are only as good as the data used to create them—data that often reflects racial, gender, and other human biases,” researchers noted in The Regulatory Review.
Saumya Kalia is an Associate Editor at The Swaddle. Her journalism and writing explore issues of social justice, digital sub-cultures, media ecosystem, literature, and memory as they cut across socio-cultural periods. You can reach her at @Saumya_Kalia.