Why We Really Shouldn’t Be Training AI to Decipher Facial Expressions
An embarrassed smile after a fall isn’t a sign of joy, and a murderer crying at a funeral isn’t a sign of sorrow. Researchers recently concluded that facial expressions are notoriously inaccurate measures of what we’re really thinking. Yet, technology to determine emotions via facial expressions remains under development.
The problem with building such technology lies in the ethical red flags and privacy concerns it could create. Standardizing facial expressions to correlate with specific emotions is impossible, and feeding technology preconceived notions of what emotions certain expressions denote could lead to mishaps. For example, if a certain set of expressions were categorized as guilt, an algorithm could flag those expressions as intent to commit an act that might make one feel guilty. Now imagine this context within the criminal justice system. That's a lot of responsibility to place upon a bunch of facial muscles.
The research, presented at the American Association for the Advancement of Science meeting in Seattle, studied muscle movement in individuals' faces and compared it with their actual emotions. The researchers found that an individual's facial expressions almost never lined up with what they were really thinking.
“It’s important to realize that not everyone who smiles is happy. Not everyone who is happy smiles. I would even go to the extreme of saying most people who do not smile are not necessarily unhappy. And if you are happy for a whole day, you don’t go walking down the street with a smile on your face. You’re just happy,” Aleix Martinez, one of the researchers and a professor of electrical and computer engineering at The Ohio State University, said in a statement.
According to Martinez, context, social norms, and cultural background strongly influence how we react to situations. People often smile in complex situations out of an obligation to social norms, react in ways unique to their cultures, or respond according to their life experience and mood. People laugh at funerals out of anxiety; would that make them murderous? Hardly likely.
However, this doesn’t mean attempts to teach algorithms and artificial intelligence to understand human behavior are futile. According to Martinez, there is more context one can add beyond facial expressions in order to understand an individual’s emotions. Facial color and body language are other indicators of what an individual might be feeling. As an experiment, researchers showed people a close-cropped photo of a man’s face, which looked like he was angry and shouting. “When people looked at it, they would think, wow, this guy is super annoyed, or really mad at something, that he’s angry and shouting … but when participants saw the whole image, they saw that it was a soccer player who was celebrating a goal,” Martinez said.
While Martinez continues to believe in the development of computer algorithms to understand social cues and intent, he added, “you are never going to get 100% accuracy.
“Deciphering a person’s intent goes beyond their facial expression, and it’s important that people — and the computer algorithms they create — understand that.”