More than 27 rights groups have called on Zoom to end its ongoing exploration of a facial recognition technology that purportedly reads people’s emotions. The video conferencing platform, which became a mainstay of work interactions during the pandemic, announced a new feature called “Zoom IQ for Sales” last month. The software will provide post-meeting analysis of participants’ “sentiments,” especially during sales calls, Protocol reported. “You will be able to measure that they weren’t very well engaged,” Josh Dulberger, head of Product, Data and AI at Zoom, told Protocol.
“This move to mine users for emotional data points based on the false idea that AI can track and analyze human emotions is a violation of privacy and human rights,” the rights groups said in an open letter to Zoom’s CEO, Eric S. Yuan. The activists went on to call the technology punitive, manipulative, and based on debunked science. “It is not possible to confidently infer happiness from a smile, anger from a scowl, or sadness from a frown, as much of current technology tries to do when applying what are mistakenly believed to be the scientific facts,” one meta-analysis concluded.
The chief concerns over the software, though, are that its use could enable discriminatory practices due to built-in racial bias in AI, or trigger punitive action against individuals whose facial expressions the tech misinterprets.
The technology itself isn’t new: “sentiment analysis,” as it is called, has been around since the early aughts and continues to be widely used in companies’ sales and marketing divisions. The premise is that a tool gauging clients’ responses lets sales representatives tune their pitch accordingly and close deals faster by recognizing negative reactions like disinterest or frustration.
Take Zoom’s own description of the feature. One metric the software will track is “filler words”: “Filler words, such as ah, um, and hmm, can be an indicator that the sales rep is not familiar or confident with what they are saying,” the company says. Talk-listen ratio, talking speed, and patience are among the other metrics, besides sentiment analysis, that the tool will offer.
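To make concrete how shallow such metrics are, here is a minimal, hypothetical Python sketch of how statistics like filler-word rate, talk-listen ratio, and talking speed could be computed from a call transcript. It is not Zoom’s implementation; the transcript format, speaker labels, word lists, and metric definitions are all invented for illustration.

```python
# A minimal, hypothetical sketch of the kind of surface-level call metrics
# described above. This is NOT Zoom's implementation: the transcript format,
# speaker labels, word lists, and metric definitions are all assumptions.

# Assumed filler-word list (the article names "ah," "um," and "hmm";
# any extras are assumptions).
FILLER_WORDS = {"ah", "um", "hmm", "uh"}

# Crude lexicon stand-in for "sentiment analysis"; real systems are far more
# complex, and (as this article argues) still unreliable proxies for emotion.
NEGATIVE_WORDS = {"no", "expensive", "confused", "frustrated", "unsure"}


def tokenize(text):
    """Lowercase each word and strip simple punctuation."""
    return [w.strip(".,?!").lower() for w in text.split()]


def call_metrics(utterances):
    """Compute naive engagement metrics from a transcript.

    `utterances` is a list of (speaker, seconds, text) tuples, where the
    speaker is either "rep" or "client" (a hypothetical schema).
    """
    rep_time = sum(sec for spk, sec, _ in utterances if spk == "rep")
    client_time = sum(sec for spk, sec, _ in utterances if spk == "client")

    rep_words = [w for spk, _, text in utterances if spk == "rep"
                 for w in tokenize(text)]
    client_words = [w for spk, _, text in utterances if spk == "client"
                    for w in tokenize(text)]

    filler_count = sum(1 for w in rep_words if w in FILLER_WORDS)
    negative_count = sum(1 for w in client_words if w in NEGATIVE_WORDS)

    return {
        # How much the rep talks relative to listening.
        "talk_listen_ratio": rep_time / max(client_time, 1e-9),
        # Filler words per 100 rep words, the crude "confidence" proxy.
        "filler_rate": 100 * filler_count / max(len(rep_words), 1),
        # Words per minute as a stand-in for talking speed.
        "talking_speed_wpm": 60 * len(rep_words) / max(rep_time, 1e-9),
        # Share of client words flagged "negative" by the toy lexicon.
        "client_negativity": negative_count / max(len(client_words), 1),
    }


if __name__ == "__main__":
    transcript = [
        ("rep", 30, "Um, so our product, ah, basically does analytics."),
        ("client", 10, "Okay, it sounds expensive and I am unsure."),
        ("rep", 45, "Hmm, let me, um, pull up the pricing slide."),
    ]
    print(call_metrics(transcript))
```

Even this toy version makes the limitation plain: it counts words and seconds. It does not perceive emotion.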
Zoom is also a strategic investor in a voice analysis startup called Observe.ai, which provides a tool to track voice and text conversations, enabling employers to flag “dreadful experiences” in customer engagements.
But many experts are wary, not least because it is dystopian to expect an algorithm to read human faces better than human beings themselves can. “Our emotional states and our innermost thoughts should be free from surveillance,” Daniel Leufer, Senior Policy Analyst at Access Now, noted in a statement, adding that even if the tech were accurate (which it isn’t), it would have no place in our society.
Previous research has found, moreover, that emotion-detection tech can assign negative emotions or attributes to people of certain races, and specifically to Black men’s faces. Another study found that people’s facial expressions can, and often do, differ from their actual emotional reactions.
“These tools can take us back to the phrenological past, when spurious claims were used to support existing systems of power… A narrow taxonomy of emotions… is being coded into machine-learning systems as a proxy for the infinite complexity of emotional experience in the world,” writes researcher Kate Crawford.
Indeed, emotions are complex and ever-changing; interpreting them with complete accuracy is arguably impossible. But under the auspices of big tech, the richness of the human condition is often abstracted into simplistic models, putting people and their personhood into boxes for the sake of profit.