A recent analysis showed that deepfakes have entered the corporate world, using LinkedIn as their Trojan Horse. The dangers of fake profiles on the Internet, deepfakes and otherwise, cannot be overstated: from drumming up support for autocrats to spurring racial hatred toward minorities, deepfakes have, more often than not, been used for some of the most nefarious activities online.
The new findings, heralded by the Stanford Internet Observatory, show how companies are using fake profiles as a cost-cutting measure for sales and marketing. The saga began when researcher Renée DiResta received a message on LinkedIn from someone named Keenan Ramsey and noticed something was off. For one, Ramsey's photograph was vague and hazy: a missing earring, discontinuous hair strands, and a blurred background all added to her suspicions. The most tell-tale sign, according to DiResta, was that the eyes sat precisely in the center of the photo. The mysterious profile then prompted DiResta to analyze how many such fake profiles existed on LinkedIn.
The research then led to the finding that over 70 businesses employed the strategy to boost their sales at lower costs. The chain of events goes like this: a fake profile makes a sales pitch to a real person on LinkedIn; if successful, a real employee from the business takes over the conversation.
That deepfakes have entered the corporate networking site has worrying implications. For one, deepfakes are created using artificial intelligence (A.I.), and A.I., in turn, has been shown to carry a Eurocentric bias and reinforce Eurocentric beauty norms. Now, in the world of work, a deluge of such profiles could reinforce racialized stereotypes about professionalism too. Researchers have noted in the past that the values embedded in mainstream ideas of professionalism are deeply tied to white supremacy.
The metrics used to gauge somebody on a scale of professionalism remain encoded with markers of whiteness, class, and privilege: all of which are clearly visible on LinkedIn.
Previous research has shown, moreover, that people tend to rate "synthetic," or artificially generated, faces as more trustworthy than real people's faces. Combined with A.I.'s Eurocentric bias, the perceived trustworthiness of these synthetic faces means that LinkedIn deepfakes could further an existing racial divide in employment and in perceptions of trustworthiness.
“That face tends to look trustworthy, because it’s familiar, right? It looks like somebody we know,” Hany Farid, digital media forensics expert from the University of California, Berkeley, told NPR.
In other words, whom we find inherently trustworthy may be shaped by racialized notions of professionalism and Eurocentric standards. Thus, while fake profiles being used to make sales pitches or recruit people may not, by itself, be dangerous, the implications of A.I.'s deeper involvement in human connections are troubling, to say the least.
“Instead of attempting to make a less-than-direct sale, Twitter bots from Amazon and other sources often spread disinformation and propaganda on behalf of companies and governments,” NPR noted.