Lensa AI has taken over Instagram. The artificial intelligence app allows users to upload their pictures and generate "magic avatars": lush, art-inspired portraits of themselves. These are compelling images, but controversy surrounds the ethics of the platform. For one, it infringes on artists' copyright. Even worse, it facilitates the easy creation of deepfakes: synthetic media in which a person's image is manipulated to advance false information about them.
Lensa has come under criticism in the last few days for making it extraordinarily easy to generate NSFW portraits of someone from a handful of their images, easily sourced from their social media, a capability it shouldn't have. To test this, The Guardian uploaded photos of Betty Friedan, Amelia Earhart, and Shirley Chisholm: a feminist pioneer, the first woman to fly solo across the Atlantic, and the first Black woman elected to the United States Congress, respectively. Lensa returned highly sexualized images of all three, most strikingly one of Earhart lying naked in bed.
“AI art generators evade content moderation entirely,” wrote Olivia Snow in Wired. Snow tried the same experiment, even including photos of herself as a child and as an adolescent. Lensa returned images that were highly sexualized, incorporating childlike features onto an adult body.
And yet, Lensa has shot to popularity at an alarming rate, becoming one of the most downloaded apps across app stores. It comes on the heels of other easily accessible AI-generated content offered for a fee. This marks a shift in the world of tech, and what makes it different is the proliferation of generative AI embedded into user-friendly apps that generate content on demand.
In tech circles, 2022 has been declared the year of generative AI. Venture capital firms tout it as "the beginning of a platform shift in technology," and generative AI is behind concerns over the future of writing, music, journalism, and, of course, art. The verdict of business and finance media is in: generative AI is the new frontier of development in the tech space, with many venture capitalists looking to invest in generative AI startups. It is fast becoming a highly lucrative business, perhaps the most monetizable trend in tech. But this is a slippery slope.
Deepfakes existed before generative AI became popular, but creating them required some degree of technical expertise and know-how. Even before apps like Lensa, deepfakes wrought havoc on people's lives. Women who had never taken nudes of themselves found naked pictures of themselves posted online. Deepfake porn emerged as a genre of its own, starring people who never consented to their images being used this way. Deepfakes have even facilitated identity theft and impersonation. They have started to compromise medical privacy too: a Nature study showed it is possible to generate deepfake electrocardiogram readings, pointing to how vulnerable this information is once it exists in the public domain.
Apps and websites that offer such deepfake services are usually taken down quickly. But now, investors are actively soliciting generative AI startups. As a result, the data we relinquish privacy over can come back to haunt us. What's at stake here is nothing short of our agency over our voices, skills, labor, images, and speech: in short, everything that makes us unique as individuals. Even as generative AI changes how content is created (devaluing human creativity in the process), it also legitimizes the sale of our likenesses.