With Deepfake Porn and God Chatbots, AI Dystopia is Here
Generative AI is worsening multiple socio-cultural crises, fueled by a misplaced faith in it as an ‘unbiased’ entity.
Instances of sexism, racism, and bigotry within AI systems have been widely reported, yet developments in AI have begun to inadvertently impact many more people. “Part of the appeal of algorithmic decision-making is that it seems to offer an objective way of overcoming human subjectivity, bias, and prejudice,” political philosopher Michael Sandel told the Harvard Gazette. But human subjectivity is what governs our morality, and these systems end up replicating the very biases we are trying so hard to move away from.
One axis along which this takes place is gendered violence. Deepfakes have been violating privacy since their inception, and generative AI makes the problem worse. The 21st century’s updated version of Photoshop, deepfakes are essentially hoaxes in the form of images, videos, and even audio clips. Used largely either to sway political allegiance or to create revenge porn, deepfakes are an assault on consent. According to a report by Deeptrace, 96% of deepfake videos on the internet are pornographic. To no one’s surprise, the targets of deepfake pornography are overwhelmingly women.
A recent Forbes report described an explosion of AI porn on Reddit, one that has begun to harm real women whose photos make up the training datasets. Reddit’s lax policies, and users’ ability to remain anonymous, sustain the problem. While the synthetic images are technically fake, “the models used to create them are trained on photos of real people.”
“The fundamental issue of generative AI porn is that it relies on the mass-scale theft of people’s, and disproportionately women’s, images and likeness,” said Noelle Martin, a survivor of sexual deepfakes and a legal researcher at the University of Western Australia’s tech and policy lab.
Although Reddit has a safety feature to filter adult content, a search for “AI porn” returns more than two dozen forums where explicit images are not only shared but also sold. Earlier this month, Claudia, an AI creation, posted her (AI-generated) lewd photos on subreddits including r/normalnudes and r/amihot. Created by two computer science students, who told Rolling Stone they executed the joke after being inspired by catfishing stories, Claudia was a product of Stable Diffusion, a machine-learning model that generates images from text prompts. Such systems are trained on millions of real images, hence the shockingly realistic outputs. Artists and stock-image platforms like Getty have recently sued AI text-to-image generators like Stability AI and Midjourney over copyright violations.
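To appreciate how low the barrier has become, consider a minimal sketch of open-source text-to-image generation, here using Hugging Face’s diffusers library; the model checkpoint and prompt are illustrative assumptions, not the actual setup of any service named in this piece.

```python
# A minimal sketch of open-source text-to-image generation, for illustration
# only; the checkpoint and prompt below are assumptions, not the pipeline
# used by any platform named in this article.
import torch
from diffusers import StableDiffusionPipeline

# Download a publicly released Stable Diffusion checkpoint.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")  # a single consumer GPU is enough

# One text prompt is all it takes; the realism comes from training on
# millions of scraped photos of real people.
image = pipe("studio portrait of a person smiling, natural light").images[0]
image.save("synthetic_portrait.png")
```

Notably, this pipeline ships with a built-in safety checker that blocks explicit outputs, but because the code and weights are open, that component can be, and routinely is, removed.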
Other porn-generating platforms, such as PornJourney, PornPen, PornJoy, and SoulGen, sell subscriptions through anonymous Discord or Patreon accounts connected to their Reddit profiles. Their websites offer a menu of ethnicities, body types, and other features from which to build a model, with no disclosure of how this data was obtained. There is only a perfunctory disclaimer absolving them of any conflict over their content: “Any generations of content on this website that resemble real people are purely coincidental. This AI mirrors biases and misconceptions that are present in its training data.”
The biggest player in this industry is Unstable Diffusion, whose AI systems are tailor-made for high-quality porn. TechCrunch reports that the server’s Patreon, started to keep the server running and to fund general development, is currently raking in over $2,500 a month from several hundred donors.
Tori Rousay, an advocacy manager at the National Center on Sexual Exploitation (NCOSE), says that most AI porn text-to-image generators, like Unstable Diffusion and PornPen, use open-source models from GitHub or Hugging Face to scrape images from porn websites and build databases of sexually explicit material. Increasingly, these images are being sourced from social media platforms such as Twitter and Instagram as well.
Although Reddit claims a stringent stance on regulating AI-generated content, these pervasive forums fly under its radar, falling into a moderation gray zone. Nicola Henry, who has studied tech-facilitated sexual violence and image-based sexual abuse for 20 years at the Royal Melbourne Institute of Technology, notes that AI porn may seem harmless at first glance, but some of these systems may be trained on images of underage individuals, which, if true, would make their output child sexual abuse material.
Although OpenAI and Stability AI have taken measures to remove explicit content from their systems, the software can still be manipulated: because some of these models’ code and weights are openly accessible, users can strip out the built-in filters.
AI porn, however, is not the whole story. AI gods have begun to tamper with already shaky socio-cultural currents. In India, Gita GPT is a chatbot that takes on the persona of Krishna and role-plays a sort of therapist for the general public. Created by Bangalore-based techie Sukuru Sai Vineet, the bot provides answers (mostly vague ones) based on the scriptures of the Bhagavad Gita, and opens with the prompt: “What troubles you, my child?”
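Part of what makes these god bots proliferate is how little engineering they require. The sketch below shows, hypothetically, how such a persona chatbot could be wired to a general-purpose language-model API; Gita GPT’s actual internals are not public, so the system prompt, model name, and function here are assumptions for illustration.

```python
# A hypothetical sketch of a scripture persona chatbot; Gita GPT's real
# internals are not public, so the prompt and model choice are assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The entire "persona" is one system prompt; there is no fact-checking,
# moderation, or theological review layer unless the developer adds one.
PERSONA = (
    "You are Krishna speaking to a devotee. Answer their troubles with "
    "guidance drawn from the Bhagavad Gita, quoting verses where relevant."
)

def ask_krishna(question: str) -> str:
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": PERSONA},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(ask_krishna("I feel lost in my career. What should I do?"))
```

The “deity,” in other words, is a thin prompt wrapped around a general-purpose model, which is why every bias in the underlying system can surface as divine counsel unless the developer adds filters.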
Bots playing gods are uncharted territory with plenty of potential for havoc, and at least five similar bots are already doing the rounds. Rest of World, a global tech publication, found that some of the answers generated by the Gita bots lack filters for casteism, misogyny, and even the law.
“It feels like this is a great thing [to build] for people starting out in tech, who want to get recognition and respect,” Viksit Gaur, a San Francisco-based entrepreneur and former head of user-facing AI at Dropbox, told Rest of World. “But someone else might pick up on this and say, ‘What if I could use this to shape opinion and drive my own agenda?’ And that’s where things get really insidious. So there is a lot of scope for danger here.”
This “AI-powered spiritual companion” is not as innocuous as it may seem. In some instances, these bots have claimed that murder is acceptable if it is one’s dharmic duty. They also betray political leanings, praising Indian Prime Minister Narendra Modi while putting down his opposition, Rahul Gandhi. Reportedly, Anant Sharma’s GitaGPT declared Gandhi “not competent enough to lead the country,” while Vikas Sahu’s Little Krishna chatbot said he “could use some more practice in his political strategies.” Bots are not supposed to offer personal opinions; these responses reflect the structural biases ingrained in their systems.
GitaGPT follows predecessors like BibleGPT and Ask Quran. On the question of responsibility, Vineet said: “So morality is not in the tool, it’s in the guy who’s using the tool. That’s why I’m emphasizing individual responsibility.”
AI’s power in political persuasion also spills over into electoral politics. Following President Joe Biden’s announcement that he will run for reelection in 2024, the Republican National Committee released an AI-generated video titled “Beat Biden,” depicting a dystopian version of his potential second term.
In Denmark, a political party led by an AI has been founded, aptly named the Synthetic Party. Created in May 2022 by the art collective Computer Lars and the non-governmental organization MindFuture Foundation, the party is helmed by Leader Lars, who is not a human but a chatbot that internet users can converse with via Discord, Forbes reports.
Henry Ajder, an advisor on AI policy, echoes fears of AI interfering in democracy: “We’ve already seen, even now in 2023, that deepfakes and generative AI have become a massive part of the digital landscape, in terms of memes and satire. So, I imagine in the year of the election in 2024, we will see much more of this kind of content, some of it intended as satire. Some of it is, you know, intended as kind of intentionally deceptive.”
Misinformation and fake news follow the echo-chamber effect: your beliefs are echoed back to you, reinforcing the sense that everyone around you shares your opinion, thanks to a curated lack of exposure to the other side. While consuming media, we barely ever stop to question the credibility of our sources, or why a particular piece of news is being catered specifically to us. As with all social media platforms, if you aren’t being charged, you (your data) are the product.
A study by the University of Oxford found that, despite clear differences in website traffic, the level of Facebook interaction (the total number of comments, shares, and reactions) generated by a small number of false news outlets matched or exceeded that produced by the most popular news brands. And while there are signs that can give away an AI-generated video, such as distorted lighting, blurry edges, or a lack of blinking, Ajder told SAN that expecting voters to discern real from fake content is not, ultimately, the solution.
A study by iProov revealed that, as of 2022, a mere 29% of people knew what deepfakes were. We cannot protect ourselves from a threat we do not know exists, and so the potential for fraud and misuse only multiplies. AI-generated political disinformation has already made headlines ahead of the 2024 election, as a doctored video of Biden appearing to give a speech attacking transgender people shows.
AP News lists a few hypotheticals that could surface during election season: automated calls in a candidate’s voice, deepfakes of a candidate expressing racist views or confessing to a crime, and fake images designed to look like local news reports.
Although attempts are being made to regulate and counteract misinformation, whether through lie-detector tests, detection tools like Detect Fakes, or state-mandated rules, this new mode of campaigning is entrenched in hostility and risks inciting violence and hate crimes.
Given AI’s power and opacity, ethicists believe that regulations around artificial intelligence need to be strengthened, but there is little consensus on how exactly to go about it. “Companies have to think seriously about the ethical dimensions of what they’re doing and we, as democratic citizens, have to educate ourselves about tech and its social and ethical implications — not only to decide what the regulations should be, but also to decide what role we want big tech and social media to play in our lives,” said Sandel.
Hetvi is an enthusiast of pop culture and all things literary. Her writing is at the convergence of gender, economics, technology and cultural criticism. You can find her at @hetviii.k.