What the ‘Dead Internet Theory’ Predicted About the Future of Digital Life
As AI takes over more content, it brings us closer to a conspiracy theory that predicted just that.
The internet may have died. And in its wake, the line between humans and bots blurs further by the day. The “dead internet theory” began as a fringe conspiracy, but it contains nuggets that feel true to our age. Some experts now predict that much of the internet will soon be generated by AI — reviving questions about authenticity and humanity online that an outlier coterie of users asked years ago.
“The U.S. government is engaging in an artificial intelligence powered gaslighting of the entire world population,” wrote a user on the forum Agora Road, where the theory originated. The user claimed that the internet “died” in 2016 or 2017 — and that what we see now is artificially generated content primed to get us to buy more stuff. “… unlike the internet’s many other conspiracy theorists, who are boring or really gullible or motivated by odd politics, the dead-internet people kind of have a point,” noted Kaitlyn Tiffany in The Atlantic. “Everything that once seemed definitively and unquestionably real now seems slightly fake,” wrote Max Read in New York Magazine. Indeed, several metrics like engagement and traffic, measured through clicks and likes, can be bought — or faked by bots.
It turns out that the theory offered some prescient breadcrumbs to a future that now feels inevitable: the internet is more populated by bots and AI-generated content than ever, and experts predict the tide is about to swell. “… in the scenario where GPT-3 ‘gets loose,’ the internet would be completely unrecognizable,” said Timothy Shoup of the Copenhagen Institute for Futures Studies.
Shoup predicted that by 2025-2026, nearly 99% of the internet’s content would be generated by artificial intelligence. GPT-3 — short for Generative Pre-trained Transformer 3 — is a language model that uses patterns learned from vast amounts of existing text to produce human-like writing. And it could get harder and harder to tell the difference.
Things got so bad at one point that people feared an “Inversion” on YouTube — wherein the algorithms would identify bots as humans and humans as bots. Fears of the Inversion may be exaggerated, but they still reveal nuggets of truth about how a lot of internet discourse functions. An Imperva report last year found that nearly a quarter of all online traffic is attributable to “bad bots” — bots that carry out malicious activity like “web scraping, competitive data mining, personal and financial data harvesting, brute-force login, digital ad fraud, spam, transaction fraud, and more.”
“Auditory and visual recordings of an event are often treated as a truthful account of an event. But what if these media can be generated artificially, adapted to show events that never took place, to misrepresent events, or to distort the truth?” observed Europol, the European law enforcement agency, which also recently expressed concern about AI taking over internet content.
These concerns came to a head with the recent surge in popularity of AI art — where open-source AI software now allows users to edit real faces while generating art. It not only risks making an existing deepfake problem worse — it also disadvantages artists by easily (and arguably, dishonestly) replicating a craft that takes years to learn. More recently, music producers have expressed concern about AI music generators threatening the music industry itself.
AI has even entered literature — a book of AI-written poetry has already hit bookstores. A machine screenwriter named Benjamin has written and released its debut film. There are robot journalists, too, which churn out basic news stories much faster than humans can. And there are AI podcasts that let us converse with dead people.
But it’s not just the tangible aspects of content that we can see, hear, or interact with. Algorithms are influenced by AI too — and they determine who and what we get to interact with online. Even the systems designed to root out misleading AI-generated content use AI themselves.
The problem, however, isn’t so much the fear of machines taking over — it’s how the machines make us feel a little dead inside ourselves. The abundance of AI and bots online could even be changing how we, the real humans, behave: how much are we really in control of what we say and do online, and at what point do we become indistinguishable from bots ourselves? Charlie Warzel, a New York Times journalist, wrote about trending topics on Twitter designed to fuel an age of “context collapse” — where random events are made to look like significant pop-culture moments, getting millions of people to talk about them.
Moreover, the advent of AI is leading humans to trust human decisions less than AI-made ones. And with AI mediating many of the digital platforms we spend our lives on, the way we engage with them is changing too.
“The big platforms do encourage their users to make the same conversations and arcs of feeling and cycles of outrage happen over and over, so much so that people may find themselves acting like bots, responding on impulse in predictable ways to things that were created, in all likelihood, to elicit that very response,” Tiffany observed.
The Turing Test, designed to assess whether a machine can convincingly imitate a human, may stand between us and AI for the time being. So far, no AI has passed it. But the fact that human beings themselves don’t always pass the test opens up a world of questions about how the internet is changing us.
The dead internet theory may be just a theory for now — but given the internet’s trajectory, it could well turn out to have been an oracle we didn’t heed in time.
Rohitha Naraharisetty is a Senior Associate Editor at The Swaddle. She writes about the intersection of gender, caste, social movements, and pop culture. She can be found on Instagram at @rohitha_97 or on Twitter at @romimacaronii.