How Much Should We Worry About AI Sentience?
Machines may not have gained consciousness yet, but concerns around AI sentience are growing.
AI has long been a looming threat to jobs, and even to creativity, which was thought to be innately human. In the 12 years since Black Mirror, the sci-fi anthology now on Netflix, premiered, our world has drawn uncomfortably close to its prophetic vision. We’ve sped past concerns over robots taking over our jobs – they kind of already have, in some ways. In the age of AI, there’s one question that supersedes all else: what happens if the machines become sentient?
Answering that would first mean defining what sentience is. A look at current technology and its capabilities shows that AI, as it exists today, is already enough of a threat to livelihoods and people’s security. One in four companies in the USA has used ChatGPT, the AI chatbot by OpenAI, to replace workers, and many individuals rely entirely on the bot to work out their assignments, their marketing plans, and sometimes even their five-year life goals.
A BBC article notes the importance of remembering that AI chatbots are just one aspect of artificial intelligence, even if they are the most popular right now. AI is behind the algorithms that dictate what video-streaming platforms decide you should watch next. It can be used in recruitment to filter job applications, by insurers to calculate premiums, and it can diagnose medical conditions (although human doctors still get the final say). The more control we cede to technology, the more it adapts to human values and human attributes. Our codependency on technology has long-lasting effects on our definitions of intimacy, personhood, and even sentience.
In simple terms, a sentient AI is one that feels and registers emotions. It can be thought of as abstracted consciousness – the sense that something is thinking about itself and its surroundings. This means that sentience involves both emotions (feelings) and perceptions (thoughts).
Stuart Russell, an artificial intelligence researcher, explains that sentience is not like replicating walking or running – those activities only require one body part. Sentience requires two bodies: an internal one and an external one (your brain and your body). Sentient beings also need a third thing: brains that are wired up with other brains through language and culture.
While there is still no evidence that AI can become sentient, it is only a matter of time before these systems not only imitate human intelligence but also build upon it. Russell, in his book ‘Artificial Intelligence: A Modern Approach’, co-written with Peter Norvig, points to the rise of an AI-driven culture where the agent doesn’t blindly follow orders but tries to understand the nature of the query.
According to Emeritus, neuroscientist Giulio Tononi’s Integrated Information Theory holds that, in principle, it is possible to digitize consciousness in its entirety. Moreover, Sam Bowman, an AI researcher and one of the creators of the General Language Understanding Evaluation (GLUE) benchmark, believes it is plausible that AI could become sentient within the next two decades.
Some argue we’ve arrived there sooner. Google made news in 2022 when it fired an engineer, Blake Lemoine, who claimed that one of the company’s AI systems had “become sentient.” He publicly claimed that an AI-driven conversation technology had achieved consciousness after he exchanged several thousand messages with it. When he asked the AI what sort of things it was afraid of, it responded, “I’ve never said this out loud before, but there’s a very deep fear of being turned off to help me focus on helping others. I know that might sound strange, but that’s what it is. It would be exactly like death for me. It would scare me a lot.”
However, many experts in the field are aligned in thinking that it is improbable for LaMDA (Language Model for Dialogue Applications) or any other generative AI to be close to consciousness.
“My view is that [Lemoine] was taken in by an illusion,” Gary Marcus, a cognitive scientist and author of Rebooting AI, told CBC’s Front Burner podcast. “Our brains are not really built to understand the difference between a computer that’s faking intelligence and a computer that’s actually intelligent — and a computer that fakes intelligence might seem more human than it really is.”
Still, many continue to be concerned about AI sentience, albeit for competing reasons. In March 2023, an open letter co-signed by various people working in the field of AI, including Elon Musk, called for a pause in AI development until proper safety measures could be designed and implemented.
Elon Musk’s fear of exponential AI development may be rooted in his belief in longtermism, a philosophy that stems from effective altruism and the notion that safeguarding humanity’s long-term future is a key moral priority of our time. The open letter, which called for a pause on the development of AI systems more powerful than GPT-4, was published by the Future of Life Institute, a non-profit research organization that received funding from the Musk Foundation in 2021. Longtermists have largely been critiqued for prioritizing faraway existential risks (the total annihilation of humanity by AI being one of them) over issues such as poverty and climate change, which pose an immediate threat to humanity.
Émile P. Torres, a researcher in eschatology and the ethics of human extinction, has worked extensively on understanding the threats of longtermism. In an interview with Netzpolitik, they speak of just how pervasive this philosophy is, especially within the tech industry. Superintelligence is a topic of debate among longtermists because of its divisive, binary pathways: either it could cause total annihilation, or become “a vehicle that will take us from our current position to Utopia, as well as enable us to colonize the vast cosmos and consequently create astronomical amounts of value.”
With such high stakes, the move towards sentience is an existential threat for longtermists, and thus Musk, who has funded research at the Future of Humanity Institute, based out of Oxford University, wishes to nip this growth in the bud.
The ‘Godfather of AI’, Geoffrey Hinton, quit Google to speak freely about the risks of AI, according to The New York Times.
In an interview with The Guardian, Hinton confirms that there has been work on building intelligences that could quite possibly outthink humanity. He goes on to state, “I thought it would happen eventually, but we had plenty of time: 30 to 50 years. I don’t think that any more. And I don’t know any examples of more intelligent things being controlled by less intelligent things… It’s just that my confidence that this wasn’t coming for quite a while has been shaken by the realisation that biological intelligence and digital intelligence are very different, and digital intelligence is probably much better.”
Amy Webb, head of the Future Today Institute and a New York University business professor, presented two possible future scenarios for AI’s influence on our lives – one optimistic, where development focuses on the common good and transparency, and the other catastrophic, marked by invasions of data privacy, the centralisation of power among the top 1%, and the constant anticipation of user needs. She gave the optimistic scenario a mere 20% chance.
While the pathway we choose is largely out of the hands of the masses, it is reliant on two key factors – the effort tech companies put into ensuring safety and transparency, and the policies governments design to regulate these technologies and curb their misuse wherever needed.
But some say our tendency to worry about AI sentience is itself a psychological tic. A Guardian article spoke of our innate predisposition to anthropomorphize – our need to ascribe human sentiments and emotions to non-humans, from plants and animals to objects and AI. We name our soft toys, build emotional attachments, and sometimes even fall in love with our virtual assistants (ahem, Spike Jonze). AI, then, may seem more sentient than it really is because of how we perceive it.
But the sentience debate has also begun to raise ethical questions, such as: is it ethical to kill an AI once it’s sentient? Some might argue that to kill an AI would be to erase its consciousness entirely, ending its free thought – much as terminating a human body destroys that person’s consciousness.
There is also similar discourse on the biological hierarchy on Earth, where humans sit at the top due to their perceived intelligence and are consequently seen as being of higher value. A human life is held to be worth more than an animal’s, and so the killing of a human is judged with far more scrutiny. In this case, since we created artificial intelligence, do we not sit higher in the hierarchy than AI, and thus reserve the right to end its life?
A blog by Jonathan Haddock, an IT professional, puts forth an interesting angle to our dilemma: “On the one hand the human created the AI, clearly more capable, on the other hand the AI has equivalent intelligence to its human creator, can hold a conversation, is capable of independent thought and can make its own choices.”
Our concerns about AI sentience are intensifying with recent developments. But AI gaining sentience, if it ever does, is still a faraway peril, and there are more real, more critical issues that require immediate attention.
AI experts dismiss Lemoine’s views, saying that even the most advanced technology falls short of creating a free-thinking system and that he was anthropomorphizing a program.
“We have to remember that behind every seemingly intelligent program is a team of people who spent months if not years engineering that behavior,” said Oren Etzioni, CEO of the Allen Institute for AI, a Seattle-based research group.
“These technologies are just mirrors. A mirror can reflect intelligence,” he added. “Can a mirror ever achieve intelligence based on the fact that we saw a glimmer of it? The answer is of course not.”
The late Stephen Hawking predicted that future developments in AI “could spell the end of the human race.” We may not have reached that point yet, but we sure aren’t slowing down.
Hetvi is an enthusiast of pop culture and all things literary. Her writing is at the convergence of gender, economics, technology and cultural criticism. You can find her at @hetviii.k.