Why Researchers Working on AI Argue It Could Cause ‘Global Disaster’
A new paper warns that artificial intelligence “could considerably erode a state’s sense of security and jeopardize crisis stability.”
The new magic pill on the market is amorphous and versatile. The consensus among many researchers is that artificial intelligence’s efficiency will aid everything from healthcare and firefighting to hiring, art, and music. Even environmental catastrophes like the Bengaluru floods could benefit from five nifty A.I. solutions, or so the prophesied promise goes.
But skepticism surrounds its intent and purpose. What are the perils of A.I., in a world where it promises so much? A new paper written by researchers working on A.I. argues that such pervasive reliance on algorithms and machine learning could cause a global catastrophe on par with a nuclear disaster. The key isn’t that it’s the machines’ fault per se; it’s us. Whom we appoint to create and control them, and what we, in turn, instruct them to do, can have devastating consequences for us all. The paper points to a need for understanding A.I. as a public good, with public consequences, bolstering the need to democratize our engagement with it.
The root of the current bout of anxiety around A.I. can be traced back to a paper authored by a working group of experts for RAND Corporation, an American non-profit. The experts included people working in A.I., government, national security, and business, some of whom concluded that the integration of quicker and smarter A.I. could create a false sense of threat among states. For instance, the rise of open-sourced data may be read to mean that a country’s nuclear capacity is at risk of exposure, which may push that country toward escalatory steps. In another scenario, A.I.-processed data may be used to decide where to strike. Overall, A.I. could manufacture a chain of events in which Country A appears capable of targeting Country B, and that “might prompt Country B to re-evaluate the advantages and disadvantages of acquiring more nuclear weapons or even conducting a first strike.” A.I. “could considerably erode a state’s sense of security and jeopardize crisis stability,” the paper argued. If fake news meets A.I., the thinking is, it could lead to a third world war.
This is neither a novel nor a unique fear: that A.I. could one day wipe out humanity or cause human extinction is a possibility many have dissected in all its dystopian variations. “Scary A.I.” is a sub-genre of its own, with many observing the “wild” things A.I. can do with wary fascination, and others preparing to enter the future alongside them. Pop culture offers plenty of references; The Matrix, The Terminator, and Ultron in Avengers all depict realities where A.I. entities cultivate a hatred for humans and set out on a warpath.
Arguably, catastrophe will not come at a machine’s whim. But there is merit to thinking deeply about Scary A.I. as a future, and about what, and who, may give machines enough power to wipe out an entire civilization. “The problem isn’t that AI will suddenly decide we all need to die,” as writer Dylan Matthews noted, “the problem is that we might give it instructions that are vague or incomplete and that lead to the A.I. following our orders in ways we didn’t intend.” Scary A.I. has more to do with us, our wild ambitions and unchecked dreams. This complicates how we look at ethics, transparency, and research within A.I. itself.
The legitimacy of the concern aside, the paper reflects the helplessness of a world where A.I. leads and we follow. But there is significant context to this. Computer scientist Stuart Russell literally wrote the book on how A.I. could be disastrous for humans. And while he agrees we’ve set ourselves up for failure, he argues that it’s because the “objectives” we’ve set for the A.I. are themselves misleading and vague.
In his book, Human Compatible, he writes: “Imagine a self-driving car with an ‘objective’ to get from Point A to Point B but unaware that we also care about the survival of the passengers and of pedestrians along the way. Or a health care cost-saving system that discriminates against black patients because it anticipates that they’re less likely to seek the health care they need.”
The problem is two-fold. The A.I. will do exactly as much as it is told: it identifies the most effective strategy for the stated objective and pursues it, even through means we never explicitly sanctioned. And the vagueness of human instruction becomes more dangerous once you factor in that A.I. is becoming increasingly dynamic and capable.
Take Russell’s thought experiment. If the A.I. is instructed to minimize cost and maximize the efficiency of the healthcare system, the seemingly optimal strategy is to prioritize people who are most likely to access clinics, drug stores, and hospitals. A history of systemic oppression and structural violence suggests that race, ethnicity, class, and gender may need to be given more weight, but ignorance and bias mean these specificities are never communicated to the system.
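Russell’s healthcare example can be made concrete with a small sketch. The code below is a minimal, hypothetical illustration, not anything drawn from the paper or Russell’s book: the patient records, scores, and the equity_weight parameter are all invented for the example. It shows how a system told only to maximize likely cost savings quietly deprioritizes patients who face barriers to access, and how the ranking shifts once that unstated concern is actually written into the objective.

```python
# Hypothetical sketch of a misspecified objective. All names and numbers are
# invented for illustration; nothing here comes from the RAND paper or Russell.

from dataclasses import dataclass

@dataclass
class Patient:
    name: str
    expected_savings: float    # projected savings if this patient is prioritized
    access_likelihood: float   # 0..1, how likely they are to reach care at all
    underserved: bool          # stand-in flag for structural barriers to access

patients = [
    Patient("A", expected_savings=900,  access_likelihood=0.9, underserved=False),
    Patient("B", expected_savings=1200, access_likelihood=0.3, underserved=True),
    Patient("C", expected_savings=800,  access_likelihood=0.8, underserved=False),
]

def naive_objective(p: Patient) -> float:
    # What the system was literally told: maximize savings that are likely to
    # materialize. Equity appears nowhere in the objective, so it is ignored.
    return p.expected_savings * p.access_likelihood

def adjusted_objective(p: Patient, equity_weight: float = 2.0) -> float:
    # One possible way to encode the unstated concern: boost the score of
    # patients facing structural barriers so harder access alone does not
    # screen them out.
    boost = equity_weight if p.underserved else 1.0
    return p.expected_savings * p.access_likelihood * boost

print("Naive ranking:   ", [p.name for p in sorted(patients, key=naive_objective, reverse=True)])
print("Adjusted ranking:", [p.name for p in sorted(patients, key=adjusted_objective, reverse=True)])
```

The point is not this particular weighting; it is that whatever is left out of the objective simply does not exist for the optimizer.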
This is what is also called the King Midas problem in UX design. We know the story: King Midas wished everything he touched would turn to gold. And everything did: his family members, food, drink, everything that represented life. He starves to death, in a fable about material greed. Similarly, the problem describes a scenario where a system’s detection of a user’s “voluntary commands and intentions to activate an interaction or pursue a goal is not accurate and reliable.”
“Many cultures have the same story,” Russell explained. “The genie grants you three wishes. Always the third wish is please undo the first two wishes because I ruined the world. And unfortunately, with systems that are more intelligent and therefore more powerful than we are, you don’t necessarily get a second and third wish.”
More than the risk of extinction, this understanding of A.I. is deeply humbling for the role people play in dictating its future and their own. The objective, and the measures of success and efficiency, are still set by people in power, bolstering the need for accountability and transparency at the very helm. Stephen Bush quotes statistician David Spiegelhalter, noting: “there is no practical difference between judges using algorithms and judges following sentencing guidelines. The important difference is solely and significantly that sentencing guidelines are clearly understood, publicly available, and subject to democratic debate.”
What’s scary about Scary A.I. is how much power artificial intelligence gives to people who are fallible at best, and dogmatic at worst. Imagining scenarios of A.I. overtaking us then disguises the crushing realization that the power lies with the 0.003% of the human population who are feeding systems with these indeterminate “objectives.” It is easier to shift our anxieties onto an insentient algorithm than to fully grapple with an expansive world of data, algorithms, and machines that is being governed by people.
Here’s another future for us to imagine. In economics, there’s a concept called moral hazard: a situation where a party can behave recklessly without bearing the consequences. One party to a transaction feels emboldened to indulge in risky behavior, because it is shielded from the fallout, and may thus offer misleading information or promises. The other party? They are contractually bound to absorb whatever negative consequences arise. Bound, because they accepted without questioning.
Saumya Kalia is an Associate Editor at The Swaddle. Her journalism and writing explore issues of social justice, digital sub-cultures, media ecosystem, literature, and memory as they cut across socio-cultural periods. You can reach her at @Saumya_Kalia.