The harder a task gets, the more likely we are to rely on artificial intelligence (A.I.) to streamline it for us — even though algorithms might take away our individual autonomy.
New research published in Scientific Reports identifies the point at which humans begin to favor an algorithm's recommendations over those of other humans. Pinpointing this tipping point is valuable, but it is also worrisome: our dependence on artificial intelligence to make our lives easier may blind us to the prejudice that can be encoded within these systems.
“Algorithms are able to do a huge number of tasks, and the number of tasks that they are able to do is expanding practically every day,” Eric Bogert, Ph.D., of the University of Georgia and a co-author of the study, said in a statement. “It seems like there’s a bias towards leaning more heavily on algorithms as a task gets harder and that effect is stronger than the bias towards relying on advice from other people.”
The simplest analogy is how we use calculators. We know that two times two is four, and we're perfectly capable of working out 22 times 22 ourselves, but we'd rather use the calculator for the latter because it's a touch harder. When it comes to more sophisticated machines, the study explains, a human resources professional is more likely to use an algorithm to screen a stack of resumes against education and work-experience parameters, even though the candidate the algorithm selects may not be the best fit.
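For illustration only, here is a minimal, hypothetical sketch of that kind of parameter-based resume screen; the fields, names, and cut-offs are invented for the example and are not taken from the study.

```python
from dataclasses import dataclass

# Invented ranking of degrees, used only to make the cut-off comparable.
DEGREE_RANK = {"High school": 0, "Bachelor's": 1, "Master's": 2, "PhD": 3}

@dataclass
class Resume:
    name: str
    degree: str
    years_experience: int

def passes_screen(resume: Resume, min_degree: str = "Bachelor's", min_years: int = 5) -> bool:
    """Apply hard cut-offs on the only two parameters the screen can 'see'."""
    return (DEGREE_RANK[resume.degree] >= DEGREE_RANK[min_degree]
            and resume.years_experience >= min_years)

candidates = [
    Resume("A", "Master's", 6),      # clears both cut-offs
    Resume("B", "Bachelor's", 4),    # strong candidate, filtered out on years alone
    Resume("C", "High school", 12),  # a decade of relevant work, filtered out on degree
]

print([c.name for c in candidates if passes_screen(c)])  # ['A']
```

In this toy setup, two plausible candidates never reach a human reviewer, because everything the filter cannot measure simply does not exist for it.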
The researchers conducted a simple experiment to test their hypothesis. They asked 1,500 individuals to count the number of people in a photograph and offered them suggested answers from two sources: a group of other people and an algorithm. As the number of people in the photos grew, counting them became harder, and participants became more likely to trust the algorithm's recommendation over the group's, or even over their own count.
The choice of a counting task plays on our assumptions about algorithms. “This is a task that people perceive that a computer will be good at, even though it might be more subject to bias than counting objects,” Aaron Schecter, of the University of Georgia, said in a statement. We're simply more likely to trust computers with numbers because we're aware of our own capacity to err, but far less aware of an algorithm's.
This lack of awareness, and the resultant confidence in algorithms, can have real-life consequences. “One of the common problems with A.I. is when it is used for awarding credit or approving someone for loans. While that is a subjective decision, there are a lot of numbers in there — like income and credit score — so people feel like this is a good job for an algorithm. But we know that dependence leads to discriminatory practices in many cases because of social factors that aren’t considered.” Other A.I. applications, such as facial recognition and hiring tools, have invited critique for similar reasons: humans can encode bias and error into the A.I. they build, and then other people use that A.I., trusting it to be highly accurate.
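To make that point concrete, here is a simplified, hypothetical loan-approval rule of the kind the quote describes; the thresholds and field names are invented for the illustration, not drawn from any real lender or from the study.

```python
# Hypothetical loan-approval rule: approve only on income and credit score.
def approve_loan(income: float, credit_score: int) -> bool:
    """Approve only when both numeric inputs clear fixed thresholds."""
    return income >= 40_000 and credit_score >= 650

# The rule never sees the social context behind the numbers: an applicant whose
# credit history is thin for circumstantial reasons is rejected on exactly the
# same basis as a genuinely risky borrower.
print(approve_loan(income=38_000, credit_score=700))  # False
print(approve_loan(income=55_000, credit_score=640))  # False
```

The numbers feel objective, which is precisely why people hand the decision over, but the rule can only reproduce whatever assumptions and omissions were built into it.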
The researchers intend to look further into how humans and machines make decisions together, how they learn to trust each other, and how these dynamics influence their behavior.