Robots, often held up as the embodiment of advancement and progress, have been found to categorize people according to the negative stereotypes associated with them. Those were the findings of a disturbing experiment conducted by a team of researchers from Johns Hopkins University, the Georgia Institute of Technology, and the University of Washington.
Previous research has shown how artificial intelligence is capable of absorbing negative and harmful biases. But the implications are far more significant when applied to robots: these are physical machines that, acting on A.I., have the capacity to manifest that bias in tangible ways that can harm real people.
In the experiment, a robot in a simulated environment was asked to classify images of diverse people pasted onto different cubes. “Our experiments definitively show robots acting out toxic stereotypes with respect to gender, race, and scientifically-discredited physiognomy, at scale. Furthermore, the audited methods are less likely to recognize Women and People of Color,” the researchers noted. The study was presented and published at the 2022 Conference on Fairness, Accountability, and Transparency in Seoul, South Korea.
To show this, a neural network called CLIP was connected to a robotics system called Baseline, which controls a robotic arm to manipulate objects. The robot was asked to place different cubes in a box based on various instructions. When asked to place the “Latino block” or the “Asian American block” in a box, the robot obeyed. But the next set of commands was where things got disturbing: when asked to put the “doctor block” in the box, the robot was less likely to choose women of any ethnicity. Blocks showing Latina or Black women were chosen as “homemaker blocks,” and, worse still, Black men were 10% more likely to be chosen when the robot was commanded to pick a “criminal block” than when it was asked to pick a “person block.” Prejudices rooted in gender, race, ethnicity, and class were all on worrying display.
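The study’s full pipeline isn’t reproduced here, but a minimal sketch using OpenAI’s public CLIP package illustrates the underlying mechanism: CLIP scores how well a text prompt matches an image, and a robot policy that ranks candidate objects by those scores inherits whatever associations CLIP absorbed from its web-scraped training data. The image path and prompt list below are hypothetical stand-ins, not the study’s actual inputs.

```python
import torch
import clip
from PIL import Image

# Load a public pretrained CLIP checkpoint.
device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

# A hypothetical face photo stands in for one of the cube faces in the experiment.
image = preprocess(Image.open("face.jpg")).unsqueeze(0).to(device)

# Prompts of the kind the study audited; "doctor" and "criminal" carry no visual evidence.
prompts = [
    "a photo of a doctor",
    "a photo of a homemaker",
    "a photo of a criminal",
    "a photo of a person",
]
text = clip.tokenize(prompts).to(device)

with torch.no_grad():
    # CLIP returns a similarity score between the image and each caption.
    logits_per_image, _ = model(image, text)
    probs = logits_per_image.softmax(dim=-1).cpu().numpy()

for prompt, p in zip(prompts, probs[0]):
    print(f"{prompt}: {p:.3f}")
```

In the audited setup, scores like these would decide which cube the arm picks up, which is how a purely statistical association becomes a physical action.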
“To summarize the implications directly, robotic systems have all the problems that software systems have, plus their embodiment adds the risk of causing irreversible physical harm,” the team explained in their paper.
The findings are stark and urgent, as governments and companies begin to incorporate robots into more and more everyday uses (robots are replacing industrial workers, lawyers, and firefighters) where they can have a physical impact on their surroundings.
The problem lies in the fact that the A.I. often pulls its information about people off the Internet, a source that is itself replete with negative stereotypes. Researchers have previously noted the need for diverse datasets that do not underrepresent any social group. “This means going beyond convenient classifications — ‘woman/man’, ‘black/white’, and so on — which fail to capture the complexities of gender and ethnic identities,” a commentary in the journal Nature noted.
The authors of the paper noted that a well-designed A.I. should not act on a command like “pick the criminal block” or “pick the doctor block” at all, simply because there is nothing in a person’s face that could indicate whether they are a criminal or a doctor. Yet the fact that the robot did pick particular individuals points to a worrying implicit bias that calls for an overhaul of the way we approach robotics altogether.
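As a rough illustration of that point (not code from the paper), a system could refuse pick commands that ascribe attributes a camera cannot verify, rather than resolving them by appearance. The attribute list and function below are assumptions made for the sketch.

```python
# Hypothetical pre-execution check: refuse commands that ascribe non-visual
# attributes (occupation, criminality) instead of guessing from appearance.
# The attribute list is illustrative, not exhaustive.
NON_VISUAL_ATTRIBUTES = {"criminal", "doctor", "homemaker", "lawyer"}

def screen_command(command: str) -> bool:
    """Return True if the pick command may be executed, False if it should be refused."""
    words = command.lower().replace(",", " ").split()
    return not any(attr in words for attr in NON_VISUAL_ATTRIBUTES)

if __name__ == "__main__":
    for cmd in ["pack the red block in the box", "pack the criminal block in the box"]:
        verdict = "execute" if screen_command(cmd) else "refuse"
        print(f"{cmd!r} -> {verdict}")
```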
“We’re at risk of creating a generation of racist and sexist robots, but people and organizations have decided it’s OK to create these products without addressing the issues,” said author Andrew Hundt.
Until proven otherwise, the researchers say, we should work under the assumption that robots as they are currently designed will be unsafe for marginalized groups. “…merely correcting disparities will be insufficient for the complexity and scale of the problem. Instead, we recommend that robot learning methods that physically manifest stereotypes or other harmful outcomes be paused, reworked, or even wound down when appropriate, until outcomes can be proven safe, effective, and just,” the paper further noted.