
How Facial Recognition AI Reinforces Discrimination Against Trans People

By Sayantan Datta

Sep 14, 2022


Image Credit: The New York Times/A for The Swaddle

Soon, passengers flying from seven Indian cities – Delhi, Bengaluru, Hyderabad, Varanasi, Vijayawada, Kolkata, and Pune – will be able to board their flights without producing boarding passes or physical identity cards, courtesy of DigiYatra, a facial recognition technology (FRT) ecosystem from the Ministry of Civil Aviation.

While several people are lauding the Ministry’s move to save passengers’ time at airports, data privacy experts have sounded an alarm about the possible consequences of such technology. Speaking to FirstPost, Anushka Jain of the Internet Freedom Foundation cautioned against the lack of clarity over how the data collected by the airports through DigiYatra will be used or shared. 

In the wrong hands, FRTs can act as systems of discrimination and control. These worries are especially compounded for transgender, gender non-conforming, and gender non-binary persons, who are not only more vulnerable to discrimination but also lack the tools to fight back.

Misgendering 

Patruni Chidananda Sastry, a Hyderabad-based software employee and drag artist who identifies as gender non-conforming, was palpably worried after hearing of DigiYatra and the use of FRT. “What if I am wearing makeup and the technology cannot match my face to a picture in my ID card that was clicked long ago?” they ask.

Sastry is not the only one worried. Internet Freedom Foundation’s Jain expressed similar concerns in the FirstPost report: that government identity cards often have old photos of people. “There are high chances that the photo might not match the person’s current facial features,” she said.

In fact, most transgender persons in India do not even have identity cards that reflect their gender accurately. While the National Portal for Transgender Persons has begun rolling out transgender certificates and identity cards – which can later be used to update other identity cards – a Scroll report mentions that only 9,064 people have applied on the portal, as opposed to nearly 5 lakh transgender persons recorded in the 2011 population census. Of the 9,064 applications, several are pending, and many have been rejected. Further, Sastry added, it may take a person up to six months to obtain a transgender certificate.

“The fact that my gender expression has to be aligned with what’s on the [ID] card restricts my freedom! Transness cannot be depicted in one certain way.”

Patruni Chidananda Sastry

Sastry’s concern is well founded. In 2019, researchers at the University of Colorado, Boulder, found that FRTs from some of the world’s leading companies are prone to misgendering transgender individuals. While the programs could identify the gender of cisgender women and men with accuracies of 98.3% and 97.6% respectively, transgender men were misgendered 38% of the time.

Notably, there was not a single instance when these algorithms could identify the gender of non-binary, agender, and genderqueer individuals. The reason for this massive failure was simple – these technologies continued to see gender as binary. 




“As our vision and our cultural understanding of what gender is has evolved, the algorithms driving our technological future have not. That’s deeply problematic,” Jed Brubaker, an assistant professor of information science at the University of Colorado, Boulder, and a senior author of the above-mentioned study, remarked.

New Technologies, Old Biases

An outdated understanding of gender is not the only concern plaguing FRTs. Concerns also loom over the ethics of FRT research and use, and over the negative attitudes toward transgender persons held by the researchers and authorities deploying such technology.

In 2017, The Verge reported that FRT researchers at the University of North Carolina, Wilmington (UNCW), had collated a controversial database of over a million images taken from YouTube videos of transgender persons documenting their medical transition online. This database, called the “HRT [Hormone Replacement Therapy] Database,” was built without the informed consent of the people in the videos.

Karl Ricanek, a professor at UNCW who was the driving force behind the HRT database, used the excuse of border threats and terrorism to justify the research. “What kind of harm can a terrorist do if they understand that taking this hormone can increase their chances of crossing over into a border that’s protected by face recognition? That was the problem that I was really investigating,” he told The Verge.

Ricanek eventually apologized for pursuing this research. He clarified that the research team did not share the database with anybody for commercial purposes and that he had stopped giving other researchers access to the database in 2014.

However, an independent audit by trans-identifying researchers Os Keyes and Jeanie Austin revealed that the HRT database incident treaded murkier waters than previously reported. Despite the apology, Ricanek’s team had continued to share the database, and the videos used to build it, until 2015 – even though some of those videos had already been removed from the public domain by their creators. Further, Keyes and Austin discovered that the videos were left in an unprotected Dropbox account until the duo made contact with Ricanek and the UNCW in 2021.

In a conversation with The Swaddle, Keyes termed the episode a “scandal.”




While countering border threats and terrorism might seem like a valid justification for Ricanek’s database, Keyes and Austin believe that such a line of arguing “[mirrors] more general transphobic tropes – that transgender people are suspect, sneaky, and otherwise engaged in acts of trespass and subterfuge.”

Mridul, a Mumbai-based technologist who works on machine learning and neural networks (technologies that make FRT possible), and identifies as a transman, agrees with Keyes and Austin. “Just by virtue of being trans, we are perceived as people who can only be terrorists and thieves. We can only be people who are either punished or need rehabilitation,” he says.

Further, the HRT database’s attempt to use FRT to identify individuals undergoing transition also reduces gender to another binary – that of “pre-” and “post-transition.” However, demarcating periods of a person’s life as “pre-” and “post-transition” is extremely difficult since transitioning is a non-linear and continuous process.

Recognizing Faces, Curbing Freedom

Sastry also thinks the use of FRT by the government is “problematic” because such technologies are routinely used to surveil people and curb their freedom of expression.

This is especially pertinent given that several Indian cities – Indore, Hyderabad, Delhi, and Chennai – are among the world’s most surveilled places. With about 600,000 cameras in action, Hyderabad is on the “brink of becoming a total surveillance city,” a report from Amnesty International claims.

Further, according to a Reuters report, Delhi and Uttar Pradesh authorities have used FRT during protests against the Citizenship (Amendment) Act (CAA) 2019.

As authorities in cities like Delhi and Hyderabad continue to use FRTs to surveil and police their citizens, Mridul reminds The Swaddle that FRTs are not “foolproof,” which means that even the best FRT will have a margin of error. Essentially, if a transgender person is wrongly identified or accused based on an FRT, it leaves little room for recourse. 

“How do you reason with a machine, especially when you are both prone to suspicion – as transgender persons are – and have little power to fight back?”

Mridul

Uncharted and Unregulated Waters

The world over, FRT has been met with widespread public discomfort. For example, in a survey by the Ada Lovelace Institute, a majority of respondents in the United Kingdom wanted restrictions on police use of FRT, and nearly half wanted the right to opt out of FRT altogether. Similar concerns have been raised in China, a Nature report mentions.

However, there continue to be practically no regulations on FRT use. 

In India too, “There is currently no legislation in place to protect the privacy of citizens,” according to Jain. This is despite a 2017 ruling from the Supreme Court of India that recognized the Right to Privacy as a fundamental right of Indian citizens.

The lack of regulations to deter the unethical use of FRT has led to several organizations calling for a ban on their use. For instance, in 2020, several organizations, including the Washington DC-based National Center for Transgender Equality, petitioned the US government’s Privacy and Civil Liberties Oversight Board to stop the government from using FRTs.




In 2021, SQ Masood, a social activist, sued the state of Telangana for its indiscriminate use of FRT. Amnesty International has also been asking for a ban on the use of FRT through its “Ban the Scan” campaign.

Manipulating Machines to Fight Back

However, despite these global conversations against the misuse of FRT, the technology can no longer be wished away. Per a Nature report, 64 countries were using FRTs for surveillance as of 2019.

The queer-trans community is thus finding ways to fight back by “trying to manipulate the machines,” says Sastry.

They talk about how queer-trans activists who were a part of the protests against CAA used makeup to evade facial recognition. “There are certain makeup techniques, which ensure that a person’s identity is not revealed if they are being surveilled.”

What Sastry is referring to is “anti-surveillance makeup.” A notable example is Computer Vision Dazzle, aka CV Dazzle, introduced by artist Adam Harvey in 2010. According to the CV Dazzle website, the project uses “avant-garde hairstyling and makeup designs to break apart the continuity of a face. Since facial-recognition algorithms rely on the identification and spatial relationships of key facial features, like symmetry and tonal contours, one can block detection by creating an ‘anti-face’.”

Simply put, the accuracy of FRTs depends on how well these technologies can map facial features and the distances between them. By obscuring key features and introducing false ones through makeup, projects like CV Dazzle degrade the accuracy of these technologies. However, according to a Vogue report, anti-surveillance makeup is by no means “foolproof.”
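To make the geometry concrete, here is a toy sketch – not any real FRT system, and with entirely invented landmark values – of a matcher that scores faces by comparing pairwise distances between a handful of facial landmarks. Moving where individual features appear to be, as anti-surveillance makeup aims to do, breaks the distance profile and tanks the match score:

```python
import math

# Toy "enrolled" face: (x, y) coordinates of five key landmarks.
# Values are invented for illustration; real systems use dozens of
# landmarks plus learned embeddings, not raw coordinates.
ENROLLED = {
    "left_eye": (30.0, 40.0),
    "right_eye": (70.0, 40.0),
    "nose_tip": (50.0, 60.0),
    "mouth_left": (38.0, 80.0),
    "mouth_right": (62.0, 80.0),
}

def landmark_distances(face):
    """Pairwise distances between landmarks: the spatial
    relationships a simple geometric matcher relies on."""
    names = sorted(face)
    return [
        math.dist(face[a], face[b])
        for i, a in enumerate(names)
        for b in names[i + 1:]
    ]

def similarity(face_a, face_b):
    """1.0 for identical geometry; lower as the two faces'
    distance profiles diverge."""
    da, db = landmark_distances(face_a), landmark_distances(face_b)
    err = sum(abs(x - y) for x, y in zip(da, db)) / len(da)
    return 1.0 / (1.0 + err)

# A re-capture of the same face from a shifted camera position:
# every landmark moves by the same offset, so pairwise distances
# are unchanged and the match is perfect.
live = {k: (x + 0.5, y - 0.5) for k, (x, y) in ENROLLED.items()}

# Makeup that shifts where the matcher "sees" some features (the
# CV Dazzle idea): the distance profile breaks, the score drops.
dazzled = dict(ENROLLED, nose_tip=(42.0, 52.0), mouth_left=(30.0, 74.0))

print(similarity(ENROLLED, live))     # prints 1.0
print(similarity(ENROLLED, dazzled))  # much lower
```

The sketch also shows why a plain change of camera position doesn’t fool such a matcher (pairwise distances are translation-invariant), while selectively displacing individual features does – which is precisely the “anti-face” effect CV Dazzle describes.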

“We can’t always do it. Especially when we are in a vulnerable situation, or when we are doing our day-to-day activities,” explains Sastry with understandable frustration.

While Mridul also shares these concerns and continues to be critical of intrusive FRTs, he also believes that the ethical use of machine learning – the technology that makes FRT possible – can have transformative possibilities. For example, he cited one of his projects that harnesses neural networks to create assistive technology for persons with neuromuscular disabilities.

“The whole idea of privacy needs much more thought,” Mridul points out, “We need to draw a line on how much data one can collect, and we need to enable people from whom the data is being collected to have a say in what data is being collected and how it is being used.”


Written By Sayantan Datta

Sayantan Datta is a queer-trans science writer, communicator, and journalist. You can find them on @queersprings on Twitter and @prabhasvaramitra on Instagram.
