In Open Letters, Tech Industry and Google Employees Criticize Google’s Lack of Transparency in AI Research
The outcry against an AI expert’s controversial exit from Google is making the rounds on social media, raising concern about the company’s internal research and review process. It has also led the larger tech industry and academia to question whether AI research out of Google can be trusted. Now, Google employees themselves are speaking up, both in defense of their colleague and to question the integrity of AI research carried out by Google.
The controversy surrounds AI ethics researcher Timnit Gebru, known for her landmark 2018 study showing that facial recognition software misidentifies dark-skinned women 35% of the time, while white men face virtually no trouble at all. Gebru tweeted last week that she was fired from Google (Google leadership maintains that she resigned and wasn’t forced out), where she was the co-leader of the company’s Ethical AI team, over an email she sent to a few company employees voicing her discontent with Google’s minority hiring policies. The email also described the in-house review process she underwent with respect to a paper she had co-authored outlining bias in AI.
In an open letter posted Monday on Medium, Gebru’s ex-colleagues from Google attempted to set the record straight, especially on the research paper that has become one of the main points of contention in the current controversy. Gebru co-authored a paper questioning the ethics of large language models in AI, which are key to Google’s business. The paper outlines four main problems with training AIs on large amounts of language data: the enormous carbon footprint of such technology; the risk of the AI being trained on the racist, sexist, and queerphobic language rampant on the internet; the utility of such AI only for wealthy organizations in Western countries; and the ability of such AI to manipulate language and fool human users online. This paper, Gebru’s ex-colleagues write in solidarity (and to corroborate Gebru’s own account), was approved by Google for publication in October, but permission was then retracted in November without any written feedback or warning. This process prompted Gebru’s email, which then became the final straw in her exit from the company.
In light of this incident, AI experts are urging academic conferences and journals to stop considering Google’s AI research for publication. Ali Alkhatib, a research fellow at the University of San Francisco’s Center for Applied Data Ethics, writes, “I can’t even begin to imagine what interference people at Google have experienced in the past, are experiencing now, or will experience going forward. … I can’t be confident that any radical recommendation a paper makes hasn’t been carefully pruned by someone at Google who may have actively threatened to ‘resignate’ the author if they didn’t go along with it.”
The secretive, behind-the-scenes decision-making in this incident has alarmed those inside Google, too. A petition signed by more than 2,000 Google employees and thousands of academics laments Google’s “unprecedented research censorship. … Instead of being embraced by Google as an exceptionally talented and prolific contributor, Dr. Gebru has faced defensiveness, racism, gaslighting, research censorship, and now a retaliatory firing.” They added, “research integrity can no longer be taken for granted in Google’s corporate research environment.”
Sounding the alarm for the larger tech industry, the petitioners wrote, “Dr. Gebru is one of the few people exerting pressure from the inside against the unethical and undemocratic incursion of powerful and biased technologies into our daily lives. This is a public service.”