Google wanted AI researchers to 'strike a positive tone' on the company
That's not how ethics works, Google.
What you need to know
- Google began intervening in research papers by asking researchers to "strike a positive tone" when referring to the company's work.
- Former Ethical AI co-lead Timnit Gebru was fired over her paper criticizing Google's language models.
- Gebru's paper has since been accepted by an outside scientific group.
Amid Google's already troubling legal battles over how it handles Search and ads, the company also faces internal fallout over its handling of the departure of Timnit Gebru, former co-lead of its Ethical AI team. Gebru presented a research paper on the risks of largely unchecked AI language models, such as those used in Google's search tools and the best Android phones. Google dismissed the paper, saying it didn't account for new efforts the company was making to address those concerns, and then dismissed Gebru herself. That's putting it plainly; the situation is more complicated than that, but reporting from Reuters sheds a brighter light on how Google approached the paper.
According to internal policy changes that went into effect in June, Google began requiring its research staff to consult with legal and PR teams before pursuing certain topics that could be deemed sensitive. These topics include race, gender, political affiliation, and face analysis, with the policy stating that such "seemingly inoffensive projects raise ethical, reputational, regulatory or legal issues."
"This doesn't mean we should hide from the real challenges."
Adding to this, researchers were asked to "take great care to strike a positive tone." According to employees at Google Research, papers that were openly critical of Google's practices faced a much higher level of scrutiny than those that were not.
"It's being framed as the AI optimists and the people really building the stuff [versus] the rest of us negative Nellies, raining on their parade," said one outside AI expert. This is where Gebru's paper comes in.
Since 2016, the tech industry has grown more concerned about the ethics of how AI is used. Google created its Ethical AI group in 2018, around the same time it recruited Gebru with the promise of total academic freedom. Already well regarded in her field, she took the position and, with the help of University of Washington linguist Emily M. Bender, presented the draft paper that later led to her dismissal.
Google maintains that it did not fire her but "accepted her resignation." Still, Google employees, along with U.S. senators, have called for full details of an internal investigation into the matter. Gebru's paper has since been accepted by the Conference on Fairness, Accountability, and Transparency.