Google scientists reportedly told to make AI look more ‘positive’ in research papers – CNET

Google headquarters in Mountain View, California. Google is reportedly telling research scientists to spin AI in a more positive way. (Stephen Shankland/CNET)

Google parent Alphabet has been asking its scientists to ensure that AI technology looks more “positive” in their research papers, according to a Wednesday report by Reuters. A new review procedure is reportedly in place requiring researchers to consult with Google’s legal, policy or PR teams for a “sensitive topics review” before exploring topics such as face analysis and the categorization of race, gender and political affiliation.

“Advances in technology and the growing complexity of our external environment are increasingly leading to situations where seemingly inoffensive projects raise ethical, reputational, regulatory or legal issues,” one of the internal webpages on the policy says, according to Reuters.

Read more: Google CEO apologizes for handling of AI researcher Timnit Gebru’s departure

Other Google authors were told to “take great care to strike a positive tone,” internal correspondence shared with Reuters said.

The report follows Google CEO Sundar Pichai’s apology earlier this month for the handling of artificial intelligence researcher Timnit Gebru’s departure from the company, along with his pledge that it would be investigated. Gebru left Google on Dec. 4, saying she’d been forced out of the company over an email sent to co-workers.

The email criticized Google’s Diversity, Equity and Inclusion operation, according to Platformer, which posted the full text of her missive. Gebru said in the posted email that she’d been asked to retract a research paper she’d been working on, after receiving feedback on it.

“You’re not supposed to even know who contributed to this document, who wrote this feedback, what process was followed or anything,” she wrote in the email. “You write a detailed document discussing whatever pieces of feedback you can find, asking for questions and clarifications, and it is completely ignored.

“Silencing marginalized voices like this is the opposite of the NAUWU principles which we discussed. And doing this in the context of ‘responsible AI’ adds so much salt to the wounds,” she added. NAUWU stands for “nothing about us without us,” the idea that policies shouldn’t be made without input from the people they affect. 

Google didn’t immediately respond to a request for comment.
