

Ex-Google researcher: AI workers need whistleblower protection

By Sara Brown

Artificial intelligence research leads to new cutting-edge technologies, but it’s expensive.

Big Tech companies, which are powered by AI and have deep pockets, often take on this work, but that gives them the power to censor or impede research that casts them in an unfavorable light, according to Timnit Gebru, a computer scientist, co-founder of the nonprofit organization Black in AI, and former co-leader of Google’s Ethical AI team.

The situation imperils both the rights of AI workers at those companies and the quality of research that is shared with the public, said Gebru, speaking at the recent EmTech MIT conference hosted by MIT Technology Review.

“It’s all the incentive structures that are not in place for you to challenge the status quo,” she said.

Gebru was forced out at Google last December (Gebru said she was fired, while Google said she resigned) after co-writing a paper about the risks of large AI language models, such as environmental impacts and the difficulty in finding embedded biases. Google’s search engine runs on such a large language model.

Citing concerns, Google told Gebru to retract the paper from a conference or remove her name and the names of the other Google researchers, according to The New York Times. Gebru refused to do so without a fuller explanation from Google, which led to Google announcing her departure.

During her recent talk, Gebru highlighted what she views as the labor rights concerns of AI workers, how to protect them, and why academia isn’t always a better route for researchers. Ultimately, she said, the goal is better and more equitable artificial intelligence.

 

“The moment you push a little hard, you’re out”

Gebru’s research centers on unintended negative impacts of artificial intelligence. A paper she co-authored with MIT Media Lab researcher Joy Buolamwini explored bias in facial recognition algorithms.

After joining Google in 2018, “I had issues from the very beginning,” Gebru said.  She said some people had doubts that she would be able to change a company as large as Google. “I was thinking, ‘Okay, maybe I can carve out a small piece … that is safe for people in marginalized groups,’” she said. “What I learned is that it's impossible, because the moment you push a little hard, you're out. So if you survive, it's because maybe you're not poking … at a thing that they find super important.”

It’s important to hold tech companies accountable from the outside, Gebru said.

“We can't have the current dynamic that we have and expect any sort of nonpropaganda tech to come out of tech companies," she said. "Because when you start censoring research, then that's what happens, right? The papers that come out end up being more like propaganda.”

Problems extend beyond Big Tech

Since leaving Google, Gebru has been working to develop an independent research institute. While many AI researchers work in academia, Gebru said that in her experience, that avenue poses its own concerns related to gatekeeping, harassment, and an incentive structure that doesn’t reward long-term research.  

There are also concerns about tech companies funding AI research at academic institutions. Gebru cited The Grey Hoodie Project, a research paper by Mohamed Abdalla of the University of Toronto and Moustafa Abdalla of Harvard Medical School. The researchers compared the way Big Tech (large technology companies like Google, Amazon, and Facebook) is funding and leading AI research with how big tobacco companies funded research in an effort to dispel concerns about the health effects of smoking.

“At an independent research institute, you can do research that the company does not think is going to make it money right now. You can do research that really shows fundamental flaws in whatever technology that a company might be using,” Gebru said.

How to improve protection for Big Tech workers

Gebru said she doesn’t argue against researchers working for Big Tech companies, but said they need protection to do their jobs. Otherwise, technology companies can quash findings or research threads that are unfavorable. She suggested three things that could help:

  1. Enhanced whistleblower protection for AI researchers. Recent events have shown the importance of whistleblowers at Big Tech companies, such as the former Facebook data scientist who leaked internal documents showing that the company knew how much harm it was causing.

  2. Anti-discrimination laws. “Often these organizations harm marginalized communities the most,” Gebru said. “It's people from marginalized groups who'll see, who'll think of those negative impacts [of AI]. A lot of other people might think, ‘Oh, this is all going to be all great.’ … from their standpoint in life, they don't see how it's going to negatively impact people. But after having certain experiences, it might be super clear to you, and you're going to push hard on that angle.”
  3. Labor laws. When Google became involved with a federal program to use AI to potentially improve drone strikes, employees protested, and the company ended up not renewing the contract. “I think that workers are building power, which is great, but we need much stronger labor protection laws in order to allow even AI researchers to organize against things that they see going really wrong,” she said. After Gebru left Google, a letter of support for her was signed by almost 2,700 Google employees and more than 4,300 others.

 

Gebru also advised people working in tech who are battling these issues to build a coalition.

“You can do a lot still without those labor protections,” she said. “If you have a coalition of people around you, you can do a lot.”
 


And that includes research that might be unpopular.

“Try to think of the one thing you can do that pushes the envelope, that's not going to make the companies happy, because that means you're doing the right thing,” she said.

A more equitable vision for AI

As AI technologies become ubiquitous, it is increasingly important to consider who is involved in shaping the future.

“It’s nothing really groundbreaking. I just want to work on research in AI [that’s] rooted in thinking about the perspectives of people in marginalized groups,” Gebru said. “That could be either thinking about research in the future and what kind of technologies should we build, what kind of AI tech research should we do, or critiquing it after it's been built.”

Gebru said she hopes to see artificial intelligence become more task-specific and designed for specific groups of people. Right now, AI is abstract and general, which tends to mean the dominant group’s vision is implemented, marginalizing nondominant groups.

“Let’s look at the most marginalized from the very beginning and let's start with that angle,” she said.


For more info, contact: Sara Brown, Senior News Editor and Writer