The hidden work created by artificial intelligence programs

By Sara Brown

Artificial intelligence is often framed in terms of headline-grabbing technology and dazzling promise. But some of the workers who enable these programs — the people who do things like code data, flag pictures, or work to integrate the programs into the workplace — are often overlooked or undervalued.

“This is a common pattern in the social studies of technology,” said Madeleine Clare Elish, SM ’10, a senior research scientist at Google. “A focus on new technology, the latest innovation, comes at the expense of the humans who are working to actually allow that innovation to function in the real world.”

Speaking at the recent EmTech Digital conference hosted by MIT Technology Review, Elish and other researchers said artificial intelligence programs often fail to account for the humans who incorporate AI systems into existing workflows, the workers doing behind-the-scenes labor to make the programs run, and the people who are negatively affected by AI outcomes.

“This is a challenge for ethical AI, because overlooking the role of humans misses what’s actually going on,” Elish said.  

Here are their insights about making AI systems more effective and more ethical.

Consider how AI will be integrated into a workplace

In her previous job leading the AI on the Ground initiative at Data & Society, Elish worked with Elizabeth Anne Watkins, SM ’12, to study Sepsis Watch, an initiative by Duke University and Duke Health System. Sepsis Watch is a clinical decision support system that uses AI to predict a patient’s risk of sepsis, a leading cause of death in hospitals that is notoriously difficult to diagnose and treat quickly.

The program has dramatically improved outcomes for patients with sepsis, Elish said, but for workers in the hospital, Sepsis Watch was disruptive. It changed the way rapid response nurses and doctors typically communicate, and nurses had to figure out the best ways to relay risk scores to doctors. Nurses also had to fit the Sepsis Watch information into existing emergency department practices.
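
As a purely hypothetical illustration of where that handoff sits, the sketch below shows how a risk model’s numeric output might become a step in a nurse’s workflow. The threshold, names, and routing logic are invented for this example; they are not details of Duke’s Sepsis Watch.

```python
# Hypothetical sketch of a clinical risk alert handoff. The threshold,
# names, and routing step are assumptions for illustration, not the
# actual Sepsis Watch implementation.
from dataclasses import dataclass
from typing import Optional

@dataclass
class RiskAlert:
    patient_id: str
    risk_score: float  # model output in [0, 1]
    next_step: str

def route_risk_score(patient_id: str, risk_score: float,
                     threshold: float = 0.6) -> Optional[RiskAlert]:
    """Turn a raw model score into a workflow step for a nurse."""
    if risk_score < threshold:
        return None  # no alert; patient stays on routine monitoring
    # The model's job ends here. How and when the physician hears
    # about the score is decided by the rapid response nurse.
    return RiskAlert(patient_id, risk_score,
                     next_step="nurse reviews chart, then calls physician")

alert = route_risk_score("pt-042", 0.83)
if alert is not None:
    print(f"{alert.patient_id}: score {alert.risk_score:.2f}, {alert.next_step}")
```

The sketch stops where the article’s point begins: the score is only useful once a person decides what to do with it.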

“This hadn’t even crossed the minds of the tech development team, but this strategy proved essential,” Elish said. “We saw skilled individuals performing essential but overlooked and undervalued work.”

The nurses ended up doing what Elish and her fellow researchers call repair work — the work required to make a technology actually effective in a specific context, and to weave that technology into existing work practices, power dynamics, and cultural contexts.

Focusing on the technology and the development of AI programs, as many people do, leaves out the people doing on-the-ground innovation to make those programs work.

“So much of the actual day-to-day work that is required to make AI function in the world is rendered invisible, and then undervalued,” Elish said.

Even the language used to talk about launching AI systems tends to discount the importance of this work, Elish said.

“I actually try to avoid talking about ‘deploying systems’,” she said. “Deploy is a military term. It connotes a kind of contextless dropping in. And what we actually need to do with systems is to integrate them into a particular context. And when you use words like ‘integrate,’ it requires you to say, ‘Integrate into what, or with whom?’”

In the case of Sepsis Watch, the nurses’ autonomy was respected: they were given the discretion and flexibility to improvise and create new ways to communicate about sepsis risk scores, Elish said. Those creating AI systems need to allocate resources toward supporting the people who will be doing this type of repair work, and toward making sure they are part of the project from beginning to end.

“An AI solution, in theory, doesn’t actually get us very far,” Elish said. “Responsible implementation, effective implementation, comes from focusing on how individuals will be empowered to use the solution in a particular context.”

Don’t forget about ‘ghost workers’ behind the scenes

Repair work to integrate AI into the workplace isn’t the only unseen labor. AI has created millions of new jobs, including for human workers who do things like labeling images so a machine learning model can learn, said Saiph Savage, director of the Civic Innovation Lab at the National Autonomous University of Mexico. Other human tasks might include transcribing audio, which helps voice assistants understand the world, or flagging violent content or disinformation on social media platforms.
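
To make the model’s dependence on that labeling work concrete, here is a minimal sketch, with invented image names, labels, and feature vectors, of how human-provided labels become a model’s training signal. It does not reflect any particular platform’s pipeline.

```python
# Minimal sketch of how human labeling becomes training data. Image
# names, labels, and feature vectors are invented for illustration.
from sklearn.linear_model import LogisticRegression

# The ghost work: a person looked at each image and recorded a label
# (here, 1 = "contains a stop sign", 0 = "does not").
human_labels = {
    "img_001.jpg": 1,
    "img_002.jpg": 0,
    "img_003.jpg": 1,
    "img_004.jpg": 0,
}

# Stand-in feature vectors; a real system would compute these from pixels.
features = {
    "img_001.jpg": [0.9, 0.1],
    "img_002.jpg": [0.2, 0.8],
    "img_003.jpg": [0.8, 0.3],
    "img_004.jpg": [0.1, 0.9],
}

X = [features[name] for name in human_labels]
y = [human_labels[name] for name in human_labels]

# The model only has something to learn because people supplied y.
model = LogisticRegression().fit(X, y)
print(model.predict([[0.85, 0.2]]))  # expected: [1]
```

Strip out the human-supplied labels and the model has nothing to fit, which is why this work, however invisible, is load-bearing.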

The workers operating behind the scenes, often called ghost workers or invisible workers, are usually hidden from the end user, Savage said. Her research shows that these workers often earn below minimum wage and have limited opportunities for career growth and development.

Taking workers into account requires understanding the systemic challenges they face and the values they hold. Savage said there are several ways AI programs can be used to help ghost workers, including through some tools she’s created:

  • AI programs that can detect when an employer is being unfair to a worker, perhaps through things like negative feedback, and then nudge the employer to reconsider. AI programs can also be used to guide workers to achieve different goals.
  • Studying workers who have been able to grow and thrive, and building programs based on their strategies. “I computationally find those workers and I organize them to share tips and strategies for other workers, so that those other workers can follow [these] strategies and also increase their wages,” Savage said. She has developed web plug-ins that allow workers to share advice.
  • Auditing tools to understand the conditions workers are exposed to — things like hourly wages and invisible labor (a rough sketch of that wage calculation follows this list).
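
The auditing idea is the easiest to make concrete. The sketch below, using invented pay and time figures, shows how an auditing tool might fold unpaid, invisible minutes into a worker’s effective hourly wage. It illustrates the arithmetic only; it is not Savage’s actual tool.

```python
# Hedged sketch of a wage audit: fold unpaid ("invisible") time, such
# as searching for tasks or reading instructions, into an effective
# hourly rate. Pay and time figures are invented for illustration.

tasks = [
    {"pay": 1.20, "paid_min": 10, "invisible_min": 6},
    {"pay": 0.80, "paid_min": 6,  "invisible_min": 9},
    {"pay": 2.00, "paid_min": 15, "invisible_min": 5},
]

total_pay = sum(t["pay"] for t in tasks)
paid_hours = sum(t["paid_min"] for t in tasks) / 60
all_hours = sum(t["paid_min"] + t["invisible_min"] for t in tasks) / 60

print(f"nominal hourly wage:   ${total_pay / paid_hours:.2f}")
print(f"effective hourly wage: ${total_pay / all_hours:.2f}")
```

With these made-up numbers, the nominal rate works out to about $7.74 an hour, while the effective rate falls to about $4.71; that gap is exactly what such an audit is meant to surface.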

“We really need to think about the risks that we are exposing workers to,” Savage said. “For instance, the amount of invisible labor that we’re forcing workers to do.”

Ask who’s not at the table, and whom AI might harm

Abeba Birhane, a PhD candidate in cognitive science at University College Dublin, questioned the assumption that AI is a universally good thing that can solve any problem. AI and algorithmic systems “carry actual, tangible consequences to real people, whether it’s in policing or in the health care system,” she said.

A recurring theme across AI tools is that “individuals in communities that are at the margins of society, people who are the most vulnerable, are always the ones who pay the highest price,” Birhane said.

Things like facial recognition systems, health care algorithms, and privacy violations tend to disproportionately affect and disadvantage Black and transgender people, immigrants, and LGBTQ children.

“If we ask who benefits, we find that people who are creating and deploying these systems benefit, while the cost falls heavily on the marginalized,” she said.

People creating artificial intelligence systems tend to be from privileged backgrounds, she said, and are often ill-equipped to understand potential problems and provide solutions. (For example, AI programs used to look for welfare fraud have had inaccurate and disastrous results for some welfare recipients.)

Even conferences like EmTech, she pointed out, are geared toward company presidents, CEOs, and directors.

“The idea of including or thinking of the end user, or those impacted as the stakeholders, seems a bit of a radical stance,” Birhane said, but those developing AI should consider talking to people in communities where the technology is going to be used.

To be ethical, AI should benefit the most marginalized and the most impacted, Birhane said. This starts with recognizing unjust structural and social power dynamics and hierarchies that benefit some and marginalize others, and thinking of ethics and fairness as concrete and urgent matters.

Companies should also think of ethics as an integral part of the process, from ideation to deployment, rather than an add-on.

“It's an ecology, it's a culture, it's a habit, it's a safe and supportive environment that is not hostile to critique, but that actively seeks critique,” Birhane said. 

For more info: Sara Brown, Senior News Editor and Writer