
Ideas Made to Matter


3 ways to make technology more equitable

By Sara Brown

About 50 years ago, during the height of the civil rights movement and as policymakers started to explore how to use computing, there was an opportunity to align the future of technology and the pursuit of racial justice. Some civil rights leaders proposed using technology to address racial bias and to make sure Black and Indigenous people benefited from innovation.

“From where I sit, we chose the wrong path,” said Charlton McIlwain, a New York University professor and author of “Black Software: The Internet and Racial Justice, from the AfroNet to Black Lives Matter.”

Today opportunities to use, develop, and profit from technology are mostly available to white and wealthy people, he said, or those who have connections to networks of capital. Technology has also exacerbated bias through things like facial recognition, predictive policing, and bail predictions.

Deliberate action is required to redirect the way technology accounts for and perpetuates racism, McIlwain said at the recent EmTech MIT conference hosted by MIT Technology Review. Today this includes addressing past missteps along with reframing for the future.

“How do we overcome discriminatory design when it is hard-coded into the very infrastructure that shapes future technological innovations?” McIlwain asked.

McIlwain presented three strategies to redirect technology down the right path.

Engage race

Anyone working in technology — engineers, data scientists, policymakers, regulators, and those who market and sell technology — needs to embrace a new way of thinking, McIlwain said. 

“We must deeply engage the historical and present realities of race, racial discrimination, and the disparate impacts of race that play out in virtually every social domain,” he said.

In some cases, promising technology ended up perpetuating bias because it did not account for historical context. For example, mortgage lending systems based on artificial intelligence were built in part to avoid bias in face-to-face lending decisions. But they ended up charging Black and Hispanic borrowers higher rates than white borrowers for the same loans, according to a study that McIlwain discussed in a recent MIT Technology Review article. They did so because the algorithms were focused on maximizing price. The people who designed the system did not factor in the ways Black and Hispanic borrowers have been discriminated against in housing and credit markets through practices like redlining, predatory lending, and blockbusting, McIlwain said.

In that article, University of California Berkeley professor Bobby Bartlett, who led the mortgage study, said the problem likely stemmed from lack of awareness. 

“Presumably, whoever is designing the algorithm is unaware of the racial consequence of this single-­minded focus on profitability,” Bartlett said.

Going forward, understanding and addressing the ways racial discrimination plays out should be the focus for those designing these types of systems, McIlwain said.

Ask different questions

The questions people ask technology to solve play an important role in how that technology addresses race, McIlwain said. Better technology starts with asking new questions.

In the 1960s, when technology and computer systems were nascent, there were different views of how they could be used to address societal issues. At that time, Roy Wilkins, a journalist and civil rights leader, argued that Black people should have a stake in the computing revolution, and that advances in computing could be used to look at race through an unbiased lens, instead of furthering systemic racism.

But instead of asking how computing could address racism, President Lyndon Johnson asked a new commission on law enforcement and justice to examine, among other topics, how computing technology could be used to address crime, McIlwain said. The commission’s report advocated for automated criminal justice information systems, he said, which targeted and disenfranchised Black people through things like predictive policing, facial recognition technology, and risk-scoring systems.

“No technologists ever seem to pursue the questions that Wilkins asked,” McIlwain said. “I dare say that if we had started the computing revolution of the 1960s pursuing Wilkins' questions, we would find ourselves in a much different situation today.”

Don’t make people your problem

The next step, closely related to the second, is thinking critically about the problems we enlist technology to solve, McIlwain said. In the 1960s, crime was portrayed as a problem of non-white people, “who President Johnson and the country had already identified as a predominant cause and perpetrators of crime,” he said.

“If we don't want our technology to be used to perpetuate racism, then we must make sure that we don't conflate social problems like crime or violence or disease or delinquency and the like with Black and brown people,” McIlwain said. “When we do that, we risk turning those people into the problems that we deploy our technology to solve, the threat we designed it to eradicate.”

Some technology companies today have a core mission of creating more equitable technology and employ these three strategies. McIlwain pointed to Parity, an AI startup that has developed a platform to mitigate bias in algorithmic systems, using data from communities at risk of being negatively impacted. The company has also developed an advisory service called the Algorithmic Advisory Alliance in collaboration with subject matter experts from the NYU Alliance for Public Interest Technology.

Parity’s mission is rooted in the community of social scientists, anthropologists, and others who engage issues of race, systemic racial inequality, and the disparate impacts of technology, McIlwain said. They are motivated by the question of how to mitigate the potentially disparate impacts of AI and algorithms, and the problem they are tasked with solving is clear: technological systems that are ill-equipped to mitigate that bias on their own.

“Can we imagine, design, and deploy technology that will maximize a potential for racial equity, equal opportunity, and justice? Yes, we can. But to do so will require that we rethink how we identify and frame racial problems,” McIlwain said.

For more info: Sara Brown, Senior News Editor and Writer