5 steps to ‘people-centered’ artificial intelligence

As companies double down on business initiatives built around technologies like predictive analytics, machine learning, and cognitive computing, there’s one element they ignore at their peril — humans.

That was the message from a pair of experts at a recent MIT Sloan Management Review webinar, “People-Centered Principles for AI Implementation.” As organizations push forward on their artificial intelligence journey, they should strive to put individuals at the center of the design process, the experts advised.

“When we talk about people-centered, it is really the idea that AI technology should be amplifying human strengths,” explained David Bray, executive director of the People-Centered Internet coalition and former CIO of the FCC. “It’s about providing and informing [systems] with data to allow people more opportunities in their work.”

A natural progression

For all the hype surrounding AI, the technology is hardly a newcomer. Early forms surfaced in the 1970s and 1980s as decision-support and expert systems. Today, organizations are progressing from a foundation of big data and predictive analytics to machine learning and neural networks, and eventually to fully autonomous AI.

Thanks to the rise of cloud computing, there is now ample memory, storage, and computational horsepower to handle sophisticated algorithms that were developed in the past but not put to use because of technological limitations, said Bray, who is also a senior fellow at the Florida Institute for Human & Machine Cognition.

Companies have been collecting data and are now in the process of transforming that data into insights that will empower more informed decision-making. “We go from data to information flows to insight, and then [from] insight to action,” he explained. “That’s the data-decision paradigm we’re aiming for.”

While AI has cycled through periods of heated interest and winters of stagnation, the time has come for all to get serious about advancing people-centered AI initiatives to stay abreast of the competition, said Bray and his co-presenter, R. “Ray” Wang, CEO of Constellation Research. The pair provided this roadmap for getting started:

1. Classify what you’re trying to accomplish with AI

Most organizations are pursuing initiatives to do the following:

  • Automate tasks with machines so humans can focus on strategic initiatives.
  • Augment people’s skill sets by applying intelligence and algorithms.
  • Discover patterns that wouldn’t be detected otherwise.
  • Mitigate risk and support compliance.

2. Embrace three guiding principles

Transparency. Whenever possible, make the high-level implementation details of an AI project available to all involved. This will help people understand what artificial intelligence is, how it works, and what data sets are involved.

Explainability. Ensure employees and external stakeholders understand how any AI system arrives at its contextual decisions — specifically, what method was used to tune the algorithms and how decision-makers will leverage any conclusions.
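
The speakers name explainability as a principle without prescribing tooling, but one common building block is surfacing which inputs drive a model’s decisions. Below is a minimal, hypothetical sketch using scikit-learn’s permutation importance; the model and data are placeholders, not anything from the webinar.

```python
# A minimal, hypothetical sketch: report which features most influence
# a trained model's decisions, one common building block of
# explainability. Model, data, and names are placeholders.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

X, y = make_classification(n_samples=500, n_features=5, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Shuffle each feature in turn and measure how much accuracy drops:
# a larger drop means the model leans on that feature more heavily.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, score in enumerate(result.importances_mean):
    print(f"feature {i}: importance {score:.3f}")
```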

Reversibility. Organizations must also be able to reverse what deep learning knows: The ability to unlearn certain knowledge or data helps protect against unwanted biases in data sets. Reversibility must be designed in from the conception of an AI effort and will often require cross-functional expertise and support, the experts said.
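
The simplest form of reversibility is exact unlearning: retrain from scratch on the training set minus the flagged records. The sketch below assumes a scikit-learn-style workflow; the helper name and the flagging mechanism are hypothetical, since the speakers describe the principle rather than a technique.

```python
# A minimal sketch of "reversibility" as exact unlearning: retrain the
# model from scratch on the training data minus the flagged records.
# The model choice and the flagging step are assumptions for
# illustration; the speakers do not prescribe a technique.
import numpy as np
from sklearn.linear_model import LogisticRegression

def unlearn_and_retrain(X, y, flagged_idx):
    """Hypothetical helper: drop flagged rows, then retrain so the
    model no longer reflects the removed data."""
    keep = np.setdiff1d(np.arange(len(X)), flagged_idx)
    model = LogisticRegression(max_iter=1000)
    model.fit(X[keep], y[keep])
    return model

# Usage (placeholder data): rows 5 and 17 were flagged as biased.
# model = unlearn_and_retrain(X_train, y_train, flagged_idx=[5, 17])
```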

3. Establish data advocates

When it comes to data, the saying “garbage in, garbage out” holds true. Some companies are appointing chief data officers to oversee data practices, but Bray and Wang said that’s not enough.

The pair suggested identifying stakeholders across the entire organization who understand the quality issues and data risks and who will work from a people-centered code of ethics. These stakeholders are responsible for ensuring data sets are appropriate and for catching any errors or flaws in data sets or AI outputs early.

“It’s got to be a cavalry — it can’t be relegated to just a few people in the organization,” Bray said. One approach the experts suggested is to appoint an ombuds function that brings together stakeholders from different business units as well as outside constituents.

4. Practice “mindful monitoring”

Creating a process for testing data sets for bias can help reduce risk. Bray and Wang suggested sorting data into three pools:

  • Trusted data used to train the AI implementation.
  • Queued data that is potentially worthwhile but not yet vetted.
  • Problematic or unreliable data.

Data should also be reassessed regularly: previously trusted data may prove no longer relevant or reliable, and queued data may take on a newfound role in improving the trusted pool for specific actions, as in the sketch below.
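
Here is a minimal sketch of that triage in Python; the pool names come from the talk, while the Dataset record and the review logic are hypothetical.

```python
# A minimal sketch of the three-pool triage described above. The pool
# names come from the talk; the Dataset record and review logic are
# hypothetical.
from dataclasses import dataclass
from enum import Enum

class Pool(Enum):
    TRUSTED = "trusted"          # approved for training
    QUEUED = "queued"            # potentially worthwhile, not yet vetted
    PROBLEMATIC = "problematic"  # biased or otherwise unreliable

@dataclass
class Dataset:
    name: str
    pool: Pool

def reassess(ds: Dataset, passed_review: bool) -> Dataset:
    """Periodic 'mindful monitoring': promote queued data that passes
    review; demote trusted data that no longer does."""
    if ds.pool is Pool.QUEUED and passed_review:
        ds.pool = Pool.TRUSTED
    elif ds.pool is Pool.TRUSTED and not passed_review:
        ds.pool = Pool.PROBLEMATIC
    return ds

# Usage: a previously trusted set fails its periodic bias review.
sales = reassess(Dataset("sales_2023", Pool.TRUSTED), passed_review=False)
assert sales.pool is Pool.PROBLEMATIC
```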

5. Ground your expectations

Managing expectations of internal and external stakeholders is crucial to long-term success. To gain consensus and keep focus on a people-oriented AI agenda, organizations should ask and answer such questions as: What is our obligation to society? What are the acknowledged unknowns? What are responsible actions or proactive things we can accomplish with AI implementations, and what are the proper safeguards?

In the end, it makes sense to approach AI as an experimental learning activity, with ups, downs, and delays. “There will [be] periods of learning, periods of diminished returns, and [times when] the exponential gain actually benefits the organization,” Bray said. “You need to be grounded and say, ‘This is how we’ve chosen to position ourselves.’ It will serve as your North Star as you move towards the final goal.”
