3 ways to center humans in your company’s artificial intelligence efforts

By Sara Brown

ChatGPT, the powerful new artificial intelligence tool from OpenAI that can answer questions, chat with humans, and generate text, has dominated headlines in the past few months. The tool is advanced enough to pass law school exams (though with fairly low scores), but it has also veered into strange conversations and has shared misinformation.

ChatGPT also highlights an important issue that companies using, or thinking about using, AI need to confront: how to embrace AI in a way that doesn’t harm humans.

“Leadership involves absolutely centering the human and being rigorous before releasing into the wild things that affect these humans,” said Renée Gosline, a senior lecturer and principal research scientist at MIT Sloan. “Having the courage and ethics to say we want to cultivate a system and a relationship with our customers whereby we don’t simply always extract, but we also share value — that’s what leads to loyalty in the long term.”

At the inaugural MIT Thinker-Fest, hosted by the MIT Initiative on the Digital Economy and Thinkers50, Gosline and Sanjeev Vohra, senior managing director and global lead of applied intelligence at Accenture, discussed how companies can keep humans at the center of their AI efforts, and why that’s important.

People won’t trust companies that they think are causing harm, Gosline said, and they are empowered to join social movements against those companies.

“The risk of this is not only to the human, it’s also bad business,” Vohra said.

1. Don’t fall for “frictionless fever.”

Automation makes it easy to remove human actions — or friction — from a variety of experiences. For example, customers can simply scan their faces with their phones to make some purchases instead of manually entering information.

But “frictionless fever,” or removing all friction, isn’t necessarily a good thing. Gosline believes a better approach is to remove “bad friction” and add “good friction.”

6% of companies using AI had implemented responsible AI practices, according to a 2022 Accenture survey.

For example, ChatGPT can frictionlessly create a vast amount of content, which can then be cited by search engines. While this easily increases the scale of information, “it can increase the scale of misinformation,” Gosline said. “It can also reduce the human element in terms of discernment, which is a good form of friction.”

And while it can be frictionless to give companies or algorithms access to your data, it can be difficult to reverse course. Gosline compared this to a “digital lobster trap.”

“Just because I’ve given permission or I’ve entered and followed the bait to give access to some of my data, it doesn’t mean that while I’m there, you now have the right to do with me what you wish,” she said. “So being careful about digital lobster traps being frictionless to enter and friction-filled to leave is a concern that we should all take on.”

Customers also lose trust in companies if they feel that their privacy is being invaded or that they are being manipulated, Gosline said, citing research by Chris Gilliard.

Consent mechanisms, or ways for users to stay informed about and opt in to data sharing (e.g., “I agree” buttons), “are the right mechanisms in terms of making sure people or your customers and users are aware of what they are sharing and for what purpose are they sharing,” Vohra said.
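To make that concrete, here is a minimal, purely hypothetical sketch of what a purpose-scoped, revocable consent record behind such an “I agree” button might look like (the field and purpose names are invented for illustration, not drawn from the discussion):

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import List, Optional

@dataclass
class ConsentRecord:
    """One user's opt-in to a single, named purpose for data sharing."""
    user_id: str
    purpose: str  # e.g. "personalized_recommendations"
    granted_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    revoked_at: Optional[datetime] = None

    def revoke(self) -> None:
        """Withdrawing consent should be as low-friction as granting it."""
        self.revoked_at = datetime.now(timezone.utc)

    @property
    def active(self) -> bool:
        return self.revoked_at is None

def may_use_data(records: List[ConsentRecord], user_id: str, purpose: str) -> bool:
    """Data may be used only for a purpose the user has actively opted into."""
    return any(
        r.user_id == user_id and r.purpose == purpose and r.active
        for r in records
    )

# Consent is granted per named purpose, and revocation takes effect immediately.
records = [ConsentRecord("u-123", "personalized_recommendations")]
assert may_use_data(records, "u-123", "personalized_recommendations")
records[0].revoke()
assert not may_use_data(records, "u-123", "personalized_recommendations")
```

The point of the design is that consent is tied to a specific purpose and can be withdrawn as easily as it was granted, rather than acting as a one-time gate into the digital lobster trap.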

2. Think about the ethics and accuracy of generative AI.

Generative AI like ChatGPT or Dall-E, which creates images based on text prompts, poses new ethical issues for businesses, such as who profits from generated content.

“Are we literally taking power and food from the mouths of content creators?” Gosline asked. “Generally speaking, the people who are most harmed by this don’t have the most power.”

Companies also need to consider who’s building generative models — especially foundation models, such as GPT-3 and Dall-E, that can be reused for different purposes — and what their incentives are.

There’s a danger that only existing power players will benefit from these new sources of revenue, Gosline said. Instead, she contended, a wide variety of people should be able to benefit by creating content and sharing services and ideas.

“People should ask: What kind of business do we want to be in? What kind of leaders are we? Do we want to be the kind of people who calcify existing structures?” Gosline said. “Do we want to be the kind of people who box out others who would otherwise gain access? ... These are important questions. And these are questions that cannot be answered by a model. These are uniquely human questions.”

Furthermore, companies whose business depends on creating content, such as advertising agencies, need to be cautious before relying on large language models to do the work for them. Some argue that ChatGPT creates “shallow” content, and it has been found to share conspiracy theories and inaccurate information.

“Be prepared for the result,” Gosline said. “I think the best-case scenario is insipid output. But the worst-case scenario is error. The worst-case scenario is inaccuracy and bias that’s mass-produced.”

3. Implement responsible AI practices from the beginning.

AI can introduce harm to people, society, brands, and reputations, Vohra said, and the intensity of harm can vary. Using AI to generate sales forecasts might lead to inaccurate orders, but likely won’t harm people, he said, as opposed to using AI for surveillance, hiring, or recruitment. “You don’t want it to be frictionless, as you want good friction there,” he said.

Vohra said most companies are just beginning to use AI — a 2022 Accenture research report found that 12% of companies were “AI achievers,” meaning that they both had foundational AI capabilities and were working on adopting AI. And only 6% of companies using AI had implemented responsible AI practices, the survey found, though 42% said they aim to do so by 2024.

The following are key to establishing responsible AI initiatives:

Building policies linked to core values. “The leadership has to feel it. What is the core value of the company, how do they want to use AI, and where do they want to use AI?” Vohra asked.

Creating an operating framework in terms of how AI use will be governed across the company. This includes not just the technology team but compliance, legal, and other functional teams.

Including fairness and explainability when building AI tools. If AI makes a decision or helps a human make a decision, how did it do that? For example, an auto insurance company using AI for claims adjustments would need to explain how the system looks at claims and makes estimates (a rough sketch of what such an explanation could look like follows this list). Make sure AI is accountable and open to the consumer, Vohra said.

Being clear about when you are using AI. “As we go along in the next few years, I think you will see more regulations around this process as well,” Vohra said. “Companies will have to declare it and be transparent to the consumer as well as to the regulators for it.”
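As a purely illustrative sketch of the kind of explanation the claims example above calls for (hypothetical features and made-up numbers, not Accenture’s or any insurer’s actual system), the toy model below uses a scikit-learn linear regression, so each estimate decomposes into per-feature contributions that an adjuster or customer could inspect:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical claim features; a real system would use far richer data.
feature_names = ["vehicle_age_years", "damage_severity_score", "repair_labor_hours"]

# Toy historical claims (illustrative values only).
X = np.array([
    [2, 3.0, 10],
    [8, 7.5, 30],
    [5, 5.0, 18],
    [1, 2.0, 6],
    [10, 9.0, 42],
])
y = np.array([1800.0, 6200.0, 3600.0, 1100.0, 8500.0])  # settled payouts in dollars

model = LinearRegression().fit(X, y)

def explain_estimate(claim: np.ndarray) -> None:
    """Print the estimate and each feature's additive contribution to it."""
    estimate = model.predict(claim.reshape(1, -1))[0]
    print(f"Estimated payout: ${estimate:,.0f}")
    print(f"  baseline (intercept): ${model.intercept_:,.0f}")
    for name, value, coef in zip(feature_names, claim, model.coef_):
        print(f"  {name} = {value}: contributes ${coef * value:,.0f}")

explain_estimate(np.array([4, 6.0, 20]))
```

More complex models would need dedicated attribution methods, but the principle is the same: every automated estimate should come with a human-readable account of what drove it.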

Watch “Can AI put humans first?”

For more info: Sara Brown, Senior News Editor and Writer