Artificial intelligence is revolutionizing industries by automating customer service, optimizing supply chains, personalizing marketing campaigns, and performing countless other tasks. But with great power comes great responsibility — and risk.
Organizations today must work to ensure that the AI systems they build or implement are safe, secure, unbiased, and transparent, according to Thomas Davenport, a Babson College professor and visiting scholar at the MIT Initiative on the Digital Economy.
During a recent webinar hosted by MIT Sloan Management Review, Davenport highlighted a number of ethical risks that AI can introduce to businesses. These include algorithmic bias in machine learning, varying levels of model transparency, cybersecurity vulnerabilities, and the possibility that AI will serve users insensitive or inappropriate content. Organizations must also contend with whether AI will deliver useful results.
To counteract these risks, organizations need to embed ethical practices into AI solutions from the start, Davenport said. They also need to ensure that teams and individuals are engaged in ethical AI as a part of their everyday work.
Here’s a look at several responsible AI strategies that organizations are using today and how businesses can progress from discussing AI ethics to taking action.
AI strategies at Unilever and Scotiabank
Companies are adopting a variety of strategies to integrate AI ethics into their operations, Davenport said, including appointing heads of AI ethics, performing research about the topic, conducting beta testing, and using external assessors to evaluate use cases.
Consumer packaged goods company Unilever, for example, created an AI assurance function that examines each new AI application to determine its risk level in terms of both effectiveness and ethics, Davenport said. This process requires individuals who propose a use case to fill out a questionnaire. An AI-based application then determines whether the use case is likely to be approved or whether there are underlying problems with the use case. This process ensures that AI models are aligned with ethical guidelines before deployment.
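A questionnaire-driven triage like the one described above can be pictured as a simple scoring function. The sketch below is purely illustrative — the field names, weights, and thresholds are assumptions for the sake of the example, not Unilever's actual criteria:

```python
from dataclasses import dataclass

# Hypothetical questionnaire answers for a proposed AI use case.
# Field names and scoring thresholds are illustrative only.
@dataclass
class UseCaseSubmission:
    uses_personal_data: bool   # does the model process personal data?
    automated_decisions: bool  # does it act without human review?
    customer_facing: bool      # is output shown directly to customers?
    model_explainable: bool    # can the team explain predictions?

def triage(submission: UseCaseSubmission) -> str:
    """Classify a proposed use case into a review tier."""
    score = 0
    score += 2 if submission.uses_personal_data else 0
    score += 2 if submission.automated_decisions else 0
    score += 1 if submission.customer_facing else 0
    score += 1 if not submission.model_explainable else 0

    if score >= 4:
        return "full ethics review"   # high risk: human assessors required
    if score >= 2:
        return "revise and resubmit"  # medium risk: address flagged issues
    return "approved"                 # low risk: proceed to deployment

# A customer-facing model using personal data with no human review
high_risk = UseCaseSubmission(True, True, True, False)
print(triage(high_risk))  # -> full ethics review
```

The point of automating the first pass is throughput: routine low-risk proposals clear quickly, while human assessors focus on the cases the screen flags.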
Scotiabank developed an AI risk management policy and a data ethics team to advance a data ethics policy, Davenport said. The policies are part of the Canadian bank’s code of conduct, which all employees must agree to follow each year. The company also requires mandatory data ethics education for all employees working in the customer insights, data, and analytics organization or on other analytics teams. Scotiabank also worked with Deloitte to develop an automated ethics assistant — similar to Unilever’s — that reviews each use case before it’s deployed. The bank also involves employees in managing unstructured data to determine the most effective answers to customer questions.
“[This] kind of democratization of the process is important not only to your ethics, but also to your productivity as an organization in getting these systems up and running,” Davenport said.
5 stages of AI ethics
Davenport identified five stages that play crucial roles in fostering ethical AI development, deployment, and use within organizations. As companies advance through the stages, they move from talk to action, he noted.
- Evangelism. In this stage, representatives of the company speak internally and externally about the importance of AI ethics.
- Policies. The company deliberates on and approves a set of corporate policies to ensure ethical approaches to AI are established.
- Documentation. The company records data on each AI use case. This could include the use of model cards, which explain how models were designed to be used and how they have been evaluated.
- Review. The company performs or sponsors a systematic review of each use case to determine whether it meets the company’s criteria for responsible AI.
- Action. The company develops a process whereby each use case is either accepted as is, returned to the proposing owner for revision, or rejected.
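The documentation, review, and action stages above can be tied together in a small sketch. The model card fields and the review criterion below are assumptions chosen for illustration — real model cards (per Mitchell et al.'s "Model Cards for Model Reporting") carry far richer detail, and a real review would weigh many criteria beyond a single metric:

```python
from dataclasses import dataclass, field
from typing import List

# Minimal model card; fields are illustrative, not a standard schema.
@dataclass
class ModelCard:
    name: str
    intended_use: str
    evaluation_results: dict  # metric name -> value
    known_limitations: List[str] = field(default_factory=list)

def review(card: ModelCard, min_accuracy: float = 0.9) -> str:
    """Review a documented use case, then act on it.

    Returns one of the three outcomes described in the Action stage:
    accepted, returned for revision, or rejected.
    """
    if not card.intended_use or "accuracy" not in card.evaluation_results:
        return "rejected"               # undocumented: fails review outright
    if card.evaluation_results["accuracy"] < min_accuracy:
        return "returned for revision"  # documented but below the bar
    return "accepted"

card = ModelCard(
    name="churn-predictor-v2",
    intended_use="flag at-risk subscribers for retention outreach",
    evaluation_results={"accuracy": 0.93},
    known_limitations=["not validated on accounts under 30 days old"],
)
print(review(card))  # -> accepted
```

The design choice worth noting is that documentation gates review: a use case with no stated intended use or no evaluation results cannot even enter the accept/revise/reject decision, which mirrors how the stages build on one another.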
Davenport said that it’s important for organizations to make strategic plans for integrating ethics into their AI strategy. “What use cases might make sense for us? We develop a model, we deploy the model, we monitor the model, and ethics come into place throughout that entire process,” he said. “That’s what the most successful organizations do with regard to AI ethics.”
Watch the webinar: How to Build an Ethical AI Culture