As artificial intelligence grows at an unprecedented pace and industry use cases soar, concerns mount about the technology's risks, including bias, data breaches, job loss, and misuse.
According to research firm Arize AI, the number of Fortune 500 companies citing AI as a risk in their annual financial reports hit 281 this year. That represents a 473.5% increase from 2022, when just 49 companies flagged the technology as a risk factor.
Given the scope and seriousness of the risk climate, a team of researchers that included MIT Sloan research scientist Neil Thompson has created the AI Risk Repository, a living database of more than 700 risks posed by AI, categorized by cause and risk domain. The project aims to give industry, policymakers, academics, and risk evaluators a shared framework for monitoring and overseeing AI risks. The repository can also aid organizations with their internal risk assessments, risk mitigation strategies, and research and training development.
The AI Risk Database details 777 different risks cited in AI literature to date.
While other entities have attempted to classify AI risks, existing classifications have generally covered only a small part of the overall AI risk landscape.
“The risks posed by AI systems are becoming increasingly significant as AI adoption accelerates across industry and society,” said Peter Slattery, a researcher at MIT FutureTech and the project lead. “However, these risks are often discussed in fragmented ways, across different industries and academic fields, without a shared vocabulary or consistent framework.”
Creating a unified risk view
To create the risk repository, the researchers searched academic databases and consulted other resources to review existing taxonomies and structured classifications of AI risk. They found that two types of classification systems were common in existing literature: high-level categorizations of causes of AI risks, such as when and why risks from AI occur; and midlevel categorizations of hazards and harms from AI, such as using AI to develop weapons or training AI systems on limited data.
Both types of classification systems are used in the AI Risk Repository, which has three components:
- The AI Risk Database captures 777 different risks from 43 documents, with quotes and page numbers included. It will be updated as new risks emerge.
- The Causal Taxonomy of AI Risks classifies how, when, and why such risks occur, based on their root causes. Causes are broken out into three categories: the entity responsible (human or AI), the intent behind the risk (intentional or unintentional), and the timing of the risk (pre-deployment or post-deployment).
- The Domain Taxonomy of AI Risks segments risks by the domain in which they occur, such as privacy, misinformation, or AI system safety. The taxonomy comprises seven domains and 23 subdomains.
The two taxonomies can be used separately to filter the database for specific risks and domains, or in tandem to understand how each causal factor relates to each risk domain. For example, a user can combine the filters to distinguish risks where an AI system is deliberately trained on toxic content from the outset from risks where a deployed system inadvertently causes harm by displaying toxic content.
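To make that combined filtering concrete, here is a minimal Python sketch of the idea. The field names (`entity`, `intent`, `timing`, `domain`) and the sample records are illustrative assumptions for this article, not the repository's actual schema.

```python
# Minimal sketch: filtering risk records by causal factors and domain.
# The schema and sample entries are illustrative assumptions; the actual
# AI Risk Repository is distributed with its own fields and coding.

risks = [
    {"risk": "Model deliberately trained on toxic content produces slurs",
     "entity": "Human", "intent": "Intentional",
     "timing": "Pre-deployment", "domain": "Discrimination & toxicity"},
    {"risk": "Chatbot surfaces toxic content despite safeguards",
     "entity": "AI", "intent": "Unintentional",
     "timing": "Post-deployment", "domain": "Discrimination & toxicity"},
    {"risk": "Synthetic media used to spread false claims",
     "entity": "Human", "intent": "Intentional",
     "timing": "Post-deployment", "domain": "Misinformation"},
]

def filter_risks(records, **criteria):
    """Return the records matching every (field, value) pair in criteria."""
    return [r for r in records
            if all(r.get(field) == value for field, value in criteria.items())]

# Deliberate toxicity introduced before deployment ...
deliberate = filter_risks(risks, domain="Discrimination & toxicity",
                          intent="Intentional", timing="Pre-deployment")
# ... versus toxic output that emerges inadvertently after deployment.
inadvertent = filter_risks(risks, domain="Discrimination & toxicity",
                           intent="Unintentional", timing="Post-deployment")

print(len(deliberate), len(inadvertent))  # -> 1 1
```

Run against a full export of the database, the same two queries would separate the deliberately induced toxicity risks from the inadvertent post-deployment ones.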
As part of the exercise, the researchers uncovered some interesting insights about the current literature (the sketch after this list shows how such breakdowns can be tallied from the database). Among them:
- Most risks were attributed to AI systems rather than to humans (51% versus 34%).
- Most of the risks discussed occurred after an AI model had been trained and deployed (65%) rather than before (10%).
- Nearly an equal number of intentional (35%) and unintentional (37%) risks were identified.
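Shares like these are straightforward to tally from a local copy of the database. The sketch below again assumes a hypothetical schema; note that the percentages need not sum to 100, since some entries are ambiguous or left uncoded.

```python
from collections import Counter

def share_by(records, field):
    """Percentage of records carrying each value of `field` (hypothetical schema)."""
    counts = Counter(r.get(field, "Unspecified") for r in records)
    return {value: round(100 * n / len(records), 1) for value, n in counts.items()}

# Toy records standing in for database rows; real entries also carry
# quotes and page numbers from the source documents.
sample = [
    {"entity": "AI", "timing": "Post-deployment", "intent": "Unintentional"},
    {"entity": "AI", "timing": "Post-deployment", "intent": "Intentional"},
    {"entity": "Human", "timing": "Pre-deployment", "intent": "Intentional"},
    {"entity": "Other", "timing": "Post-deployment", "intent": "Other"},
]

print(share_by(sample, "entity"))  # {'AI': 50.0, 'Human': 25.0, 'Other': 25.0}
print(share_by(sample, "timing"))
print(share_by(sample, "intent"))
```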
Putting the AI Risk Repository to work
The MIT AI Risk Repository will have different uses for different audiences.
Policymakers. The repository can serve as a guide for developing and enacting regulations on AI systems. For example, it can be used to identify the type and nature of risks and their sources as AI developers aim to comply with regulations like the EU AI Act. The tool also creates a common language and set of criteria for discussing AI risks at a global scale.
Auditors. The repository provides a shared understanding of risks from AI systems that can guide those in charge of evaluating and auditing AI risks. While some AI risk management frameworks have already been developed, they are much less comprehensive.
Academics. The taxonomy can be used to synthesize information about AI risks across studies and sources. It can also help identify gaps in current knowledge so research efforts can be directed toward those areas. The AI Risk Repository can also play a role in education and training, familiarizing students and professionals with the AI risk landscape.
Industry. The AI Risk Repository can be a critical tool for organizations building new systems, supporting safe and responsible AI development. The AI Risk Database can also help identify specific behaviors that mitigate risk exposure.
“The risks of AI are poised to become increasingly common and pressing,” the MIT researchers write. “Efforts to understand and address these risks must be able to keep pace with the advancements in deployment of AI systems. We hope our living, common frame of reference will help these endeavors to be more accessible, incremental, and successful.”