The AI Risk Repository: A Comprehensive Meta-Review of AI Risks
Introduction
The risks posed by Artificial Intelligence (AI) concern a wide range of stakeholders, including academics, auditors, policymakers, AI companies, and the public. However, the lack of a shared understanding of these risks can impede comprehensive discussion, research, and response. This paper addresses that gap by creating an AI Risk Repository to serve as a common frame of reference: a living database of 777 risks extracted from 43 taxonomies, accessible via a website and online spreadsheets.
Key Features of the AI Risk Repository
- Living Database: The repository contains 777 risks from 43 documents, with updates planned.
- Two Taxonomies:
  - Causal Taxonomy: Groups risks by entity (human, AI, other), intentionality (intentional, unintentional), and timing (pre-deployment, post-deployment).
  - Domain Taxonomy: Classifies risks into seven domains (discrimination & toxicity; privacy & security; misinformation; malicious actors & misuse; human-computer interaction; socioeconomic & environmental; and AI system safety, failures, & limitations), further divided into 23 subdomains.
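To make the two-taxonomy structure concrete, a single database entry might be modeled as follows. This is a hypothetical sketch: the field names and the example risk are illustrative, not the repository's actual schema.

```python
from dataclasses import dataclass

@dataclass
class RiskEntry:
    """One risk, classified under both taxonomies (illustrative fields)."""
    description: str
    entity: str     # Causal: "Human", "AI", or "Other"
    intent: str     # Causal: "Intentional" or "Unintentional"
    timing: str     # Causal: "Pre-deployment", "Post-deployment", or "No Clear Time"
    domain: str     # Domain taxonomy: one of the seven domains
    subdomain: str  # One of the 23 subdomains

# Example entry (made up for illustration)
risk = RiskEntry(
    description="Model outputs reinforce stereotypes about a demographic group",
    entity="AI",
    intent="Unintentional",
    timing="Post-deployment",
    domain="Discrimination & Toxicity",
    subdomain="1.1 Unfair Discrimination and Misrepresentation",
)
```

Storing both classifications on each entry lets the database be sliced either by cause (who/why/when) or by domain of harm.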
Development Process
- Systematic Review: A systematic search strategy was used to identify relevant documents, supplemented by forward and backward citation searching.
- Expert Consultation: Domain experts were consulted to suggest additional documents and refine the classifications.
- Best-Fit Framework Synthesis: A best-fit framework synthesis approach was used to develop the taxonomies: existing frameworks were selected and adapted until they effectively categorized the extracted risks.
Causal Taxonomy of AI Risks
| Category | Levels |
| --- | --- |
| Entity | Human, AI, Other |
| Intent | Intentional, Unintentional |
| Timing | Pre-deployment, Post-deployment, No Clear Time |
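Because each causal dimension has a small, closed set of levels, the taxonomy can be sketched as enumerations. This is a hypothetical encoding for illustration, not the repository's own format.

```python
from enum import Enum

class Entity(Enum):
    HUMAN = "Human"
    AI = "AI"
    OTHER = "Other"

class Intent(Enum):
    INTENTIONAL = "Intentional"
    UNINTENTIONAL = "Unintentional"

class Timing(Enum):
    PRE_DEPLOYMENT = "Pre-deployment"
    POST_DEPLOYMENT = "Post-deployment"
    NO_CLEAR_TIME = "No Clear Time"

# Classify an example risk: an AI system unintentionally leaking
# private data after it has been deployed.
classification = (Entity.AI, Intent.UNINTENTIONAL, Timing.POST_DEPLOYMENT)
print([c.value for c in classification])
# → ['AI', 'Unintentional', 'Post-deployment']
```

Using enumerations rather than free-text strings guards against typos and makes the fixed set of valid levels explicit.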
Domain Taxonomy of AI Risks
- Discrimination & Toxicity
  - 1.1 Unfair Discrimination and Misrepresentation
  - 1.2 Exposure to Toxic Content
  - 1.3 Unequal Performance Across Groups
- Privacy & Security
  - 2.1 Compromise of Privacy
  - 2.2 AI System Security Vulnerabilities and Attacks
- Misinformation
  - 3.1 False or Misleading Information
  - 3.2 Pollution of Information Ecosystem and Loss of Consensus Reality
- Malicious Actors & Misuse
  - 4.1 Disinformation, Surveillance, and Influence at Scale
- Human-Computer Interaction
- Socioeconomic & Environmental
- AI System Safety, Failures, & Limitations
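A domain-tagged database of this kind lends itself to simple aggregation, such as counting how many risks fall into each domain. The entries below are made up for illustration; the actual database holds 777 risks across the seven domains.

```python
from collections import Counter

# Illustrative entries only; fields mirror the domain taxonomy above.
risks = [
    {"domain": "Misinformation", "subdomain": "3.1 False or Misleading Information"},
    {"domain": "Privacy & Security", "subdomain": "2.1 Compromise of Privacy"},
    {"domain": "Misinformation",
     "subdomain": "3.2 Pollution of Information Ecosystem and Loss of Consensus Reality"},
]

# Tally risks per top-level domain
by_domain = Counter(r["domain"] for r in risks)
print(by_domain["Misinformation"])  # → 2
```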
Conclusion
The AI Risk Repository is the first attempt to rigorously curate, analyze, and extract AI risk frameworks into a publicly accessible, comprehensive, extensible, and categorized risk database. It provides a foundation for a more coordinated, coherent, and complete approach to defining, auditing, and managing the risks posed by AI systems.
This document provides a clear and accessible resource for understanding and addressing a wide range of risks associated with AI. It offers valuable insights for policymakers, researchers, and practitioners in the field.