
MIT Researchers Launch AI Risk Repository with 700+ Cases for Industry Use

MIT has launched an AI Risk Repository to catalog and classify AI risks, using a two-dimensional system that includes data from 43 taxonomies.


Experts at the Massachusetts Institute of Technology (MIT) have launched an extensive database called the AI Risk Repository. The new tool is intended to catalog and classify a broad spectrum of risks associated with artificial intelligence technologies. The repository collates data from 43 different taxonomies and identifies over 700 unique risks, aiming to guide policymakers, researchers, and industry players through the intricate terrain of AI risks.

Standardizing AI Risk Classification

Before this initiative, efforts to categorize AI risks were fragmented, resulting in varied and incomplete classifications. Peter Slattery, an incoming postdoc at MIT FutureTech and the project's lead, told TechCrunch that these attempts were like “pieces of a jigsaw puzzle.” The AI Risk Repository aims to solve this by integrating data from a diverse range of sources, including peer-reviewed journals and reports.

The repository employs a two-dimensional classification system. First, it categorizes risks by their causes: the responsible entity (human or AI), whether the act was intentional or unintentional, and the stage at which the risk occurs (before or after deployment). This causal framework enables a deeper understanding of how AI risks emerge. Second, risks are sorted into seven key domains, such as discrimination, privacy, misinformation, and malicious use.
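To make the two-dimensional scheme concrete, the sketch below models a repository entry as a small data structure. This is an illustrative assumption, not the repository's actual schema: the field names and the `RiskEntry` class are hypothetical, and only the four domains named above are listed (the full taxonomy has seven).

```python
from dataclasses import dataclass

# Causal dimension: who caused the risk, whether it was intentional,
# and when it occurred relative to deployment.
CAUSAL_ENTITIES = {"human", "AI"}
CAUSAL_INTENTS = {"intentional", "unintentional"}
CAUSAL_TIMINGS = {"pre-deployment", "post-deployment"}

# Domain dimension: the article names four of the seven domains.
DOMAINS = {"discrimination", "privacy", "misinformation", "malicious use"}


@dataclass(frozen=True)
class RiskEntry:
    """Hypothetical entry combining both classification dimensions."""
    description: str
    entity: str   # "human" or "AI"
    intent: str   # "intentional" or "unintentional"
    timing: str   # "pre-deployment" or "post-deployment"
    domain: str   # one of the seven risk domains

    def __post_init__(self):
        # Validate each dimension against its allowed values.
        assert self.entity in CAUSAL_ENTITIES
        assert self.intent in CAUSAL_INTENTS
        assert self.timing in CAUSAL_TIMINGS
        assert self.domain in DOMAINS


# Example: an unintended bias surfacing after a model is deployed.
risk = RiskEntry(
    description="Hiring model ranks candidates unfairly by gender",
    entity="AI",
    intent="unintentional",
    timing="post-deployment",
    domain="discrimination",
)
print(risk.domain)  # discrimination
```

Keeping the causal attributes separate from the domain label mirrors the repository's design: the same risk can be filtered either by how it arises or by the area it affects.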

Practical Applications for Organizations

Organizations can use the AI Risk Repository as a practical tool for risk identification and mitigation. For instance, a company creating an AI-driven hiring platform could consult the repository to pinpoint potential bias and discrimination risks. Similarly, firms using AI for content generation might examine the “Misinformation” section to better understand relevant risks and implement safeguards accordingly.

Neil Thompson, head of the MIT FutureTech Lab, highlighted that the repository is not static but will be continually updated with new risks, research developments, and trends. The research team intends to regularly add new entries and seek expert feedback to ensure the repository remains comprehensive and relevant. This ongoing update strategy aims to provide the most pertinent information for various stakeholders, from AI developers to large-scale AI users.

The repository's creation was motivated by the need to map overlapping areas and gaps in research. MIT researchers collaborated with the University of Queensland, the Future of Life Institute, KU Leuven, and AI startup Harmony Intelligence. Designed to be publicly accessible, the repository allows anyone to explore, copy, and utilize its categorizations.

The next phase of the project will assess how effectively different AI risks are being addressed and identify gaps in organizational responses. The expansive database is designed to enhance oversight and provide a thorough overview of AI risks, saving time for researchers and decision-makers.

Luke Jones
Luke has been writing about Microsoft and the wider tech industry for over 10 years. With a degree in creative and professional writing, Luke looks for the interesting spin when covering AI, Windows, Xbox, and more.
