MIT researchers release a repository of AI risks


As AI systems become increasingly integrated into critical functions across society, the question of how to regulate their use grows more complex. Whether it’s AI controlling critical infrastructure, scoring exams, sorting resumes, or verifying travel documents, each application carries distinct risks that could have severe consequences if not properly managed.

To help policymakers, industry stakeholders, and researchers navigate these challenges, MIT researchers have introduced a pioneering tool called the AI “risk repository.” This extensive database catalogues over 700 AI-related risks, organized by causal factors such as intentionality, by domains such as discrimination, and by subdomains such as disinformation and cyberattacks. The goal is to provide a comprehensive, publicly accessible resource that can inform the development of AI regulations, such as the EU AI Act or California’s SB 1047.
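To make the idea of such a categorized database concrete, here is a minimal sketch of how a risk entry and a domain lookup might be represented in code. The field names and example values are illustrative assumptions, not the repository’s actual schema.

```python
# Hypothetical sketch of entries in a categorized AI risk database;
# field names and values are illustrative, not MIT's actual schema.
from dataclasses import dataclass


@dataclass
class RiskEntry:
    description: str
    intentionality: str  # e.g. "intentional" or "unintentional"
    domain: str          # e.g. "Discrimination"
    subdomain: str       # e.g. "Disinformation"


repository = [
    RiskEntry("Model produces biased hiring recommendations",
              "unintentional", "Discrimination", "Unfair outcomes"),
    RiskEntry("Generated propaganda is spread at scale",
              "intentional", "Misinformation", "Disinformation"),
]

# Pull every catalogued risk filed under a given domain.
discrimination_risks = [r for r in repository if r.domain == "Discrimination"]
print(f"{len(discrimination_risks)} risk(s) in the Discrimination domain")
```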

“This is an attempt to rigorously curate and analyze AI risks into a publicly accessible, comprehensive, extensible and categorized risk database that anyone can copy and use, and that will be kept up to date over time,” explains Peter Slattery, the lead researcher on the project from MIT’s FutureTech group. According to Slattery, the repository was created out of a pressing need to understand the overlaps and gaps in AI safety research — a need that many in the field share.

Existing frameworks for assessing AI risks often lack comprehensiveness. Slattery and his team found that most frameworks cover only a fraction of the risks identified in the new repository. For example, while over 70% of the frameworks reviewed included privacy and security risks, only 44% mentioned misinformation, and a mere 12% addressed the pollution of the information ecosystem, such as the proliferation of AI-generated spam.
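The coverage figures above boil down to counting, for each risk category, what share of the reviewed frameworks mention it. A minimal sketch of that calculation follows; the framework names and category sets are made up for illustration.

```python
# Illustrative coverage calculation: what fraction of reviewed frameworks
# mention a given risk category. Data here is invented for the example.
frameworks = {
    "framework_a": {"privacy", "security", "misinformation"},
    "framework_b": {"privacy", "security"},
    "framework_c": {"privacy", "security", "misinformation", "information pollution"},
    "framework_d": {"privacy", "security"},
}


def coverage(category: str) -> float:
    """Percentage of frameworks whose category set includes `category`."""
    hits = sum(1 for cats in frameworks.values() if category in cats)
    return 100 * hits / len(frameworks)


for cat in ("privacy", "misinformation", "information pollution"):
    print(f"{cat}: {coverage(cat):.0f}% of frameworks")
```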

The fragmented nature of current AI risk assessments has serious implications for AI development, deployment, and regulation. “People may assume there is a consensus on AI risks, but our findings suggest otherwise,” Slattery notes. “When the literature is this fragmented, we shouldn’t assume that we are all on the same page about these risks.”

To build the repository, MIT collaborated with researchers from the University of Queensland, the nonprofit Future of Life Institute, KU Leuven, and the AI startup Harmony Intelligence. Together, they reviewed thousands of academic documents on AI risk evaluation, creating what could become a foundational resource for anyone working on AI safety.

However, the question remains whether this repository will be used effectively. AI regulation around the world is currently disjointed, with different approaches that often lack cohesion. Even if a tool like MIT’s risk repository had existed earlier, it is unclear whether it would have changed how AI regulation developed or how it is implemented today.

Moreover, identifying the risks is only the first step. Addressing them requires robust frameworks and regulatory mechanisms, which are often lacking in today’s landscape. Many existing AI safety evaluations have significant limitations, and a database of risks alone may not be sufficient to overcome these challenges.

Despite these hurdles, the MIT team is committed to pushing forward. Neil Thompson, head of the FutureTech lab, notes that the next phase of their research will focus on evaluating how well these identified risks are being addressed. “Our repository will help us in the next step of our research when we will be evaluating how well different risks are being addressed,” Thompson said. “If everyone focuses on one type of risk while overlooking others of similar importance, that’s something we should notice and address.”

As AI continues to evolve and its applications become more pervasive, tools like the AI risk repository could play a crucial role in shaping how these technologies are governed, ensuring that the benefits of AI are realized while minimizing the potential harms.
