The AI Incident Database, a non-profit initiative originally conceived under the Partnership on AI, tracks and catalogs failures and mishaps associated with artificial intelligence (AI) technologies. Funded by Underwriters Laboratories, an independent testing laboratory founded in 1894, the database aims to close the information gap between AI developers and the general public by promoting transparency. As of its latest update, it has documented more than 600 unique incidents involving automation and AI.
Cataloging AI's Unintended Consequences
The documented incidents span a wide range of problems AI can cause, from political deepfakes to accidents involving autonomous vehicles. Patrick Hall, an assistant professor at the George Washington University School of Business and a board member of the database, highlights the importance of understanding AI's real-world impacts beyond theoretical risks. The database compiles incidents from a variety of sources, including media coverage and social media, bringing to light many issues that might otherwise go unnoticed. Heather Frase, a senior fellow at Georgetown's Center for Security and Emerging Technology and a director at the organization, underscores the potential for AI to inflict both psychological and physical harm, especially as the technology advances.
Future Prospects and a Call for Volunteers
The organization is actively seeking volunteers to help expand its coverage and raise awareness of AI incidents. It operates with the dual goals of realizing AI's potential and ensuring its safe application. According to Frase and Hall, the database serves not only to catalog past incidents but also to help policymakers and developers design safer AI systems and regulations. By measuring the scope and nature of AI-related incidents, the initiative aims to foster an environment where AI can be developed and used responsibly, mitigating its risks to society.
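To make "measuring the scope and nature of AI-related incidents" concrete, here is a minimal Python sketch that tallies incidents by harm category from a local CSV export. The file name incidents.csv and the harm_type column are hypothetical placeholders for illustration, not the database's actual schema.

```python
# Hypothetical sketch: tally incidents by harm category from a local CSV
# export. The file name and the "harm_type" column are assumptions for
# illustration, not the AI Incident Database's real schema.
import csv
from collections import Counter

def summarize_incidents(path: str) -> Counter:
    """Count incidents per harm category in a CSV export."""
    counts: Counter = Counter()
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            counts[row["harm_type"]] += 1
    return counts

if __name__ == "__main__":
    # Print categories from most to least frequent.
    for category, n in summarize_incidents("incidents.csv").most_common():
        print(f"{category}: {n}")
```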
Big Tech Agreement Over AI Election Misinformation
Last month, 20 technology companies, including Microsoft, IBM, and Meta, pledged to combat the misuse of artificial intelligence to influence election outcomes. The companies signed the “Tech Accord to Combat Deceptive Use of AI in 2024 Elections,” announced during the Munich Security Conference.
The accord specifically targets the proliferation of “AI-generated audio, video, and images” that deceptively misrepresent political figures or mislead voters about electoral processes. As AI technologies, including generative adversarial networks, have grown more sophisticated, the ease of producing convincing “deepfakes” has raised significant concerns.
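For readers curious how generative adversarial networks produce such synthetic media, the sketch below shows the core adversarial training loop in PyTorch: a generator learns to produce samples that a discriminator can no longer distinguish from real data. The network sizes, data, and hyperparameters are purely illustrative and are not drawn from any real deepfake system.

```python
# Minimal sketch of the adversarial setup behind GAN-based media synthesis.
# All dimensions and hyperparameters are illustrative.
import torch
import torch.nn as nn

LATENT_DIM = 64   # size of the random noise vector fed to the generator
DATA_DIM = 784    # e.g. a flattened 28x28 grayscale image

# Generator: maps random noise to a synthetic sample.
generator = nn.Sequential(
    nn.Linear(LATENT_DIM, 256), nn.ReLU(),
    nn.Linear(256, DATA_DIM), nn.Tanh(),
)

# Discriminator: scores how "real" a sample looks (1 = real, 0 = fake).
discriminator = nn.Sequential(
    nn.Linear(DATA_DIM, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def train_step(real_batch: torch.Tensor) -> None:
    batch_size = real_batch.size(0)
    real_labels = torch.ones(batch_size, 1)
    fake_labels = torch.zeros(batch_size, 1)

    # 1. Train the discriminator to separate real from generated samples.
    noise = torch.randn(batch_size, LATENT_DIM)
    fake_batch = generator(noise).detach()  # don't backprop into G here
    d_loss = (loss_fn(discriminator(real_batch), real_labels)
              + loss_fn(discriminator(fake_batch), fake_labels))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # 2. Train the generator to fool the discriminator.
    noise = torch.randn(batch_size, LATENT_DIM)
    g_loss = loss_fn(discriminator(generator(noise)), real_labels)
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()

# Example: one step on a batch of random stand-in "real" data in [-1, 1].
train_step(torch.rand(32, DATA_DIM) * 2 - 1)
```

The two networks are trained against each other: as the discriminator gets better at spotting fakes, the generator is pushed to produce ever more realistic output, which is what makes modern deepfakes so convincing.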