OpenAI has announced a comprehensive approach to preventing misuse of its artificial intelligence technology during the 2024 US Presidential election. The generative AI company said it would ensure that tools such as ChatGPT and DALL-E are used only to promote “accurate voting information,” backed by robust policy enforcement and greater transparency. OpenAI confirmed it has begun rolling out an expansive plan to combat potential abuse of its technologies in the election process.
Strategic Plan to Mitigate AI Misuse
To this end, OpenAI has combined the expertise of several internal teams, including its safety systems, threat intelligence, legal, engineering, and policy groups. This cross-functional group is tasked with promptly identifying and addressing any misuse of the company's technology that could threaten the integrity of elections, not only in the US but around the world.
OpenAI has also made clear its stance against the use of its tools for political campaigning, lobbying, or the creation of AI chatbots that impersonate candidates or government organizations. The firm will actively block attempts to use its technology to distribute misleading voting information or to convey messages that could unjustly influence voter turnout.
Innovations for Detecting Deepfakes
To improve the detection of deepfake imagery, OpenAI is developing a new tool known as a “provenance classifier,” designed to identify AI-generated images, including those that have undergone common modifications. OpenAI anticipates that journalists, among other professional testers, will soon receive access to the tool so they can distinguish authentic images from AI-manufactured ones.
Moreover, OpenAI is inviting the public to report suspected violations directly to the company. This move follows scrutiny of Microsoft's Copilot chatbot, which is powered by OpenAI's technology and had provided inaccurate information about past elections. In response, Microsoft has also said it will introduce tools to help political entities certify the authenticity of their digital content, such as advertisements and videos.
Tech Companies Adapting to AI Influence
As artificial intelligence becomes increasingly integrated within the sociopolitical sphere, OpenAI's initiative reflects a growing need for tech companies to take proactive steps in ensuring their platforms do not become conduits for disinformation, especially in the critical context of democratic elections.
Google has already taken a similar approach. Last month, the company announced its own measures to curb AI's influence on elections. In response to the spread of misinformation in previous electoral cycles, the search engine giant has resolved to restrict responses from its generative AI tools to queries related to upcoming elections.
In November, Microsoft Security warned of how AI could impact the 2024 US Presidential election. Its report, titled “Protecting Election 2024 from Foreign Malign Influence,” highlights potential interference by authoritarian states using AI and other advanced technologies.