Microsoft has devised a comprehensive plan to safeguard electoral processes from the threat of AI-generated deepfakes and misinformation. The company is preparing to introduce a service named “Content Credentials as a Service,” a tool designed to preserve the integrity of political content. Built on the digital watermarking credentials developed by the Coalition for Content Provenance and Authenticity (C2PA), the service applies a digital watermark that certifies the authenticity of campaign materials and embeds detailed metadata about the content’s origins.
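To make the idea concrete, the sketch below shows the kind of provenance record such a credential might bind to a media file: a content hash plus origin metadata. The field names, function, and sample values are illustrative assumptions for this article, not the official C2PA manifest schema or Microsoft’s implementation.

```python
import hashlib
import json
from datetime import datetime, timezone

def build_provenance_record(media_bytes: bytes, creator: str, tool: str) -> dict:
    """Bundle a content hash with origin metadata, mimicking the idea of a
    C2PA-style content credential bound to a specific asset (illustrative only)."""
    return {
        "asset_sha256": hashlib.sha256(media_bytes).hexdigest(),  # ties the record to these exact bytes
        "created_utc": datetime.now(timezone.utc).isoformat(),
        "creator": creator,        # e.g. the campaign or publisher
        "generator_tool": tool,    # software that produced the asset
        "edit_history": [],        # later edits would be appended here
        # In a real C2PA manifest this record would be cryptographically signed,
        # so any subsequent tampering would invalidate the credential.
    }

if __name__ == "__main__":
    sample = b"raw bytes of a campaign image or video"
    record = build_provenance_record(sample, creator="Example Campaign", tool="PhotoEditor 1.0")
    print(json.dumps(record, indent=2))
```

The key design point is binding: because the metadata includes a hash of the asset itself, the credential only vouches for the exact file it was issued for.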
Combating Misinformation in Politics
To counter the misuse of political content, including the spread of inaccurate information, Microsoft is setting up a Campaign Success Team. Staffed with experts in AI and cybersecurity, the team will advise political campaigns. Alongside this proactive step, Microsoft is creating an Election Communications Hub to further secure election operations. Supported by Microsoft’s security engineers, the hub will be pivotal in safeguarding the integrity of election communications.
Microsoft’s President, Brad Smith, and Teresa Hutson, Corporate Vice President for Technology for Fundamental Rights, have signaled the company’s support for legal reforms that strengthen protections against deceptive AI used against political campaigns. The proposed Protect Elections from Deceptive AI Act already lays groundwork for this effort, carving out exemptions only for uses such as parody, satire, and journalism.
Enhancing Credible Information Access
In its ongoing effort against election misinformation, Microsoft pledges to use Bing, its search engine, to prioritize authoritative news sources. Working with the National Association of State Election Directors (NASED), Microsoft aims to ensure that the election news presented to users is credible and factual. The approach relies on preferential ranking that surfaces reputable sources more prominently, with the goal of minimizing the reach of less reliable content.
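As a rough illustration of what “preferential ranking” means in practice, the toy example below boosts results whose domains appear on an allow-list of authoritative election sources. The domain list, boost factor, and scoring are invented for this sketch and do not describe Bing’s actual ranking system.

```python
from urllib.parse import urlparse

AUTHORITATIVE_DOMAINS = {"nased.org", "eac.gov", "ap.org"}  # illustrative allow-list
AUTHORITY_BOOST = 1.5                                        # illustrative boost factor

def rerank(results: list[dict]) -> list[dict]:
    """Reorder search results, boosting the relevance score of allow-listed domains.
    Each result is a dict like {'url': str, 'relevance': float}."""
    def score(item: dict) -> float:
        domain = urlparse(item["url"]).netloc.removeprefix("www.")
        boost = AUTHORITY_BOOST if domain in AUTHORITATIVE_DOMAINS else 1.0
        return item["relevance"] * boost
    return sorted(results, key=score, reverse=True)

if __name__ == "__main__":
    hits = [
        {"url": "https://www.example-blog.net/election-rumor", "relevance": 0.92},
        {"url": "https://www.eac.gov/voters", "relevance": 0.80},
    ]
    for hit in rerank(hits):
        print(hit["url"])  # the allow-listed source now ranks first
```

The point of the sketch is the trade-off such systems make: authority acts as a multiplier on relevance rather than a hard filter, so less reliable pages are demoted rather than removed.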
The commitment to these initiatives reflects Microsoft’s broader stance on the growing concerns surrounding generative AI, particularly the consequences of its unregulated use. The recent Executive Order issued by President Biden addresses AI privacy, security, and accuracy concerns, providing guidance as next-generation technologies become more deeply embedded in daily life. Microsoft’s involvement underscores an industry-wide recognition of the importance of maintaining credible and reliable information channels during critical democratic processes.
Meta Takes AI Ad Stance for US Presidential Elections
Starting in 2024, Meta will require advertisers to disclose any digital alterations made with artificial intelligence in their political ads. The Facebook parent company wants to rein in how political campaigns use advanced digital tools ahead of the 2024 U.S. presidential election. Advertisers will have to state clearly when their ads contain images, video, or audio that has been created or significantly altered by digital means.
The change responds to growing concerns over deepfakes and other forms of synthetic media, which can mislead voters and undermine the electoral process. Its effectiveness will hinge on Meta’s ability to enforce the rules and detect modified content.