Meta has unveiled new requirements aimed at bolstering ad transparency, mandating that advertisers disclose any digital alterations made using artificial intelligence in their political advertisements. The Facebook parent company has confirmed that the newly adopted policy is set to become effective with the onset of the new year.
The policy carves out an exception for minor edits. “Advertisers running these ads do not need to disclose when content is digitally created or altered in ways that are inconsequential or immaterial to the claim, assertion, or issue raised in the ad,” Nick Clegg, Meta's President of Global Affairs, wrote in a Threads post.
Strengthening Election Integrity
In anticipation of the 2024 U.S. presidential election, Meta is tightening the rules on how political campaigns use advanced digital tools. Advertisers will be required to explicitly indicate when their advertisements incorporate images, video, or audio that have been digitally created or substantially altered.
This move comes as a response to increasing concerns over the use of deepfakes and other forms of synthetic media, which have the potential to deceive voters and disrupt the electoral process. The effectiveness of the policy hinges on Meta's ability to consistently enforce these rules and accurately detect modified content.
Facing the Challenges of AI
As artificial intelligence continues to evolve at an unprecedented pace, distinguishing between real and AI-generated content is becoming increasingly challenging. Deepfakes, which are meticulously crafted using machine learning algorithms to manipulate audio and visual content, often prove indistinguishable from genuine recordings.
In September, I reported on an Axios-Morning Consult AI Poll that found about half of Americans expect misinformation propagated by AI to influence the outcome of the 2024 presidential election. One-third of respondents also said their trust in the election results would diminish because of artificial intelligence's involvement. That sentiment could intensify the skepticism and unrest surrounding the first presidential race since the infamous Jan. 6, 2021, attack on the U.S. Capitol.
Meta's policy seeks to mitigate the risk of misinformation by ensuring voters are informed when content has been altered, thereby helping preserve the integrity of the democratic process. The company has stated that ads failing to comply with the disclosure requirements will be rejected from the platform, underscoring its commitment to transparency and accountability as the political landscape continues to evolve in the digital age.