The Federal Communications Commission (FCC) has put forward a proposal requiring political advertisements in the United States to disclose the use of artificial intelligence (AI). The initiative, led by FCC Chair Jessica Rosenworcel, aims to increase transparency in political campaigns ahead of the 2024 elections. The proposal reflects the growing accessibility of AI tools and the need for consumers to know when those tools are used in political messaging.
Addressing Concerns Over AI in Political Advertising
The FCC's proposal stems from growing concerns about AI-generated content's potential to spread misinformation and create deepfakes—manipulated videos that falsely depict individuals saying things they never did. These concerns are echoed by former Secretary of State Hillary Clinton, who has highlighted the global risks AI poses to election integrity. Microsoft has also raised alarms about the effectiveness of even simple deepfakes in influencing elections and noted that China is using AI to incite unrest among U.S. residents on social media platforms.
Details of the Proposed Rule
The proposed regulation does not seek to ban AI-generated political ads, but it would require their creators to disclose the use of AI technology. Rosenworcel has urged the four other FCC commissioners to act promptly on the proposal. With two fellow Democrats on the commission, it has a reasonable chance of being adopted.
The proposed rule would apply to broadcast TV and radio as well as cable and satellite providers, requiring on-air disclosures for AI-generated content in political ads. Political advertisers would also need to provide written disclosures in the public files that broadcasters are required to maintain. The rule's effectiveness is uncertain, however, because it would not extend to the internet, a major channel for AI-generated political content. Traditional media remains significant, but social media and streaming platforms are also crucial to political campaigning.
Historical Context and Previous FCC Actions
The FCC has recently moved against other deceptive practices, such as its action against the Royal Tiger AI robocall operation, and has reinstated net neutrality rules. The new AI disclosure rule is a proactive step toward addressing the evolving challenges AI poses in the political arena. Existing U.S. election law prohibits campaigns from fraudulently misrepresenting other candidates or political parties, but it remains unclear whether that prohibition extends to AI-generated content.
The FCC's proposal aims to initiate a rulemaking process expected to take several months to complete. This comes amid broader legislative efforts to regulate AI in elections. Senators Amy Klobuchar and Lisa Murkowski have introduced the AI Transparency in Elections Act, while Senate Majority Leader Chuck Schumer has emphasized the urgent need for Congress to establish AI regulations, particularly for elections.
Online platforms such as Meta have already introduced their own measures, requiring campaigns to disclose the use of deepfakes and banning the use of their generative AI tools for political advertising. Last summer, an attempt to clarify whether existing election laws cover AI-generated depictions of candidates was blocked by Republicans on the Federal Election Commission (FEC), though the FEC has since agreed to revisit the issue.