The Federal Trade Commission (FTC) has granted itself enhanced authority to demand documents during investigations involving artificial intelligence (AI). Commissioners unanimously approved a resolution that allows Commission staff to issue civil investigative demands (CIDs), which function much like subpoenas, when probing AI-related corporate activities. The new power streamlines the documentation request process and will remain in effect for a decade. The agency acknowledges AI’s potential for both innovation and misuse, and through this move aims to bolster its ability to regulate fraudulent or deceptive AI practices effectively.
Cracking Down on AI Misconduct
The FTC underscores that while AI can be leveraged for myriad advantageous applications, it also poses a risk of being employed in fraudulent, deceptive, or privacy-infringing activities. The newly approved resolution expedites investigations in the evolving sphere of artificial intelligence, which is becoming increasingly significant in the trade regulation landscape. Concerns raised by the agency include the possibility that dominant companies could monopolize essential AI inputs or technologies, raising anticompetitive issues.
Combatting Voice Deepfakes
In parallel with enhancing its investigatory capacities, the FTC is confronting the emerging issue of voice deepfakes: synthetic audio generated by AI models that can mimic human voices with alarming accuracy. This deceptive technology can enable serious harms, such as unauthorized access to sensitive information or extortion.
To address this growing concern, the FTC has launched the “FTC Voice Cloning Challenge,” seeking innovative, multidisciplinary solutions to detect and deter AI-generated audio fraud. The challenge winner is set to receive a $25,000 prize, with additional monetary awards for runners-up and honorable mentions. Should the competition fail to yield effective strategies, the FTC warns that more stringent regulation of voice cloning technologies may be needed to forestall their harmful use in the market.
As part of its comprehensive approach, the FTC asserts its commitment to enforcing existing laws and continuing to search for preventative tools against AI-related harms. Specific scrutiny is directed toward prominent services such as OpenAI’s ChatGPT to ensure compliance with consumer protection laws that guard against data privacy violations and reputational damage. This focused oversight signals a proactive regulatory stance, adapting to the challenges posed by the rapidly advancing field of artificial intelligence.
Earlier this month, Microsoft unveiled a comprehensive plan to safeguard electoral processes from the threat of AI-generated deepfakes and misinformation. The company is preparing to introduce a service named “Content Credentials as a Service,” a tool designed to preserve the integrity of political content. Built on the standard developed by the Coalition for Content Provenance and Authenticity (C2PA), the service applies a digital watermark that certifies the authenticity of campaign materials and attaches detailed metadata about the content’s origins.
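To illustrate the general idea behind content credentials, the following is a minimal sketch, not Microsoft’s service or the actual C2PA specification: the real standard embeds signed manifests and certificate chains inside the media file itself, whereas this simplified Python example only shows how provenance metadata can be bound to an asset’s bytes and verified later. The field names and signing key are hypothetical.

```python
# Illustrative sketch of a content-credential-style provenance record.
# Not the C2PA format: real credentials use signed JUMBF/CBOR manifests
# and X.509 certificates. Field names and the key below are hypothetical.
import hashlib
import hmac
import json
from datetime import datetime, timezone

SIGNING_KEY = b"campaign-issuer-demo-key"  # hypothetical demo key

def build_credential(asset_bytes: bytes, issuer: str) -> dict:
    """Bind provenance metadata to the exact bytes of a content asset."""
    manifest = {
        "issuer": issuer,                                   # who published the content
        "created": datetime.now(timezone.utc).isoformat(),  # when it was credentialed
        "asset_sha256": hashlib.sha256(asset_bytes).hexdigest(),  # ties metadata to the asset
    }
    # Sign the manifest so later edits to the asset or its metadata are detectable.
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return manifest

def verify_credential(asset_bytes: bytes, manifest: dict) -> bool:
    """Check that the asset still matches its signed credential."""
    claimed = dict(manifest)
    signature = claimed.pop("signature")
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return (
        hmac.compare_digest(signature, expected)
        and claimed["asset_sha256"] == hashlib.sha256(asset_bytes).hexdigest()
    )

if __name__ == "__main__":
    ad = b"campaign video bytes ..."
    credential = build_credential(ad, issuer="Example Campaign")
    print(verify_credential(ad, credential))         # True: untouched asset
    print(verify_credential(ad + b"x", credential))  # False: tampered asset
```

The point of the sketch is the tamper-evidence property: because the credential hashes the asset and is itself signed, any alteration to either the campaign material or its stated origin fails verification, which is the behavior the watermarking initiative aims to provide at scale.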