Biden Administration Targets AI-Driven Sexual Abuse Imagery

The goal is to dismantle the financial incentives behind image-based sexual abuse, with a particular focus on websites that advertise explicit images of minors.

The Biden administration has called on the technology sector and financial institutions to address the escalating problem of AI-generated abusive sexual imagery. New generative AI tools have made it easier to create realistic, nonconsensual explicit images, which are then shared widely online. Victims, including celebrities and minors, face significant challenges in removing these images once they have spread.

Call for Voluntary Cooperation

The administration is seeking voluntary cooperation from companies to implement measures aimed at curbing the creation, distribution, and monetization of nonconsensual AI images. This appeal is directed not only at AI developers but also at payment processors, financial institutions, cloud service providers, and mobile app store gatekeepers. The goal is to dismantle the financial incentives behind image-based sexual abuse, with a particular focus on websites that advertise explicit images of minors.

Arati Prabhakar, Biden's chief science adviser and director of the Office of Science and Technology Policy, highlighted the rapid spread of nonconsensual imagery driven by generative AI, which has had a profound impact on women and girls. She stressed the urgent need for companies to take swift action.

Legislative Support Needed

While the administration pushes for voluntary commitments, it also recognizes the need for legislative backing. Jennifer Klein, director of the White House Gender Policy Council, emphasized that while company commitments are essential, Congress must also take action. Current laws criminalize the creation and possession of sexual images of children, even if they are AI-generated. However, there is little oversight of the tools and services that enable the creation of such images.

The administration's efforts build on earlier voluntary commitments secured from major technology companies to implement safeguards for new AI systems. Additionally, Biden has signed an executive order on AI intended to ensure public safety, which addresses broader AI concerns, including AI-generated child abuse imagery.

Industry Response and Challenges

The private sector has begun to respond with commitments of its own, including strengthened safeguards introduced after AI-generated images of Taylor Swift circulated online. Schools are also grappling with AI-generated deepfake nudes of students, often created and shared by fellow teenagers.

The administration's document calls on cloud service providers and mobile app stores to restrict applications designed to create or alter sexual images without consent. It also underscores the need for survivors to be able to remove nonconsensual images from online platforms easily.

Federal prosecutors have already taken action against individuals who use AI tools to create explicit images of minors. In one case, a Wisconsin man was charged with using Stable Diffusion to generate such images.

Markus Kasanmascheff
Markus is the founder of WinBuzzer and has been playing with Windows and technology for more than 25 years. He holds a Master's degree in International Economics and previously worked as Lead Windows Expert for Softonic.com.
