
Microsoft CEO Addresses Designer AI Safety Concerns in Wake of Taylor Swift Deepfake Incident

The spread of unauthorized deepfakes of singer Taylor Swift has sparked outrage and calls for stricter regulation.


The online circulation of artificial intelligence-generated explicit images of singer Taylor Swift has prompted calls for stricter regulation of deepfake content. After the images spread rapidly online, there has been a widespread clamor for legal action against the creation and distribution of such material.

The White House has encouraged Congress to take legislative steps to address the issue. White House press secretary Karine Jean-Pierre stressed the urgency of tackling the creation and distribution of non-consensual explicit imagery. Microsoft CEO Satya Nadella also expressed his concern, advocating for enhanced safety measures on online platforms.

At the same time, Microsoft has addressed security loopholes in its Designer AI software, following an exposé linking the platform to the creation of unauthorized deepfake imagery of celebrities. An investigation by 404 Media revealed that users were bypassing Microsoft's protections to generate explicit content featuring public figures, despite company measures to prevent such misuse.

Celebrity Deepfakes Kickstart Regulators

In response to the proliferation of sexualized celebrity deepfakes, US House of Representatives members Joe Morelle (D-NY) and Tom Kean (R-NJ) have put forth the Preventing Deepfakes of Intimate Images Act. The proposed bill aims to criminalize the creation and dissemination of non-consensual AI-generated pornography. If passed, violators could face severe penalties, including prison sentences of up to ten years.

The unauthorized spread of AI-manipulated explicit imagery involving pop superstar Taylor Swift has ignited intense public and corporate concern regarding the safety and ethical use of artificial intelligence. Nadella called such content "alarming and terrible." He emphasized the necessity of swiftly implementing safeguards to prevent Microsoft's Designer software from being misused to produce harmful material, underscoring the universal benefit of a safe online environment for both content creators and consumers.

An underground Telegram channel reportedly advised members to manipulate Microsoft's image-generating AI by altering input prompts. These tricks included misspelling names or using coded language to evade content filters. Investigations showed that simple changes, such as referring to a celebrity with descriptors like "actor" or "singer" followed by their name, could circumvent the AI's restrictions.

Despite the inherent safeguards, dedicated individuals found ways to generate prohibited content, prompting a reevaluation of the system's effectiveness. Microsoft's AI, while designed to block explicit material, struggled to interpret the intention behind creatively misspelled or rephrased prompts. In response, Microsoft has reportedly made adjustments that shut down these specific bypass methods.

Source: ABC News

Luke Jones
Luke has been writing about all things tech for more than five years. He is following Microsoft closely to bring you the latest news about Windows, Office, Azure, Skype, HoloLens and all the rest of their products.
