Microsoft’s Designer AI Adjusts Text Prompts to Curb Violent and Sexual Imagery Generation

Microsoft updated its AI art tool to block violent/sexual content after an employee flagged the issue.

Microsoft has changed its content policies within the Designer AI image creation tool, following internal alerts and public concerns about the generation of violent and sexual content. The adjustments include disabling specific text prompts known to yield inappropriate material. The effort to strengthen safety filters and reduce misuse of the technology reflects Microsoft’s stated commitment to responsible AI use.

Employee Advocacy Sparks Action

The policy updates come in the wake of revelations by Shane Jones, a Microsoft employee, who highlighted the AI’s potential for generating harmful content. Jones demonstrated Designer’s capability to produce violent imagery and art that could infringe on copyrights, sharing his concerns with both Microsoft executives and U.S. regulatory authorities. His persistent advocacy underscores the ongoing debate surrounding the ethical implications of AI in public applications. Jones’s actions have led to detailed scrutiny of Designer’s content filtering mechanisms, pushing Microsoft to reassess and tighten its guidelines.

Last month, Microsoft addressed security loopholes in its Designer AI software, following an exposé linking the platform to the creation of unauthorized deepfake imagery of celebrities. An investigation by 404 Media revealed that users were bypassing Microsoft’s protections to generate explicit content featuring public figures, despite company measures to prevent such misuse.

A wave of fake, explicit images of Taylor Swift, created by abusing Microsoft’s Designer software, sparked outrage among the public and within the company. Microsoft CEO Satya Nadella condemned the content as “horrifying and appalling” and urged immediate action to protect the software from malicious use and to ensure a secure and ethical online space for everyone who makes and enjoys content.

Continuous Monitoring and Future Directions

While the recent changes mark a significant step in addressing potential abuses, challenges remain with the AI’s ability to create imagery that could be considered violent, such as “car accidents,” or that involves copyrighted characters, like Disney’s Elsa depicted against a backdrop of destruction. Microsoft has confirmed it is “continuously monitoring” and adjusting the AI’s functionality, with a spokesperson emphasizing the company’s dedication to “further strengthening our safety filters” to mitigate misuse.

Source: CNBC