
Microsoft and StopNCII Partner to Combat Revenge Porn On Bing

Microsoft has partnered with StopNCII to combat non-consensual intimate imagery (NCII) on Bing search using digital hash technology.


Microsoft has partnered with StopNCII to tackle non-consensual intimate imagery (NCII), commonly known as revenge porn, on the Bing search platform. The collaboration aims to reduce the spread of this sensitive content by employing digital hash technology.

The Role of Digital Hashes

StopNCII, organized by the Revenge Porn Helpline, allows users to create digital hashes from their intimate images and videos without uploading the actual content. These hashes are stored in a database, making it possible to identify and remove the same or similar images online. The initiative also includes major platforms like Facebook, TikTok, Reddit, Pornhub, Instagram, OnlyFans, and Snapchat.

In a March update, Microsoft integrated its PhotoDNA technology with StopNCII. PhotoDNA generates digital fingerprints of harmful images, enabling their detection and removal industry-wide. The method is designed to protect user privacy while combating the spread of NCII.
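Conceptually, the matching process works like the sketch below. This is a simplified illustration, not the actual PhotoDNA algorithm — PhotoDNA is a proprietary perceptual hash that tolerates resizing and re-encoding, whereas the cryptographic hash used here only matches byte-identical files. The `hash_db` set and function names are hypothetical.

```python
import hashlib

def image_fingerprint(data: bytes) -> str:
    """Compute a fingerprint of image bytes.

    Stand-in for a perceptual hash: SHA-256 matches only
    exact copies, while PhotoDNA matches visually similar images.
    """
    return hashlib.sha256(data).hexdigest()

# Hypothetical database of fingerprints submitted by users via StopNCII.
# Only the hashes are stored — never the images themselves.
hash_db = {image_fingerprint(b"reported-image-bytes")}

def should_remove(candidate: bytes) -> bool:
    """Flag a candidate image if its fingerprint is in the database."""
    return image_fingerprint(candidate) in hash_db
```

Because only fingerprints are exchanged, platforms can check content against the database without ever receiving or storing the original imagery.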

Microsoft has been testing the use of StopNCII's hash database to clean up Bing's search index. By late August, the initiative had taken down 268,899 images. The advent of AI has added complexity to the problem, as deepfake technology can generate nudity from non-intimate photos, causing distress to the depicted individuals.

Challenges from AI-Generated Content

A 2019 report from DeepTrace (now Sensity) found that 96% of deepfake videos online are pornographic, with nearly all using women's likenesses without consent. The images are often uploaded for revenge, extortion, or financial gain.

AI-generated images may not match existing PhotoDNA hashes, requiring individuals to manually report such images to Microsoft, Google, and other platforms. Microsoft provides a “Report a Concern” page for users requesting the removal of both real and synthetic images from Bing.

Policy and Advocacy Measures

In July, Microsoft published a whitepaper with recommendations for policymakers to protect Americans from abusive AI deepfakes, highlighting the need to safeguard women and children from online exploitation. The company has also updated its safety measures across its services to prohibit the sharing or creation of intimate images without consent, including tech-altered NCII content.

Microsoft's policy includes prohibiting threats to share NCII, known as intimate extortion. New safety principles, guided by NGOs Thorn and All Tech is Human, aim to reduce child sexual exploitation risks across Microsoft's AI services.

Young people concerned about the release of intimate images can report to NCMEC's Take It Down service. Microsoft continues to evolve its strategy to address these harms and works with global leaders and experts to advocate for policy changes and ensure justice for victims.

Source: Microsoft
Luke Jones
Luke has been writing about Microsoft and the wider tech industry for over 10 years. With a degree in creative and professional writing, Luke looks for the interesting spin when covering AI, Windows, Xbox, and more.
