It appears Microsoft's Bing has been alarmingly ineffective at removing child pornography. Despite its partnership with Facebook to fight child abuse content, researchers found Bing not only displayed such images – it recommended related search terms.
The TechCrunch-commissioned study was conducted by online safety startup AntiToxin. Before we get into those terms, it's important that readers don't try to replicate the results, as doing so could end in arrest, regardless of intention.
AntiToxin found that a number of obvious child pornography terms, including “porn kids”, led to the display of illegal imagery on the search engine. In other cases, users searching for variations of “Omegle Kids” were autocompleted to “Omegle Kids Girls 13” and “Kids on Omegle Showing”.
Though SafeSearch was turned off in these instances, the results still show a massive failing on Microsoft's part to police its platform. Similar searches on Google allegedly return results that aren't clearly illegal, and both companies have a responsibility to remove the content.
Google has previously announced an AI tool specifically designed to combat such content, which can reportedly find child sexual abuse material (CSAM) seven times faster than manual review. The model is available to others for free, and it's unclear if Microsoft is utilizing it. Microsoft does, however, have its own ‘PhotoDNA' system, designed to help services detect and block known images of child abuse.
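The general idea behind a system like PhotoDNA is hash-based matching: each uploaded image is reduced to a fingerprint and checked against a database of fingerprints of known abusive images. The sketch below illustrates that workflow only; it uses an exact cryptographic hash for simplicity, whereas PhotoDNA actually uses a robust perceptual hash that survives resizing and re-encoding, and the database and byte strings here are hypothetical placeholders.

```python
import hashlib

# Hypothetical database of fingerprints of previously identified images.
# Real services populate this from vetted industry hash lists.
KNOWN_BAD_HASHES = {
    hashlib.sha256(b"example flagged image bytes").hexdigest(),
}

def image_fingerprint(image_bytes: bytes) -> str:
    """Return a fingerprint for an uploaded image (illustrative only --
    PhotoDNA's perceptual hash also matches near-duplicates)."""
    return hashlib.sha256(image_bytes).hexdigest()

def should_block(image_bytes: bytes) -> bool:
    """Block the upload if its fingerprint matches a known-bad entry."""
    return image_fingerprint(image_bytes) in KNOWN_BAD_HASHES
```

The key design point is that matching happens against fingerprints, not the images themselves, so platforms can screen uploads without storing or redistributing the illegal material.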
A Disappointing Response
Microsoft has reportedly handled the whole situation very poorly. A spokesperson told TechCrunch an engineering team removed the reported instances and is working on others. However, AntiToxin found that some terms still surface illegal results.
Perhaps the most concerning part about this report is that Microsoft is supposed to be considered an industry leader in such efforts. Its PhotoDNA tech is used by several other companies, with even governments interested.
Microsoft says it combines this tech with human volunteers, but it's clear that wasn't enough in this case. In 2017, former employees sued Microsoft, claiming the company didn't have proper programs in place to deal with the impact of viewing the disturbing content.
Since then, the company has refused to confirm whether all of the terms are gone and has not been transparent about its solution. It also won't disclose the number of human moderators it employs. Clearly, there are legal implications for the platform, but a clear and apologetic approach would go a long way.