Facebook has built a machine learning model that flags photos and videos that may contain misinformation. The AI passes its concerns on to humans, who review the post and decide whether it's fabricated, taken out of context, or making unsubstantiated claims.

The crackdown fills a giant gap in the social media giant's defenses. Though it has been fact-checking articles for some time, articles aren't the sole culprit. Pages can deceive users with paid actors, doctored images, or misleading text overlays, and in doing so circumvent Facebook's checks.

According to Facebook, its new AI uses a number of factors to decide whether an image or video needs review. It notes user feedback, compares OCR text with headlines, and is working on detection of photo or video manipulation.
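Facebook hasn't published how these signals are combined, but the OCR-versus-headline comparison can be illustrated with a toy sketch. Everything here is hypothetical: the function names, the similarity measure, and the threshold are inventions for illustration, not Facebook's actual pipeline.

```python
from difflib import SequenceMatcher

def text_similarity(a: str, b: str) -> float:
    # Normalized similarity between two strings, from 0.0 (nothing
    # in common) to 1.0 (identical), case-insensitive.
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def needs_review(ocr_text: str, headline: str, threshold: float = 0.4) -> bool:
    # Flag the post for human fact-checkers when the text extracted
    # from the image diverges strongly from the accompanying headline.
    # The 0.4 cutoff is arbitrary; a real system would learn it.
    return text_similarity(ocr_text, headline) < threshold
```

In practice, the OCR text would come from an OCR engine run over the image, and this signal would be just one feature among many (user reports, manipulation detection) feeding the review decision.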


Facebook then sends the post to third-party fact-checkers, who use metadata, reverse image searches, and more to verify it. Their ratings are fed back into the model, making it more accurate.
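Facebook hasn't described how fact-checker ratings update the model. As a heavily simplified illustration of that kind of feedback loop, a classifier could nudge its flagging threshold whenever a human verdict disagrees with it. The class, the update rule, and the learning rate below are all invented for this sketch.

```python
class ReviewModel:
    """Toy stand-in for a misinformation-flagging model: a single
    score threshold adjusted by fact-checker verdicts (hypothetical)."""

    def __init__(self, threshold: float = 0.5, lr: float = 0.05):
        self.threshold = threshold  # scores at or above this get flagged
        self.lr = lr                # how far each verdict moves the bar

    def flag(self, score: float) -> bool:
        return score >= self.threshold

    def feedback(self, score: float, was_misinfo: bool) -> None:
        # Fact-checker confirmed misinformation we missed: lower the bar.
        if was_misinfo and not self.flag(score):
            self.threshold -= self.lr
        # Fact-checker cleared a post we flagged: raise the bar.
        elif not was_misinfo and self.flag(score):
            self.threshold += self.lr
```

A production system would retrain on the full labeled corpus rather than tweak a single threshold, but the shape of the loop is the same: human ratings become training signal.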

The Danger of Bias

Facebook’s move is likely to please legislators, who have been calling on it to reduce the spread of fake news. When Mark Zuckerberg faced the Senate, much of the focus was on reducing Russian interference.

However, the possibility of bias in AI has also been a hot topic recently. During his hearing, Twitter CEO Jack Dorsey admitted that a bug had removed a number of non-rule-breaking posts from its search results.

The dataset an AI is trained on is often a big factor in bias. A Microsoft research study revealed how sexist bias can be amplified as AI makes assumptions about photos. Facebook’s model will hinge partially on the ratings of its users and third parties, and some will wonder what it’s doing to ensure their integrity.

After all, information doesn’t always fall into black-and-white categories. Studies can contradict each other, and personal opinion can sway which one a person believes. The line between parody and misinformation is even finer. More recent research suggests AI can develop bias even without a faulty dataset.

Ultimately, though, Facebook has little choice. Its platform of 2.23 billion active users would be impossible to review without the help of machines. As the company works on short-term solutions, it’s also developing technology to keep up with bad actors in the long term.
