
Facebook Leverages Humans and AI to Begin Fact-Check of Photos and Videos

Facebook will begin checking videos and images for false information in a similar way to its news fact-checks, combining third-party reviewers with a machine learning model that improves based on their feedback.


Facebook has built a machine learning model that will flag photos and videos for misinformation. The AI will pass its concerns on to humans, who will review the post and decide whether it is fabricated, taken out of context, or makes unsubstantiated claims.

The crackdown fills a major gap in the social network's fact-checking efforts. Though it has been fact-checking articles for some time, articles aren't the sole culprit. Pages can deceive users with paid actors, doctored images, or misleading text overlays. By doing so, they can circumvent Facebook's checks.

According to Facebook, its new AI uses a number of factors to decide whether an image or video needs review. It takes user feedback into account, compares text extracted from images via OCR with headlines, and the company is also working on detecting photo and video manipulation directly.
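Facebook has not published the exact logic, but a minimal sketch of how such signals could be combined, assuming the overlay text has already been extracted by OCR, might look like the following. The function name, thresholds, and features are illustrative assumptions, not Facebook's actual system.

```python
# Hypothetical sketch: flag a post for human review based on user reports
# and on how far the text baked into the image diverges from the headline.
# Thresholds and feature choices are illustrative, not Facebook's real values.
from difflib import SequenceMatcher

def needs_review(ocr_text: str, headline: str, user_reports: int,
                 report_threshold: int = 10,
                 similarity_floor: float = 0.4) -> bool:
    """Return True when a post should be routed to third-party fact-checkers."""
    # Crude text similarity between the image's overlay text and the headline.
    similarity = SequenceMatcher(None, ocr_text.lower(), headline.lower()).ratio()
    heavily_reported = user_reports >= report_threshold
    text_mismatch = similarity < similarity_floor
    return heavily_reported or text_mismatch

# Example: a heavily reported post is flagged regardless of its text.
print(needs_review("BREAKING: city underwater after dam burst",
                   "Local bakery wins regional pastry award",
                   user_reports=12))  # True: heavily reported
```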

Facebook then sends flagged posts to third-party fact-checkers, who use metadata, reverse image searches, and other techniques to verify them. The ratings from these fact-checkers are fed back into the model, making it more accurate.
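That feedback loop is a standard human-in-the-loop pattern: fact-checker verdicts become training labels for the next model refresh. A minimal sketch, assuming a simple scikit-learn classifier and two placeholder features (neither reflects Facebook's real pipeline), could look like this:

```python
# Sketch of the human-in-the-loop retraining idea: fact-checker verdicts are
# folded back into the training set and the classifier is periodically refit.
# Features, model choice, and thresholds are assumptions for illustration only.
from sklearn.linear_model import LogisticRegression

training_features = []  # e.g. [user_reports, ocr_headline_similarity]
training_labels = []    # 1 = rated false by fact-checkers, 0 = rated accurate
model = LogisticRegression()

def record_verdict(features, rated_false: bool) -> None:
    """Store each fact-checker rating for the next retraining run."""
    training_features.append(features)
    training_labels.append(1 if rated_false else 0)

def retrain() -> None:
    """Refit the classifier on everything reviewed so far."""
    if len(set(training_labels)) < 2:
        return  # need examples of both classes before fitting
    model.fit(training_features, training_labels)

# Example loop: verdicts accumulate, then the model is refit and reused.
record_verdict([12, 0.2], rated_false=True)
record_verdict([1, 0.9], rated_false=False)
retrain()
print(model.predict([[8, 0.3]]))  # predicted label for a new, unreviewed post
```

Retraining on reviewer verdicts is what lets the flagging model keep pace with new deception tactics, but it is also where the bias concerns discussed below enter the picture.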

The Danger of Bias

Facebook’s move is likely to please legislators, who have been calling for it to reduce the spread of fake news. When Mark Zuckerberg faced the Senate, much of the focus was on the reduction of Russian interference.

However, the possibility of bias in AI has also been a hot topic recently. During his hearing, Twitter CEO Jack Dorsey admitted that a bug had removed a number of posts that broke no rules from its search results.

The dataset an AI is trained on is often a major source of bias. A Microsoft research study revealed how sexist bias can be amplified as AI makes assumptions about photos. Facebook's model will hinge partially on the ratings of its users and third parties, and some will wonder what it's doing to ensure their integrity.

After all, information doesn't always fall into black-and-white categories. Studies can contradict each other, and personal opinions can sway which one a person believes. The line between parody and misinformation is even finer. More recent research suggests that AI can develop bias even without a faulty dataset.

Ultimately, though, Facebook has little choice. Its platform of 2.23 billion active members would be impossible to review without the help of machines. As the company rolls out these short-term measures, it's also developing technology to keep up with bad actors in the long term.

Source: Facebook
Ryan Maskell (https://ryanmaskell.co.uk)
Ryan has had a passion for gaming and technology since early childhood. Fusing the skills from his Creative Writing and Publishing degree with profound technical knowledge, he enjoys covering news about Microsoft. As an avid writer, he is also working on his debut novel.
