
LinkedIn Embraces AI to Thwart Fake Profiles

LinkedIn is using a new machine learning model to find fake or inappropriate profiles, replacing a previous block-list approach that did not scale.


Social networks face a major battle in keeping peace on their platforms. One major problem is people creating profiles to post inappropriate content, which can be anything from offensive images and false advertising to profanity and hate speech. LinkedIn is using AI to handle such profiles and has explained how it tackles the situation.

The Microsoft-owned platform has over 300 million monthly active users. While LinkedIn's business-centric model arguably makes it less exposed to hate speech than Facebook or Twitter, the platform faces its own specific issues with fake profiles.

In a blog post, software engineer Daniel Gorham explains that LinkedIn previously used human curators to maintain a block list of words and phrases. If a profile contained words or phrases that breached the terms of service, it would be taken down.

However, maintaining such a policing system proved too costly. Gorham points to the following issues with this method (a simple sketch of the approach follows the list):

  • “Scalability. This approach is a fundamentally manual process, and significant care must be taken when evaluating words or phrases. 
  • Context. Many words may be used in both appropriate and inappropriate contexts. For example, the word “escort” is often associated with prostitution, but may also be used in contexts such as a “security escort” or “medical escort.”
  • Maintainability. Blocklists only grow larger over time as more phrases are identified. Tracking performance as a whole is simple, but doing so on a phrase-by-phrase basis is non-trivial. Significant engineering effort is required to ensure the stability of the system is maintained.”
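To make the context problem concrete, here is a minimal sketch of a block-list check of the kind Gorham describes. The phrases and profile strings are invented for illustration; LinkedIn has not published its actual list or matching logic.

```python
# Hypothetical block-list check; the entries below are illustrative only.
BLOCKLIST = {"escort", "get rich quick", "xxx"}

def breaches_blocklist(profile_text: str) -> bool:
    """Return True if any blocked phrase appears in the profile text."""
    text = profile_text.lower()
    return any(phrase in text for phrase in BLOCKLIST)

# The context problem in practice: both profiles trigger the same rule,
# but only the first is actually inappropriate.
print(breaches_blocklist("Escort services, available 24/7"))      # True
print(breaches_blocklist("Medical escort for elderly patients"))  # True (false positive)
```

Because a plain substring match cannot tell these uses apart, every new phrase added to the list has to be weighed by hand, which is exactly the scalability and maintainability burden Gorham describes.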

AI Approach

Instead, LinkedIn turned to artificial intelligence and machine learning. The company now uses an AI model built on a convolutional neural network, a type of deep learning algorithm best known for image analysis but also effective for classifying text.

“Our machine learning model is a text classifier trained on public member profile content. To train this classifier, we first needed to build a training set consisting of accounts labeled as either “inappropriate” or “appropriate.” The “inappropriate” labels consist of accounts that have been removed from the platform due to inappropriate content.”
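For readers curious what such a model looks like, below is a minimal sketch of a convolutional text classifier in PyTorch. The two labels come from the quote above; the layer sizes, vocabulary size, and tokenization are illustrative assumptions, not LinkedIn's published architecture.

```python
import torch
import torch.nn as nn

class ProfileTextCNN(nn.Module):
    """Toy convolutional text classifier: "appropriate" vs. "inappropriate"."""

    def __init__(self, vocab_size=30000, embed_dim=128, num_filters=100,
                 kernel_sizes=(3, 4, 5), num_classes=2):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        # One 1-D convolution per kernel size, sliding over the token sequence.
        self.convs = nn.ModuleList(
            [nn.Conv1d(embed_dim, num_filters, k) for k in kernel_sizes]
        )
        self.fc = nn.Linear(num_filters * len(kernel_sizes), num_classes)

    def forward(self, token_ids):                      # (batch, seq_len)
        x = self.embedding(token_ids).transpose(1, 2)  # (batch, embed_dim, seq_len)
        # ReLU, then max-pool each convolution's output over the sequence.
        pooled = [conv(x).relu().max(dim=2).values for conv in self.convs]
        return self.fc(torch.cat(pooled, dim=1))       # logits for the two labels

# Example: score a batch of two tokenized profiles (random ids for illustration).
model = ProfileTextCNN()
dummy_batch = torch.randint(0, 30000, (2, 64))
print(model(dummy_batch).shape)  # torch.Size([2, 2])
```

Here the convolutions slide over the token sequence rather than over pixels, which is how a network family known for image analysis ends up classifying profile text.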

Source: LinkedIn
Luke Jones
Luke has been writing about all things tech for more than five years. He is following Microsoft closely to bring you the latest news about Windows, Office, Azure, Skype, HoloLens and all the rest of their products.
