The terrorist attack in Christchurch, New Zealand two weeks ago shocked the world and was described by the country's Prime Minister as one of New Zealand's darkest days. During the attack, a gunman livestreamed his rampage on Facebook Live as he claimed the lives of 49 people.
Obvious questions emerged from this event and once again put a spotlight on social media providers. Should Facebook be more accountable when people abuse its platform in this way? Some now say yes.
However, one question I and many others asked almost instantly when news of the attack broke was: How was it possible for this man to livestream a mass shooting for 20 minutes? Doesn't Facebook have any way of stopping these kinds of videos the moment they go live?
Logic would suggest yes; the company should have AI models and even human moderators to spot this type of content the instant it goes live. That's not the case: Facebook has admitted its AI is not sophisticated enough to stop this kind of video.
In other words, the social network says it cannot control Facebook Live: its AI is not good enough to detect and remove extremist content fast enough. That admission puts the whole Live model in jeopardy. If it cannot be controlled, why should it exist?
Facebook says the live broadcast of the Christchurch mass shooting was watched by around 200 viewers, and a further 4,000 viewed the recording after the livestream ended. From there, copies were made, then copies of copies, until the video had been shared millions of times.
The video is now out there and can still be found, and in the eyes of some observers Facebook allowed this to happen. In its defense, the company says the Facebook Live broadcast was not reported while it was running, so it could not act fast enough.
Since then, the company has worked to remove 1.5 million versions of the video. That number alone illustrates the scale of the problem, and it has led critics to argue that Facebook simply must do more to find this type of content the moment it goes live.