Facebook And YouTube Are Trying—And Failing—To Contain Fallout Of New Zealand Shooting Footage
By Helen A. S. Popkin
Facebook, YouTube and other Internet companies sought to contain the fallout after a mass shooter broadcast a 17-minute video of his deadly attack on two New Zealand mosques and left a trail of references to other hate crimes on social networks, highlighting these ubiquitous platforms' role in fostering extremist views.
New Zealand police say the gunman, an Australian citizen who killed 49 people and injured 48 when he opened fire on two mosques during prayer services, first live-streamed the attack on Facebook. Replays of the live video quickly spread to Instagram, YouTube, Twitter and Reddit. Though the original videos and links related to the Christchurch shootings were removed, Silicon Valley giants remain engaged in a game of whack-a-mole as segments of the video, as well as related propaganda, are posted and reposted across the Internet.
At the time of publication, clips of the video could still be found on the controversial 8chan forum, where a person claiming to be the shooter discussed the killings and posted a link to the Facebook page that live-streamed the fatal attacks, as well as a 74-page manifesto.
The Internet trail left by the suspected shooter, who at one point exhorted viewers to "subscribe to PewDiePie," a popular YouTube streamer who has had to apologize for anti-Semitic remarks, and who made references to far-right violence, suggested he had spent time in forums dedicated to extremism and wanted to play to an avid Internet audience. Before they were removed, both the Facebook page and a Twitter account connected to the shooter included links to YouTube videos supporting the white nationalist and anti-immigration views echoed in the manifesto.
Facebook, alerted to the video by New Zealand police, removed both it and the poster’s Facebook and Instagram accounts shortly after the live stream began. The social network said in a statement that it continues to remove any content praising the attack “as soon as we’re aware.”
The social network on Friday also announced a new technology designed to help prevent the viral spread of images and video. Moderators can now create a digital fingerprint of a problematic image which, according to Facebook, can "stop it from ever being shared on our platform in the first place." It's unclear how such a technology could have prevented the initial spread of a live-streamed video taken by the gunman.
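Facebook has not detailed how its fingerprinting system works, but the technique it describes resembles perceptual hashing: reducing an image to a compact signature that survives re-encoding and minor edits, then comparing new uploads against a blocklist of known signatures. The Python sketch below illustrates one simple variant, an "average hash." The file names are hypothetical, and this is an illustration of the general approach, not Facebook's actual implementation.

    # A minimal sketch of perceptual-hash fingerprinting, assuming Pillow is
    # installed (pip install Pillow). Illustrative only; not Facebook's system.
    from PIL import Image

    def fingerprint(path: str) -> int:
        """64-bit average hash: grayscale, shrink to 8x8, then set one
        bit per pixel that is brighter than the mean brightness."""
        img = Image.open(path).convert("L").resize((8, 8), Image.LANCZOS)
        pixels = list(img.getdata())
        mean = sum(pixels) / len(pixels)
        bits = 0
        for px in pixels:
            bits = (bits << 1) | (1 if px > mean else 0)
        return bits

    def hamming(a: int, b: int) -> int:
        """Count of differing bits between two fingerprints."""
        return bin(a ^ b).count("1")

    # A moderator flags a frame; its fingerprint joins a blocklist.
    blocklist = {fingerprint("flagged_frame.jpg")}  # hypothetical file

    def is_blocked(path: str, threshold: int = 8) -> bool:
        """Reject uploads whose fingerprint is within `threshold` bits of
        any blocklisted one, tolerating re-encoding and small edits."""
        return any(hamming(fingerprint(path), known) <= threshold
                   for known in blocklist)

Because matching of this kind is approximate rather than exact, copies that have been cropped, mirrored or re-filmed from a screen can slip past such filters, which helps explain the whack-a-mole dynamic the platforms described.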
Google, YouTube’s parent company, also said it was doing its best to remove the video clips. In the aftermath of the shooting, clips of the livestream continued to be shared. The rapid spread of the video, even after Facebook and YouTube started to take it down, underscored the scale of the problem created by platforms that have relied on a largely automated process to remove bad content. Even as they add more human reviewers, the explosive growth of video has overwhelmed their beefed-up controls.
Facebook, already in the uncomfortable spotlight as the U.S. federal government reviews its data-sharing deals with outside entities, had previously placed the onus on its users to flag videos, and even then, a content reviewer was tasked with finding the problematic material within the video, regardless of its length.
In response to the Russian election interference scandal in 2016, Facebook added 5,000 security and community positions by the end of 2018; it now has 7,500 content moderators around the world tasked with providing 24/7 coverage. Google announced in 2017 that it would hire 10,000 new content moderators and develop artificial intelligence to detect and prevent the spread of harmful content on YouTube. The limits of that effort were exposed last month, when advertisers again threatened to boycott the popular streaming platform after sexualized comments were found to persist on videos featuring children.
However horrifying the spread of the New Zealand mass shooting through social media, it would be disingenuous for any social media outlet to describe it as unexpected. According to a 2017 BuzzFeed analysis, at least 45 instances of live-streamed violence have occurred on Facebook Live since its debut in 2015, including beatings, murders, rape and suicide. And just as the New Zealand gunman allegedly used social media to encourage more violence and spread racist propaganda, ISIS has been far more sophisticated in its well-documented use of social media platforms as a successful tool of radicalization and recruitment.