Facebook Bans White Nationalism Content After Being Threatened With Jail [Updated]
[Updated to include Facebook’s policy change, prohibiting the ‘praise, support and representation’ of white nationalism and separatism.]
In the wake of Christchurch, demands for stricter regulation of social media took a serious turn on Wednesday, with the threat of jail for execs who don’t effectively police their platforms and even Microsoft President Brad Smith speaking out. “Is there some base level of standards of decency or civilization we are going to ask these networks or platforms to be bound to?” he asked, after discussing events with New Zealand Prime Minister Jacinda Ardern earlier in the week. Well, is there?
Facebook and its peers have admitted they can’t police or control what is ‘published’ on their platforms. And Monday’s news that Facebook is still allowing Neo-Nazi hatred to be ‘published’ even after Christchurch made matters worse. One could be forgiven for thinking it was these latest revelations that finally prompted a major change of Facebook policy on Wednesday.
A Facebook spokesperson had told me earlier this week that “we want Facebook to be a safe place and we will continue to invest in keeping harm, terrorism, and hate speech off the platform.” And following days of criticism, the company has now announced that white nationalist postings will be prohibited across its platforms, crediting three months of discussions with race relations experts rather than more recent events.
The “ban on praise, support and representation of white nationalism and separatism on Facebook and Instagram, which we’ll start enforcing next week,” has been brought about because “it’s clear that these concepts are deeply linked to organized hate groups and have no place on our services.”
Leading social media companies have to act because it is clear that lawmakers are fast losing patience. Australia has become the first country post-Christchurch to threaten to jail social media executives who cannot control their platforms. “If social media companies fail to demonstrate a willingness to immediately institute changes to prevent the use of their platforms,” Prime Minister Scott Morrison said on Tuesday, “like what was filmed and shared by the perpetrators of the terrible offenses in Christchurch, we will take action.”
And so has the tipping point been reached?
Despite Facebook’s latest concession, content is one thing but live video is quite another. It was the live streaming of the Christchurch attack that prompted the greatest criticism of Facebook and YouTube, and the challenge is that the platforms can’t control the sheer scale and immediacy of this kind of content. A repeat of Christchurch would meet the same inability to control events; nothing in that regard has changed. Facebook has admitted that the company could not control Facebook Live, that admission has yet to be addressed, and the future of live streaming on the platform must still be in doubt.
On Monday, the French Council of the Muslim Faith (CFCM) announced that it will take legal action against Facebook and YouTube for inciting violence by live streaming footage from Christchurch. The accusation is that the companies disseminate material that incites terrorism and degrades human dignity. The Federation of Islamic Associations of New Zealand (FIANZ) welcomed this action. “They have failed big time, this was a person who was looking for an audience,” a spokesperson said, referring to Facebook, “you were the platform he chose to advertise himself and his heinous crime.”
And then came the news that Australia is considering criminal charges and potential jail time for social media execs who fail to control what is streamed on their platforms. Prime Minister Morrison met with the leading social media firms on Tuesday, including Facebook, Twitter and Google, to seek assurances as to how they would prevent their platforms and services being ‘weaponized’ by terrorists.
If the companies “can get an ad to you in half a second,” Morrison told reporters before the meeting, “they should be able to pull down this sort of terrorist material and other types of very dangerous material in the same sort of time frame and apply their great capacities to the real challenges to keep Australians safe.”
Australia’s Attorney-General Christian Porter described Tuesday’s meeting as “thoroughly underwhelming”, saying that the government was “absolutely considering” jailing executives as a sanction, and that Australia’s “extra-territorial reach” meant it did not matter where any of those companies might be based.
Cue Microsoft, and the company’s stark warning to social media at an event in Australia on Wednesday. “The days of thinking about these platforms as being akin to the postal service with no responsibility, even legally, for what is inside a letter – I think those days are gone,” Brad Smith said. “In the world of social media, you would never see [some of the content shared] pass muster as a radio station or a television network because they are just almost exclusively devoted to spewing hatred.”
Facebook was approached for comment on this latest news but had not responded at the time of publication.
The bubble bursts
In the last few days, the calls for social media regulation have moved from sidebar headlines to the mainstream. It is inevitable now that further significant change will come, and criticism of the self-regulated social media bubble cannot continue to be batted away by execs focused only on user growth and share price.
With Facebook’s ban on far-right postings, live streaming will now become the battleground. If it is damaging to the public interest to provide a broadcast platform to extremists, to murderers, to the vulnerable and the suicidal, and if that platform cannot be controlled, then there is no public interest case for leaving things as they are.
All roads still lead to regulation, but the pace is accelerating. To emphasize the point, Facebook’s blog post landed on the same day that Australian political pressure revved up, signaling a major – albeit long overdue – policy shift: the “ban on praise, support and representation of white nationalism and separatism on Facebook and Instagram” marks a change of stance for a company that had previously differentiated between white supremacist and white nationalist content.
“We didn’t originally apply the same rationale to expressions of white nationalism and separatism,” they explained, “because we were thinking about broader concepts of nationalism and separatism — things like American pride and Basque separatism, which are an important part of people’s identity.” But now it’s clear, they acknowledged, “that these concepts are deeply linked to organized hate groups and have no place on our services.”
New Zealand PM Jacinda Ardern welcomed Facebook’s decision “in the wake of the attack in Christchurch”, but made the point to reporters that “arguably these categories should always fall within the community guidelines of hate speech.”
Facebook itself credited discussions with “members of civil society and academics who are experts in race relations around the world” for the change, citing months of engagement rather than a reaction to events in New Zealand. They said that experts had “confirmed that white nationalism and separatism cannot be meaningfully separated from white supremacy and organized hate groups… while people will still be able to demonstrate pride in their ethnic heritage, we will not tolerate praise or support for white nationalism and separatism.”
From next week, “people searching for these terms will be directed to Life After Hate, an organization, founded by former violent extremists, that provides crisis intervention, education, support groups and outreach”.
And so it begins…