The exposé on the inhuman working conditions of Facebook’s content moderators sheds light once again on the depravity of the human condition.
As ever, policy change at Facebook comes only when pressure is applied, and that pressure is certainly being applied now. And so, Facebook explained, from now on "someone who shares a link to a statement from a terrorist group with no context will now be immediately blocked from using Live for a set period of time."
In narrow domains such as terrorism, companies have adopted blacklists of previously identified material, but they have largely struggled to proactively prevent new illegal and harmful content from being posted in the first place.
From Facebook’s standpoint, moving its content moderation to users’ own devices will allow it to continue enforcing its acceptable speech regulations even as user content is increasingly encrypted and takes the form of user-to-user private communications rather than public posts.
On Thursday, Facebook responded to U.K. regulation and announced a permanent ban on all of the U.K.’s most prominent far-right groups.
According to security firm UpGuard, app developers exposed users' data on public servers. In one case, Mexico-based company Cultura Colectiva had stored 540 million records on Facebook users, weighing in at 146 gigabytes.
Facebook Chief Operating Officer Sheryl Sandberg announced that the social media giant is "exploring restrictions" for live videos after a gunman streamed a mass shooting inside a New Zealand mosque earlier this month.
Following days of criticism, Facebook has now announced that white nationalist postings will be prohibited across its platforms.
Facebook announced on Tuesday that it had identified and removed a significant number of pages, groups and accounts engaged in "coordinated inauthentic behavior."
Video of a person claiming to be the shooter, along with a link to the Facebook page that live-streamed the fatal attacks, still exists on the dark web.