Facebook and the other social media sites did not create the challenges we face, or the political polarization that now roils our society. But theirs is the field on which these issues are being played out.
If you’ve recently spotted cartoons on Facebook or Instagram mocking President Trump’s response to the COVID-19 pandemic, they may have been the work of a group closely associated with the Iranian government.
REPORT: Facebook and WeChat, with more than 2 billion installs between them, are shipping with serious security vulnerabilities onboard.
Confusion about the First Amendment has distorted the national dialogue about the role of social media and has given rise to a series of imprudent proposals.
Facebook and Twitter both said they had removed several fake accounts tied to a state-backed campaign to spread disinformation about pro-democracy protesters in Hong Kong.
The exposé on the inhuman working conditions of Facebook’s content moderators sheds light once again on the depravity of the human condition.
As ever, policy change at Facebook comes when pressure is applied, and that pressure is certainly being applied now. And so, from now on, Facebook explained, “someone who shares a link to a statement from a terrorist group with no context will now be immediately blocked from using Live for a set period of time.”
In specific narrow domains like terrorism, companies have adopted blacklists of previously identified material, but in terms of proactively preventing new illegal and harmful content from being posted in the first place, the companies have largely struggled.
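The blacklist approach described above can be sketched in a few lines. This is a minimal illustration, not any company's actual system: real platforms use perceptual hashes (such as Microsoft's PhotoDNA) that survive re-encoding and cropping, whereas the exact SHA-256 digests below only catch byte-identical copies, which is precisely why novel content slips through.

```python
import hashlib

# Hypothetical blocklist of digests of previously identified material.
BLOCKLIST = {
    hashlib.sha256(b"known-bad-content").hexdigest(),
}

def is_blocked(content: bytes) -> bool:
    """Return True if uploaded content matches a previously identified hash."""
    return hashlib.sha256(content).hexdigest() in BLOCKLIST
```

Note the limitation the excerpt points to: `is_blocked` can only reject material already on the list; anything new, however harmful, passes the check.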
From Facebook’s standpoint, moving its content moderation to users’ own devices will allow it to continue enforcing its acceptable speech regulations even as user content is increasingly encrypted and takes the form of user-to-user private communications rather than public posts.
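The on-device moderation described here is a form of client-side scanning: the check runs on the sender's device, before encryption, so it still works even though the server never sees the plaintext. Facebook has not published such an API; the sketch below is a generic illustration under that assumption, with `send_message`, `encrypt`, and the blocklist all hypothetical names.

```python
import hashlib
from typing import Callable, Optional

# Hypothetical on-device blocklist, synced from the platform.
ON_DEVICE_BLOCKLIST = {hashlib.sha256(b"disallowed").hexdigest()}

def send_message(plaintext: bytes,
                 encrypt: Callable[[bytes], bytes]) -> Optional[bytes]:
    # Moderation happens client-side, before the content is encrypted.
    if hashlib.sha256(plaintext).hexdigest() in ON_DEVICE_BLOCKLIST:
        return None  # refuse to send flagged content
    return encrypt(plaintext)  # only unflagged content reaches the wire
```

The design point the excerpt makes is visible in the control flow: because the check precedes `encrypt`, the platform's speech rules can be enforced without weakening the encryption itself.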
On Thursday, Facebook responded to U.K. regulation and announced a permanent ban on all of the U.K.’s most prominent far-right groups.