REPORT: Facebook and WeChat, with more than 2 billion installs between them, are shipping with serious security vulnerabilities onboard.
First Amendment confusion has muddied the national dialogue about the role of social media and given rise to a series of imprudent proposals.
Facebook and Twitter both said they removed several fake accounts tied to a state-backed campaign to spread disinformation about pro-democracy protesters in Hong Kong.
The exposé on the inhumane working conditions of Facebook’s content moderators sheds light once again on the depravity of the human condition.
As ever, policy change at Facebook comes when pressure is applied, and that pressure is certainly being applied now. And so, Facebook explained, “someone who shares a link to a statement from a terrorist group with no context will now be immediately blocked from using Live for a set period of time.”
In specific narrow domains like terrorism, companies have adopted blacklists of previously identified material, but in terms of proactively preventing new illegal and harmful content from being posted in the first place, the companies have largely struggled.
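Such blacklists typically work by comparing a fingerprint of uploaded content against a database of hashes of previously identified material. A minimal sketch of the idea (the blocklist entries and function names here are illustrative, and real systems use shared industry hash databases and perceptual hashes that survive re-encoding rather than simple exact hashing):

```python
import hashlib

# Illustrative blocklist of SHA-256 digests of previously identified material.
# Exact cryptographic hashing is the simplest possible form of such a list.
BLOCKED_HASHES = {
    # SHA-256 digest of the bytes b"test", standing in for known bad content
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def is_blocked(content: bytes) -> bool:
    """Return True if the content's SHA-256 digest is on the blocklist."""
    return hashlib.sha256(content).hexdigest() in BLOCKED_HASHES

print(is_blocked(b"test"))        # True: digest matches a blocklist entry
print(is_blocked(b"new upload"))  # False: previously unseen content passes
```

The second call illustrates the limitation the companies have struggled with: a hash list can only catch material that has already been identified, so genuinely new illegal or harmful content sails through.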
From Facebook’s standpoint, moving its content moderation to users’ own devices will allow it to continue enforcing its acceptable speech regulations even as user content is increasingly encrypted and takes the form of user-to-user private communications rather than public posts.
On Thursday, Facebook responded to U.K. regulatory pressure by announcing a permanent ban on all of the U.K.’s most prominent far-right groups.
According to security firm UpGuard, app developers exposed Facebook users’ data on public servers. In one case, Mexico-based company Cultura Colectiva had stored 540 million records on Facebook users, totaling 146 gigabytes.
Facebook Chief Operating Officer Sheryl Sandberg announced that the social media giant is "exploring restrictions" for live videos after a gunman streamed a mass shooting inside a New Zealand mosque earlier this month.