As ever, policy change at Facebook comes when pressure is applied – that pressure is certainly now being applied. And so, from now on, Facebook explained, “someone who shares a link to a statement from a terrorist group with no context will now be immediately blocked from using Live for a set period of time.”
In specific narrow domains like terrorism, companies have adopted blacklists of previously identified material, but when it comes to proactively preventing new illegal and harmful content from being posted in the first place, companies have largely struggled.
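To make that distinction concrete, here is a minimal sketch of how a blacklist of previously identified material can work. The hash values and function names are illustrative only; production systems such as the GIFCT hash-sharing database rely on perceptual hashes that tolerate small edits, not the exact cryptographic digest used here.

```python
import hashlib

# Hypothetical blacklist of digests of previously identified terrorist material.
# A plain SHA-256 digest is used purely for illustration; it only catches exact
# re-uploads of files that have already been seen and hashed.
KNOWN_BAD_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",  # placeholder
}

def is_previously_identified(content: bytes) -> bool:
    """Return True if the uploaded content exactly matches the blacklist."""
    digest = hashlib.sha256(content).hexdigest()
    return digest in KNOWN_BAD_HASHES

def handle_upload(content: bytes) -> str:
    # Blacklist matching stops re-uploads of known material, but it says
    # nothing about content that has never been seen before, which is
    # precisely where the excerpt says companies struggle.
    if is_previously_identified(content):
        return "blocked"
    return "published"
```

The design choice is the important point: this approach is reactive by construction, because something can only enter the blacklist after it has already been identified at least once.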
From Facebook’s standpoint, moving its content moderation to users’ own devices will allow it to continue enforcing its acceptable speech regulations even as user content is increasingly encrypted and takes the form of user-to-user private communications rather than public posts.
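Facebook has not published an implementation, so the following is only a hedged sketch of the architecture that excerpt describes: a hypothetical policy check that runs on the user's device before the message is encrypted and sent, with `policy_matches`, `send_message`, and `report_violation` all invented names for illustration.

```python
from dataclasses import dataclass

@dataclass
class OutgoingMessage:
    sender: str
    recipient: str
    plaintext: str

def policy_matches(text: str) -> bool:
    """Hypothetical on-device classifier; a real deployment would ship a
    trained model with the app rather than this keyword check."""
    banned_phrases = {"example banned phrase"}
    return any(phrase in text.lower() for phrase in banned_phrases)

def report_violation(msg: OutgoingMessage) -> None:
    # Placeholder for whatever enforcement action the platform chooses,
    # e.g. withholding the message or flagging the account.
    print(f"message from {msg.sender} withheld by on-device policy check")

def send_message(msg: OutgoingMessage, encrypt, transport) -> None:
    # The policy check runs on the sender's device, before encryption,
    # so enforcement does not require the platform to read the
    # ciphertext that travels over the wire.
    if policy_matches(msg.plaintext):
        report_violation(msg)
        return
    transport(encrypt(msg.plaintext))
```

The point of the sketch is the ordering: because the check happens before encryption, the platform can keep enforcing its speech rules even when it can no longer inspect messages in transit.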
The public and private sectors are turning to AI and automation in their fight against cyber-attacks. However, this strategy must not come at people's expense.
A recent survey indicated that only 16% of the business leaders polled are getting significant value from advanced artificial intelligence (AI) in their companies.
Google’s refusal to develop AI capabilities for the U.S. military is a slap in the face to the heroes of the greatest generation.
Getting ready for a new world: There are two critical issues that will affect security over the next 20 years: artificial intelligence and bio-engineering.
Social media has a terrorism problem. Is using AI to counter it the best plan? Or would a human be better placed to decide what content is problematic?
IBM's latest weapon in the war against cybercrime is Watson, its star pupil, famous for beating the world's best human Jeopardy champions and taking home a $1 million prize from the popular TV game show in 2011.