U.K. Regulates Facebook, Google And Twitter, Saying 'Clean Up Your Acts, Enough Is Enough'
By Zak Doffman
On Monday, the U.K. Government published proposals for “tough new measures to ensure the U.K. is the safest place in the world to be online,” claiming these to be the world’s “first online safety laws.” An independent regulator will be put in place with the “powers to take effective enforcement action against companies that have breached their statutory duty of care.” Such enforcement will include “substantial fines” as well as, potentially, the powers “to disrupt the business activities of a non-compliant company… to impose liability on individual members of senior management… and to block non-compliant services.”
Substantial fines, business restrictions, jailing execs – and the U.K. is not a lone voice. Just a few hours before the U.K. proposals were published, Facebook was branded “morally bankrupt pathological liars” by New Zealand’s Privacy Commissioner in the wake of its handling of last month’s attacks in Christchurch. And a week ago, Australia’s Government introduced legislation to fine or imprison social media execs who fail to take seriously “the spread of abhorrent violent material online” which “weaponizes” their platforms.
The race to regulate is now on. “In the first online safety laws of their kind,” the U.K. Government said on issuing the proposals, “social media companies and tech firms will be legally required to protect their users and face tough penalties if they do not comply.”
The inevitable is here
It was in light of Christchurch, and the international response that followed, that Facebook belatedly banned white nationalism from its platforms. Coincidentally, the change of policy came hot on the heels of Australia’s threats to jail execs, including the suggestion that it might even pursue execs resident overseas. Until now, the social media giants have ridden out the storm of protests and criticism following increased scrutiny of the material ‘published’ by users on their sites. The irony is that most of those protests were aired on social media itself. The more the platforms are used, the more data they collect. And the more data they collect, the more money they make. This is not complicated.
The industry has acknowledged the issue: last month Mark Zuckerberg penned an op-ed in the Washington Post to argue that social media companies cannot and should not be solely responsible for policing what can and cannot be published and shared. All well and good, but how does one strike the balance between what the U.S. or U.K. Governments might say and what others might say? When the Singaporean Government came out with legislation to police content, there were immediate complaints that this was an impediment to free speech and could not be allowed. You can see the dilemma.
A spokesperson for Facebook in Asia responded to the moves by Singapore’s Government, expressing concern over “aspects of the law that grant broad powers to the Singapore executive branch to compel us to remove content they deem to be false and proactively push a government notification to users,” adding: “Giving people a place to express themselves freely and safely is important to us and we have a responsibility to handle any government request to remove alleged misinformation carefully and thoughtfully.”
U.K. Home Secretary Sajid Javid was direct in his explanation of why regulation needs to carry such weight: “The tech giants and social media companies have a moral duty to protect the young people they profit from,” he wrote. “Despite our repeated calls to action, harmful and illegal content – including child abuse and terrorism – is still too readily available online. That is why we are forcing these firms to clean up their act once and for all. I made it my mission to protect our young people – and we are now delivering on that promise.”
In essence, you’ve been warned and warned and warned. Enough is now enough.
There is a conflict inherent at the heart of social media, and the race to regulate pushes it front and center. The companies’ business models rely on user data: the more content is published, shared and viewed, the more their metrics improve, and the more accurately they can target their users for advertisers. Want to target teenagers who self-harm? No problem. Or how about people living in Western Europe who hate Jews? Yes, no problem, our algorithm can do that. It’s not expressed quite so bluntly, and it’s automated, but you get the point.
The shame of this is that the platforms are putting at risk some of the foundations of free speech on which they rely, by over-exploiting them. We choose to communicate over social media, but the level of that sharing has become unlimited. There are no borders or boundaries. There are very few impediments. And underpinning it all is the exemption from responsibility that social media has thus far enjoyed for the content published by its users. It’s the end of that exemption – with the caveat that this is about removing content rather than preventing it – that’s at the heart of the regulatory moves.
Will this work?
Announcing the proposed legislation, Prime Minister Theresa May wrote: “For too long these companies have not done enough to protect users, especially children and young people, from harmful content. That is not good enough, and it is time to do things differently. We have listened to campaigners and parents, and are putting a legal duty of care on internet companies to keep people safe. Online companies must start taking responsibility for their platforms and help restore public trust in this technology.”
The U.K. Government has highlighted terrorism and child safety as the cornerstones of its proposed legislation: “Reflecting the threat to national security or the physical safety of children,” their announcement explained, “the government will have the power to direct the regulator in relation to codes of practice on terrorist activity or child sexual exploitation and abuse.”
In his op-ed last month, Mark Zuckerberg called for “a more active role for governments and regulators,” arguing that “by updating the rules for the Internet, we can preserve what’s best about it — the freedom for people to express themselves and for entrepreneurs to build new things — while also protecting society from broader harms.”
And that is what he now has. It’s coming thick and fast. The two questions that remain unanswered, though, are whether any government bar the United States can actually go head to head with Big Tech to change behaviors and win. And whether governments around the world can strike the right balance between freedom of speech and online safety. Assuming, in the main, they want to. Social media has brought this on itself. It is inevitable, but that was not always the case.
There are no easy answers. The U.K. Government’s 12-week consultation period starts now. But it will take a lot longer than that to work all this through.