The UK government will on Thursday unveil a long-awaited set of measures to tackle a wide range of online harms, from bullying and fraud to child abuse, in an ambitious and contentious attempt to force Big Tech companies to police their networks.
Executives at the world’s biggest technology companies, such as Facebook owner Meta and Google parent Alphabet, could face jail sentences if they fail to comply with some elements of the new regime marshalled by Ofcom.
The media and telecoms regulator will also have the power to audit the algorithms that govern what consumers see in their search results and social media feeds, after hearing evidence from Facebook whistleblower Frances Haugen.
“We don’t give it a second’s thought when we buckle our seat belts to protect ourselves when driving. Given all the risks online, it’s only sensible we ensure similar basic protections for the digital age,” said Nadine Dorries, culture secretary.
As the Conservative government attempts to balance freedom of expression with some of the toughest restrictions in the world on online abuse, it risks frustrating both the tech industry and safety campaigners as well as some Tory MPs.
Last month, former Brexit minister Lord David Frost was among a handful of parliamentarians from the libertarian wing of the governing party who raised concerns over whether the right balance had been struck between free speech and protecting those most vulnerable to the dangers of the internet.
If the Online Safety Bill is approved by parliament, it could become law later this year. But the details of one of its most contentious elements — a requirement for the biggest internet platforms to police so-called “legal but harmful” abuses such as racism or bullying — will only be set out later through secondary legislation that requires less scrutiny from MPs than the original bill.
In an apparent concession to fierce opposition from tech companies to the government’s earlier proposals, Ofcom will not be able to mandate that “proactive” tools for content moderation are used on private messaging or legal content.
Large tech companies will have to carry out risk assessments on a range of issues specified as “priority legal harms”, then set out in their terms and conditions whether such content is allowed. Failing to remove content that companies say they will ban could ultimately lead to fines of up to 10 per cent of each company’s global annual turnover.
Changes since the original draft include new duties to prevent online fraud perpetrated through paid adverts and the criminalisation of “cyber flashing”, whereby people expose themselves to strangers online.
Calls to ban anonymous users from major internet platforms altogether, which escalated in the wake of racist abuse of England footballers last summer and the murder of MP David Amess in October, were ultimately rejected by the government. Instead, social media users will be given the ability to block every account that has not verified its offline identity.
Julian Knight, chair of the digital, culture, media and sport select committee, noted that the government had “listened” to concerns surrounding issues such as cyber flashing.
“We are particularly pleased that parliament and not tech companies will play the key and deciding role on what constitutes legal but harmful content,” he added.
However, the Open Rights Group, an online civil liberties campaign group, said the plan was tantamount to an “Orwellian censorship machine”.
Labour questioned the length of time taken to bring the bill to parliament — the legislation was originally proposed four years ago and a draft first published last May — adding that in the meantime, disinformation and other harms had been allowed to run rampant online.
“While we support the principles of the bill that is finally being published, delay up to this point has come with significant cost,” said Lucy Powell, the shadow culture secretary. “The big tech companies will not regulate themselves. The government must ensure the bill can tackle disinformation online.”