Social media platforms face huge fines under UK’s new digital safety laws

Social media platforms face significant fines under the UK’s new digital safety laws if they fail to implement robust measures to tackle illegal content, including fraud, terrorism and child sexual abuse material.

Tech companies must put in place safeguards against illegal harms such as encouraging suicide, extreme pornography and the sale of drugs.

From Monday, every site and app within the scope of the Online Safety Act, which covers more than 100,000 services from Facebook, Google and X to Reddit and OnlyFans, will be required to take steps to stop such content appearing or to take it down if it goes online.

The technology secretary, Peter Kyle, said the illegal content crackdown was “just the beginning”.

“In recent years, tech companies have treated safety as an afterthought. That changes today,” he said.

Companies that breach the act face fines of up to £18m or 10% of worldwide revenue, whichever is greater, which for firms such as Facebook’s owner, Meta, or Google would equate to billions of pounds. In extreme cases, services can also be blocked in the UK.

Ofcom, the UK watchdog overseeing the act, has published codes of practice for tech platforms to follow in order to avoid breaching the legislation. The act lists 130 “priority offences”, categories of illegal content that tech companies must tackle as a priority by ensuring their moderation systems are geared to deal with such material.

The codes of practice include: hiding children’s online profiles and locations by default from users they do not know; introducing measures that allow women to block and mute users who are harassing or stalking them; establishing a reporting channel for organisations that can help deal with online fraud cases; and using “hash matching” technology, which identifies known illegal images, to prevent the sharing of terrorist content and non-consensual intimate images, or “revenge porn”.
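For readers unfamiliar with the technique, hash matching works by comparing a digital fingerprint of an uploaded file against a database of fingerprints of known illegal images. The sketch below is a simplified illustration of the idea, not any platform’s actual system: it uses an exact cryptographic hash (SHA-256), whereas production tools such as Microsoft’s PhotoDNA use perceptual hashes that survive resizing and re-encoding, and the function names and hash list here are hypothetical.

```python
import hashlib

# Hypothetical set of hashes of known illegal images, of the kind a platform
# might receive from a body such as the Internet Watch Foundation.
# The value below is a dummy placeholder, not a real entry.
KNOWN_BAD_HASHES = {
    "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
}

def is_known_illegal(image_bytes: bytes) -> bool:
    """Return True if the file's SHA-256 hash matches a known-bad entry."""
    digest = hashlib.sha256(image_bytes).hexdigest()
    return digest in KNOWN_BAD_HASHES

def handle_upload(image_bytes: bytes) -> str:
    # Check the upload against the hash list before it is published,
    # blocking a match rather than removing it after the fact.
    if is_known_illegal(image_bytes):
        return "blocked"
    return "published"
```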

Last year Ofcom warned that tech companies had a “job of work” to do to comply with the act and had yet to introduce all the measures needed to protect children and adults from harmful content. Speaking to the Guardian in December, Jon Higham, Ofcom’s online safety policy director, said many of the safety measures recommended by the watchdog were not being implemented by the largest and riskiest platforms.

“We don’t think any of them are doing all of the measures,” he said.

Mark Jones, a partner at the law firm Payne Hicks Beach, said the new illegal harms measures marked a “considerable sea change” in dealing with illegal or harmful content because they required tech companies to be proactive in identifying and removing dangerous material.

The Online Safety Act has been the subject of criticism by the US vice-president, JD Vance, who said last month that free speech in the UK was “in retreat”. However, Kyle has insisted the act will not be a bargaining chip in any negotiations with the Trump administration over the threat of tariffs being imposed on British exports to the US.

“Our online safety standards are not up for negotiation. They are on statute and they will remain,” Kyle told LBC radio last week. The British government’s view, which Keir Starmer reiterated in Washington last month, is that the act is about tackling criminality, not censoring debate.
