
On 25 July, Ofcom will begin to enforce a major part of the Online Safety Act, requiring digital service providers operating in the UK to conduct risk assessments and prevent children from viewing any “harmful” or adult content on their platforms. Those that fail to meet the regulator’s standards risk hefty fines of up to £18m or 10% of global turnover, whichever is greater. In the most egregious cases, executives or managers may face personal liability.
The Online Safety Act, which took years to wind through parliament, is steeped in controversy. Advocates say it will improve safety for children online, but critics have warned that its scope is too broad, targeting not only illegal and pornographic content but also so-called priority offences such as “foreign interference” and the supply of psychoactive substances.
Since March, Ofcom has wielded its enforcement powers against online services hosting illegal content. A month later, it finalised its Children’s Codes, which require digital services to amend their algorithms to focus on harm reduction. Now, age checks will be mandatory on any platform hosting adult content.
For digital service providers, ensuring compliance is proving complicated. While firms must satisfy the regulator’s demands or face penalties, they are keen to protect themselves from possible overreach – but are increasingly uncertain of how to do so.
The next stage of the Online Safety Act
This week, digital businesses introduced age checks in the UK for websites and social media that might host adult or harmful content, whether or not that is their primary purpose. Platforms such as Reddit are requiring users to either upload an official ID or a photograph of their face, which is used to estimate their age.
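To make the mechanics concrete, here is a minimal sketch of how a platform might gate adult content behind such a check. It is an illustrative assumption, not Reddit’s implementation or any vendor’s API; the AgeCheckResult type, the confidence threshold and the isVerifiedAdult helper are all hypothetical.

```typescript
// Illustrative sketch only: gating adult content behind an age-assurance check.
// All names and thresholds here are assumptions, not a real platform's code.

type AgeCheckMethod = "official-id" | "facial-estimation";

interface AgeCheckResult {
  method: AgeCheckMethod;
  estimatedAge: number; // age asserted by the ID or estimated from the selfie
  confidence: number;   // provider-reported confidence, 0..1
  checkedAt: Date;
}

const ADULT_AGE = 18;
const MIN_FACIAL_CONFIDENCE = 0.9; // hypothetical: stricter bar for facial estimation

function isVerifiedAdult(result: AgeCheckResult): boolean {
  if (result.estimatedAge < ADULT_AGE) return false;
  // Facial age estimation is probabilistic, so require high confidence;
  // an official ID is assumed to have been validated upstream.
  return result.method === "official-id" || result.confidence >= MIN_FACIAL_CONFIDENCE;
}

// Serve age-restricted content only when a user has passed a check.
function canServeAdultContent(check: AgeCheckResult | null): boolean {
  return check !== null && isVerifiedAdult(check);
}

// Example: a facial estimate of 24 at 0.95 confidence passes this gate.
const example: AgeCheckResult = {
  method: "facial-estimation",
  estimatedAge: 24,
  confidence: 0.95,
  checkedAt: new Date(),
};
console.log(canServeAdultContent(example)); // true
```

In practice the check itself would usually be run by a third-party age-assurance provider; the point of the sketch is simply that the platform ends up handling, and often storing, sensitive verification data for each user.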
The compliance burden for businesses will vary, says Jonathan Wright, a partner at Hunton Andrews Kurth, a legal firm. Large platforms designated as ‘Category 1’ services face stringent reporting requirements and must give users greater control over the content they see and engage with. Smaller service providers are spared these obligations, but must still conduct risk assessments, implement proportionate safety measures and establish processes to address user complaints and take down content.
For any services that fall under the scope of the Online Safety Act, the compliance requirements are potentially onerous, particularly for platforms that rely on algorithmically driven content feeds or host user-generated content.
Compliance confusion and shifting goalposts
Kevin Quirk, company director at AI Bridge Solutions, an AI advisory and development practice, says the Online Safety Act has presented challenges for his firm, which works with digital platforms around the world.
Preparing for the act is complicated and costly, he explains. Any digital platform accessible in the UK must add extra layers of moderation, conduct risk assessments and ensure its services are auditable. Some digital service providers may even need to establish a designated safety officer role for UK users.
Quirk adds that the cost of compliance has been burdensome for his startup and SME clients. “We’re seeing delays in deployment timelines while legal teams reassess features,” he says. “Others have asked us to rebuild or modify their platforms to better align with safety-by-design principles. In some cases, UK-based businesses might consider launching products under a non-UK domain to sidestep the uncertainty the legislation introduces.”
Although he agrees the act is well-intentioned, Quirk believes the UK must offer further assistance to SMEs and startups or risk signalling that the country is no longer an easy place to innovate in AI and web platform design.
“The act is not just a concern for big tech,” adds Wright. “Many mid-size and smaller businesses will need to navigate overlapping regimes for data protection, age verification and content moderation. The risks are potentially reputational, operational and legal.”
And additional complexity may be still to come. As organisations have worked to understand whether they are in scope – and if so, how to meet compliance standards – Ofcom has launched a further consultation on additional safety measures. According to Laura Harper, a partner at Lewis Silkin, a legal firm, more compliance obligations are likely on the horizon.
The problem with digital IDs
Whether the act increases online safety remains to be seen. What’s almost certain, however, is that it will be a boon for digital ID companies, as businesses around the world seek to keep their services accessible to users in the UK.
Rather than uploading official IDs, privacy-conscious consumers will most likely opt to use VPNs, which allow internet users to circumvent location-based checks. But for those who do not, websites and social media companies may become unwilling custodians of very sensitive data.
Jason Nurse, a cybersecurity expert at the University of Kent, commends “the act’s clear focus on protecting children and other vulnerable individuals” but is very concerned about the use of digital ID services to run age checks for adult content.
“These sites will be entrusted with storing large amounts of personally identifiable information from potentially vast segments of the population. How can we be confident this data won’t be misused?” he asks. “Such centralised databases create attractive targets for attackers seeking information for blackmail, extortion or other malicious purposes, particularly if individuals wish to keep their access to certain content or websites private.”
From passive to active content management
Despite these challenges, businesses must be seen to be taking action. The legislation has shifted the dial from passive hosting to active accountability, says Euan Duncan, a partner in MFMac’s media, manufacturing and tech team. It leaves no room for a wait-and-see approach to content risk.
“Businesses must assess and actively manage risks linked to user-generated content,” he says. “This doesn’t mean moderating every post, but it does require clear, documented systems to detect, report and respond to harmful content.”
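As a rough illustration of what a “clear, documented system to detect, report and respond” might look like under the hood, the sketch below models a moderation ticket whose every state change is logged for audit. The ModerationTicket type, its states and the helper functions are assumptions made for the example, not drawn from Ofcom’s codes or any real platform.

```typescript
// Illustrative sketch: an auditable moderation workflow in which every report of
// potentially harmful content moves through recorded states, so a platform can
// evidence when an item was detected and how it responded.

type ModerationState = "detected" | "under-review" | "actioned" | "dismissed";

interface ModerationTicket {
  contentId: string;
  reportedBy: "automated-filter" | "user-report";
  state: ModerationState;
  history: { state: ModerationState; at: Date; note?: string }[];
}

function openTicket(contentId: string, source: ModerationTicket["reportedBy"]): ModerationTicket {
  return {
    contentId,
    reportedBy: source,
    state: "detected",
    history: [{ state: "detected", at: new Date() }],
  };
}

function transition(ticket: ModerationTicket, next: ModerationState, note?: string): void {
  // Append rather than overwrite, so the audit trail of decisions survives.
  ticket.state = next;
  ticket.history.push({ state: next, at: new Date(), note });
}

// Example: a user report is reviewed and the content is removed.
const ticket = openTicket("post-1234", "user-report");
transition(ticket, "under-review");
transition(ticket, "actioned", "removed under harmful-content policy");
console.log(ticket.history.map(h => h.state)); // ["detected", "under-review", "actioned"]
```

The specifics will differ from service to service, but a documented trail of detection, review and response is the kind of evidence firms are likely to need when demonstrating compliance.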
Best efforts, adds Nurse, will not cut it. Firms must be able to demonstrate how their protocols increase the safety of their digital services. “This could be a significant and costly undertaking,” he says. “They will have to make clear, tangible steps towards implementing appropriate safeguards that are based on the harms people may face on their platforms.”
Given the potential for ID-linked data breaches, encroachment on personal privacy and stifled innovation in the UK, the government should consider whether the cure for online harms could cause harmful second-order effects.
