On March 25, 2026, the UK Information Commissioner’s Office and Ofcom published a joint statement that most people in Britain never read. The statement formalized how platforms subject to the Online Safety Act must approach age assurance — which methods count, which don’t, and what “proportionate” and “highly effective” mean in practice when regulators start writing fines.
The statement is, on its face, a technical regulatory document. In practice, it is the official architecture specification for a nationwide identity-verification layer sitting on top of the British internet. And once that infrastructure exists — once millions of daily ID checks are routine, once verification providers are profitable, once government access to verification logs is normalized — it will be extraordinarily difficult to dismantle, regardless of which government is in power, regardless of what the original intent was.
That’s the problem with building surveillance infrastructure for good reasons. It doesn’t stay built for good reasons.
What the Law Actually Requires
The Online Safety Act 2023 requires platforms “likely to be accessed by children” to implement age assurance measures. For services hosting adult content or content categorized as high-risk, the requirement escalates: platforms must use “highly effective” age checks, not just proportionate ones.
What counts as highly effective? Per Ofcom guidance: ID document scanning (passport, driver's license), credit card verification, mobile network operator checks using billing data, or facial age estimation. What does not count: clicking a checkbox that says "I confirm I am over 18." The era of the honor system on the British internet is formally over.
The rollout began in mid-2025. By early 2026, Ofcom had already issued fines exceeding £1 million against non-compliant platforms. The volume of daily checks has reached the millions. The infrastructure is not theoretical. It is running.
What Is Actually Being Built
Here is what a “highly effective” age check looks like in practice, from the user’s perspective. You navigate to a platform. You are prompted to verify your age. You upload a photo of your passport or driver’s license, or you enter your credit card details, or your mobile operator confirms your identity via your billing account. A verification provider processes your documents, confirms you are above the relevant age threshold, and allows you through.
From the infrastructure’s perspective, something different is happening. A private company now has a record that a specific, identified individual — linked to a passport number, a credit card, or a phone account — visited a specific website at a specific time. That record exists on the verification provider’s servers. The verification provider has privacy policies. It has security measures of varying quality. It has investors. It operates in a legal jurisdiction where governments can demand data. And it has, as any profitable company does, financial incentives to retain data rather than delete it.
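The record described above can be made concrete. Here is a minimal sketch in Python of what a verification provider ends up holding after a single check. The schema is hypothetical, invented for illustration; it is not drawn from any real provider's data model.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Illustrative only: field names are hypothetical, not from any real provider.
@dataclass(frozen=True)
class VerificationRecord:
    document_number: str   # passport or driving-licence number the user uploaded
    site_visited: str      # the service that requested the check
    checked_at: datetime   # when the check happened
    outcome: bool          # whether the user cleared the age threshold

record = VerificationRecord(
    document_number="<redacted>",
    site_visited="example-adult-site.test",
    checked_at=datetime.now(timezone.utc),
    outcome=True,
)

# The privacy problem in one row: identity, destination, and timestamp
# stored together on a private company's servers.
print(record.site_visited, record.outcome)
```

Every field here is individually defensible for the stated purpose; it is their combination in one retained row that constitutes the browsing record the paragraph describes.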
The Open Rights Group has been making this argument since the Online Safety Bill was in draft: “This is dangerous age verification.” The danger is not that age verification doesn’t work for its stated purpose. The danger is that it works by creating exactly the kind of centralized identity-linked browsing record that authoritarian governments dream of and democratic governments promise will never exist. The UK is building it voluntarily, with good intentions, one ID scan at a time.
The State of Surveillance project put it more directly in a recent analysis: age verification “is building a mass surveillance system — one ID scan at a time.” That characterization has been dismissed as alarmist by government officials. It is worth asking what, precisely, the dismissal is based on. What technical or legal mechanism prevents the infrastructure being built today from being used tomorrow for purposes beyond its original scope?
The Function Creep Question
Britain has a relevant case study for this dynamic. CCTV cameras were introduced in the 1980s and 1990s in specific high-crime areas, justified as a targeted crime-reduction tool. Today the United Kingdom has one of the highest densities of CCTV cameras of any country in the world, covering town centers, transport networks, public spaces, and private commercial areas as a matter of routine. The cameras did not expand because of a specific decision to build a surveillance state. They expanded because the infrastructure existed, the cost of adding more fell over time, and each individual expansion seemed incremental and justified.
Digital identity verification follows the same logic. The infrastructure being built for age verification on adult content sites is the same infrastructure, technically and legally, that could verify identity before accessing political content, before participating in online forums, before purchasing certain products, or before accessing services the government of the day decides warrant identity confirmation. The technical architecture does not have a child-protection mode and a surveillance mode. It has one mode: identity-linked access.
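The "one mode" point is visible in the shape of the interface itself. In this hypothetical sketch (all names invented for illustration), notice that nothing in the gate's signature constrains what category of content it fronts; extending it to a new category is a one-line change, not a redesign.

```python
from typing import Callable

# Hypothetical gate: the function that fronts adult content is, structurally,
# the same function that could front political forums or purchases.
def identity_gate(user_id: str, resource: str,
                  verifier: Callable[[str], bool]) -> bool:
    """Allow access only if the verifier approves this identity."""
    # Note: 'resource' never constrains the mechanism, only the policy
    # decision of which resources to put behind the gate.
    return verifier(user_id)

def over_18(user_id: str) -> bool:
    return True  # stand-in for a real provider call

# Today's use case and a hypothetical tomorrow's use the identical mechanism:
print(identity_gate("user-42", "adult-content", over_18))
print(identity_gate("user-42", "political-forum", over_18))
```

The technical work of function creep is already done at build time; only the list of gated resources changes afterward.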
The joint ICO/Ofcom statement acknowledges this risk in a carefully hedged way, calling for age assurance methods that "minimise data collection" and urging verification providers not to retain data beyond immediate necessity. These are recommendations. They are not binding technical standards with audit requirements and meaningful penalties for non-compliance. The statement was published the same week Ofcom was fining platforms millions of pounds for failing to verify ages at all. The asymmetry of enforcement pressure is notable.
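"Minimise data collection" has a concrete technical meaning. One well-known design direction is a single-use age token: the provider attests "over 18" without learning where the attestation is spent, and the site learns nothing but the attestation. The sketch below illustrates the direction with a simple HMAC token; it is an assumption-laden toy, not what any UK provider is known to deploy.

```python
import hmac
import hashlib
import secrets

PROVIDER_KEY = secrets.token_bytes(32)  # held only by the verification provider

def issue_age_token() -> tuple[bytes, bytes]:
    """Provider side: after checking an ID once, issue a random single-use
    token plus a MAC over it. No site name is involved at issuance."""
    token = secrets.token_bytes(16)
    tag = hmac.new(PROVIDER_KEY, token, hashlib.sha256).digest()
    return token, tag

def provider_validates(token: bytes, tag: bytes) -> bool:
    """Provider side: check the MAC. A real deployment would also enforce
    single use, e.g. via a spent-token set."""
    expected = hmac.new(PROVIDER_KEY, token, hashlib.sha256).digest()
    return hmac.compare_digest(tag, expected)

def site_redeems(token: bytes, tag: bytes) -> bool:
    """Site side: learns only 'over 18: yes/no', never the user's identity."""
    return provider_validates(token, tag)

token, tag = issue_age_token()
print(site_redeems(token, tag))  # attestation succeeds, no browsing record created
```

Even this toy has a known gap: because the provider validates redemptions, a colluding provider and site could correlate timestamps. Mature designs close that gap with blind signatures, so the provider cannot link a token it issued to the token being redeemed. The point is that data-minimising architectures exist; the statement recommends them without requiring them.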
The VPN Response and Its Limits
The clearest market signal that a policy has gone wrong is when the people subject to it route around it en masse. VPN usage in the United Kingdom has surged dramatically since age verification requirements began rolling out in mid-2025. British internet users are increasingly routing their traffic through servers in other jurisdictions, appearing to originate from Germany or the Netherlands or the United States, and accessing services as if the verification layer doesn't exist.
This is rational individual behavior. It is also, depending on the specific service and jurisdiction, potentially illegal, and it does nothing to address the structural problem. VPNs work until they don’t — until ISPs are required to block known VPN endpoints, until VPN providers are required to implement their own identity verification for UK customers, or until using a VPN itself becomes a basis for heightened scrutiny. None of those outcomes are hypothetical; all have been implemented or discussed in other contexts.
The population that most needs anonymous internet access — abuse survivors, whistleblowers, journalists, people questioning their identity or sexuality in environments where discovery carries real risk — is not the population most capable of navigating VPN configuration and the ongoing legal ambiguity around circumvention. The people who can most easily adapt to the verification regime are the people least threatened by it. That’s the wrong distribution of protection.
The International Template Problem
The UK does not exist in a regulatory vacuum. The Online Safety Act is being watched closely by governments across the democratic world as a model for online safety legislation. Australia’s Online Safety Act, passed in 2021 and updated since, follows similar principles. The EU’s Digital Services Act and its age verification provisions have borrowed conceptually from the UK framework. US state-level legislation in Texas, Louisiana, and several other states has implemented or is implementing mandatory age verification for adult content — with similar infrastructure implications.
Whatever verification architecture the UK builds at scale will be the template other governments examine, adapt, and implement. If Britain normalizes identity-linked internet access as the mechanism for age verification, that normalization exports. The specific technical choices made by Ofcom and British platforms over the next twelve months will shape what the internet looks like for a generation of users well beyond the UK’s borders.
That is not an argument against protecting children online. It is an argument for being extremely precise about which mechanisms are used, what technical and legal safeguards constrain them, and whether the infrastructure being built is genuinely limited to its stated purpose or is instead a general-purpose identity layer waiting for the next use case.
What Users Can Do — And What They Can’t
Practically speaking, if you are a UK internet user, your options are narrowing. VPNs remain available and widely used; the legal position on circumventing age verification via VPN is ambiguous and varies by service. Services hosted outside UK jurisdiction that don’t serve the UK market are not covered by Ofcom enforcement. The Tor network provides stronger anonymity than commercial VPNs but at a significant cost in speed and usability.
The more important action is political. The Online Safety Act's implementation is still being shaped: the secondary legislation, the specific codes of practice, the technical standards for verification providers. Civil society organizations including the Open Rights Group, the Electronic Frontier Foundation (which tracks these issues internationally), and Index on Censorship are engaged in those policy processes. The window to influence how verification infrastructure is designed, what data retention rules apply, and what government access requires is closing but not closed.
The infrastructure is being built. The question is whether it’s built with genuine privacy constraints or without them. That question is still being answered.
Resources for Navigating Digital Surveillance and Identity Requirements
- Privacy guides and tools for managing your digital footprint under age verification and identity-linked access regimes: MyPrivacy.blog
- Regulatory framework guides for the UK Online Safety Act, GDPR, DSA, and age verification compliance obligations: ComplianceHub.wiki
- Data breach and incident tracking including verification provider security incidents: Breached.company
For organizations navigating Online Safety Act compliance, age assurance vendor selection, and privacy-by-design implementation for UK-facing platforms, CISO Marketplace provides assessment, vCISO consulting, and privacy program services.



