Reclaim The Net Feed

US Bill Mandates On-Device Age Verification
reclaimthenet.org

A bill introduced by Representative Josh Gottheimer in the House on April 13 would require Apple, Google, and every other operating system vendor to verify the age of anyone setting up a new device in the United States. The legislation, H.R. 8250, travels under the friendlier name of the Parents Decide Act, and it is among the most aggressive surveillance mandates ever proposed for American consumer technology. We obtained a copy of the bill for you here.

The press releases describing it lead with children. The text describes something much larger. To confirm a child is under 18, the system has to identify everyone else, too, and the bill builds the infrastructure to do exactly that. This is child safety as a delivery mechanism for mass identification.

The pattern is familiar by now. A genuine harm gets named, a sympathetic victim gets centered, and the proposed solution reshapes the digital lives of three hundred million people who were not the problem. The Parents Decide Act follows that template with unusual precision. It takes the real suffering of real children and uses it to justify building a national identity layer underneath every device sold in the country, administered by two private companies, with the details to be filled in later.

The mandate sits in Section 2(a)(1), which obligates providers to “Require any user of the operating system to provide the date of birth of the user” both to set up an account and to use the device at all. Adults included. There is no carve-out for adult users, no opt-out for people who simply want to turn on a phone without first handing a date of birth to Apple or Google. The age check is the entry fee for owning a computer. What happens to that data afterward gets handed off to the Federal Trade Commission to sort out later. A federal law that mandates identification as a condition of using a general-purpose computing device would be something the United States has never had: a national ID requirement for turning on a device.

Gottheimer framed the proposal at a Ridgewood news conference on April 2, standing outside the local YMCA with a coalition of allies. “With each passing day, the internet is becoming more and more treacherous for our kids. We’re not just talking about social media anymore — we’re talking about artificial intelligence and platforms that are shaping how our kids think, feel, and act, often without any real guardrails,” he said.

His diagnosis of the current system is accurate enough. “Children are able to bypass age requirements by entering a different birthday and accessing apps without any real verification. Kids can bypass age requirements by simply typing in a different birthday. That’s it. That’s the system,” he said.

The remedy he proposes just happens to require building new surveillance plumbing underneath every device sold in the country, and routing that plumbing through two of the largest companies on earth. The solution is disproportionate to the problem, and disproportionate in a specific direction: less privacy and less anonymity for everyone.

Section 2(a)(3) directs operating system providers to “Develop a system to allow an app developer to access any information as is necessary” to verify a user’s age. Translated out of legislative prose, Apple and Google become age brokers for the entire American app ecosystem. A sketch of what that brokerage could look like follows below.
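As a thought experiment, the sketch below models that broker in miniature. Nothing here comes from H.R. 8250 itself; the class and method names are invented, since the bill leaves the actual interface to the FTC and the providers. The structural point is what matters: once the OS holds a verified birth date, answering any app’s age question, at any threshold, is trivial.

```python
# Hypothetical sketch only: none of these names appear in H.R. 8250.
from datetime import date


class OSAgeBroker:
    """Toy model of an OS-level age broker: the OS collects a birth
    date at device setup, then answers age queries from any app."""

    def __init__(self) -> None:
        self._birth_date: date | None = None

    def setup_device(self, birth_date: date) -> None:
        # Section 2(a)(1)'s logic: no birth date, no device.
        self._birth_date = birth_date

    def is_at_least(self, years: int) -> bool:
        # Section 2(a)(3)'s logic: any app developer can query this signal.
        if self._birth_date is None:
            raise PermissionError("device not set up: age unverified")
        today = date.today()
        age = today.year - self._birth_date.year - (
            (today.month, today.day)
            < (self._birth_date.month, self._birth_date.day)
        )
        return age >= years


# The same checkpoint gates anything; a caller only has to pick a number.
broker = OSAgeBroker()
broker.setup_device(date(1990, 5, 1))
print(broker.is_at_least(18))  # chat app, news app, anything: True
print(broker.is_at_least(21))  # a different threshold, same pipeline: True
```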
Every app that wants to check whether you are over 18, or over 13, or over 21, will be able to ping the operating system for an answer derived from the birth date you handed over at setup. The bill presents this as a convenience. It is a new data pipeline between the OS layer and every developer who plugs into it, and the bill spends remarkably little time explaining how that pipeline will be constrained.

Free speech implications travel through that same pipeline. Once the operating system knows your age with verified certainty, it can tell any app to deliver, restrict, or withhold content accordingly. The bill’s supporters describe this as parental control. The infrastructure it builds is a content control system, running at the OS level, with Apple and Google as the gatekeepers of who sees what.

The First Amendment has historically protected the right to read, watch, and speak without first presenting identification. This bill erodes that principle at its foundation. Once verified age becomes a standard signal flowing from the operating system to every app, the default assumption shifts. Users are no longer presumptively anonymous adults with full access to lawful content. They are identified subjects whose permissions are determined by the data Apple or Google holds about them.

An age-verification layer built to block AI chatbots from minors is also capable of blocking journalism a state deems too violent, political commentary an administration deems too inflammatory, reporting on drugs or protest tactics, or any other subject a future regulator decides requires age gating. The infrastructure is neutral about content. It cares only that the user has been identified. Every future fight over what Americans are allowed to see online will start from a position where the identification layer already exists, and the only remaining question is who qualifies for access. That is a profound change in how speech works, and the bill enacts it while pointing at children.

What the bill says about data protection is effectively a to-do list for the FTC. Section 2(d)(1)(B) tells the Commission it must eventually issue rules ensuring that birth dates are “collected in a secure manner to maintain the privacy of the user” and are “not stolen or breached.” Those are outcomes, not mechanisms. The legislation sets no retention limits, no minimization requirements, no restrictions on secondary uses, and no prohibition on linking age data to other identifiers Apple and Google already hold. It offers no guidance on how providers should verify the age of a parent or guardian beyond instructing the FTC to figure that out within 180 days of enactment. The entire architecture of the system is to be drawn up after the fact by regulators working under a safe-harbor provision that shields operating system providers from liability as long as they follow whatever rules eventually emerge. Congress is being asked to authorize a surveillance system it has not designed, whose operation it does not understand, and whose safeguards do not yet exist.

The Parents Decide Act solves the self-reported-birthday problem by demanding something verifiable, which in practice means a government ID, a credit card, a biometric scan, or some combination. Gottheimer has not specified which, and neither does the bill. Formally, the choice is left to the FTC; in practice, operating system providers will make it, and the incentives point toward whatever is cheapest to deploy at scale. Facial analysis is cheap. ID uploads are cheap.
What is expensive is building a verification system that does not also create a persistent, cross-referenced database of everyone who has ever activated a phone. The incentives run directly against user privacy, and the bill provides no meaningful counterweight.

The bill also deputizes a duopoly. Requiring “operating system providers” to perform nationwide age verification is a requirement only two companies can easily satisfy in the mobile space, and a handful more across desktop and console platforms. Smaller OS developers, open-source projects, Linux distributions, custom Android forks, and privacy-focused alternatives all face a compliance burden designed around the assumption that the provider is a trillion-dollar firm with legal staff and biometric-scanning partnerships already in place. The safe harbor in Section 2(b) protects providers who follow the rules, but following the rules requires infrastructure only the incumbents can build.

A law nominally aimed at tech companies entrenches the two tech companies most responsible for the status quo. Apple and Google become the mandatory identity checkpoints for every app developer in the country, which is a commercial position worth a great deal of money and a great deal of leverage. Any future competitor that wants to build a privacy-respecting operating system will discover the law has made that effectively illegal.

There is another change buried in the text. The definition of “operating system” in Section 2(g)(4) covers “software that supports the basic functions of a computer, mobile device, or any other general purpose computing device.” That language reaches well beyond phones and tablets. Laptops run operating systems. Desktop computers run operating systems. Gaming consoles, smart TVs, cars with infotainment software, and a growing catalog of ambient devices all qualify under a plain reading of the definition. The bill does not distinguish between the family iPad and the laptop a college student uses for coursework. Every device with an OS becomes a device that verifies age at setup, and by extension, a device that identifies its user at setup. The scope creep is built into the definitions.

Gottheimer cited cases of teenagers allegedly harmed by AI chatbots and by algorithmically promoted content about self-harm. What the bill does with those harms is use them as justification for an identity system that applies to every user. The template is consistent: a child is hurt, legislation is drafted, and the legislation reshapes the digital environment of everyone, child and adult, subject and bystander alike.

Less invasive alternatives exist and have existed for years. Device-level parental controls already ship with iOS and Android. Family Sharing and Google Family Link already let parents configure age-appropriate restrictions. App stores already allow per-app age ratings. None of these require every user in the country to prove their age to Apple or Google when turning on a phone. The bill skips past those options in favor of a mandate that treats universal age verification as the baseline condition of device ownership. Protecting children does not require building any of this. The bill’s authors chose to build it anyway, and the choice tells you what the bill is actually for.

Edmonton Police Turned Body Cameras Into Facial Recognition Surveillance Tools
reclaimthenet.org

Dozens of Edmonton Police Service (EPS) officers in Alberta spent December 2025 patrolling with body cameras that silently scanned every face within four meters, comparing captures against a watchlist of roughly 7,000 people. The cameras, manufactured by Axon Enterprise and powered by facial recognition from Corsight AI, ran automatically whenever an officer pressed record. No one being scanned was asked or told.

Body cameras were sold to the public as accountability tools that watch police on behalf of citizens. Edmonton’s pilot inverts that promise. The same cameras now watch citizens on behalf of police. EPS’s own privacy assessment acknowledges this, stating that “the continuous scanning of faces for comparison against a watchlist constitutes proactive surveillance.”

Newly obtained documents reveal that the privacy impact assessment EPS submitted to Alberta’s privacy watchdog contains troubling language around data sharing. The assessment says data shared with Axon will be anonymized “whenever possible,” but adds that “data required to aid in assessing the success or failures associated with the technology will be shared when / if required.” Gideon Christian, an associate professor of AI and law at the University of Calgary, called that phrasing dangerously vague. “‘Whenever possible’ is a very loose and ambiguous phrase,” he said. Kate Robertson, a senior research associate with the University of Toronto’s Citizen Lab, called this “likely the most high risk algorithmic surveillance program that I have observed to date in Canada.”

A system outage caused by a “critical fault” prevented matches for seven days, and EPS requested a three-week extension to collect enough data for a potential second phase involving real-time officer notifications. Whether that extension was approved remains unknown. EPS refused to answer questions.
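To be clear about what “continuous scanning” means in practice, here is a toy sketch of the general watchlist-matching technique. It is not Corsight’s algorithm, and the dimensions, threshold, and random “embeddings” are all illustrative: detected faces are reduced to embedding vectors, and every frame is compared against every watchlist entry, whether or not anyone matches.

```python
# Toy watchlist matching -- NOT Corsight's system; all numbers illustrative.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical gallery of ~7,000 enrolled faces, one embedding each,
# normalized to unit length so a dot product is a cosine similarity.
watchlist = rng.standard_normal((7000, 512))
watchlist /= np.linalg.norm(watchlist, axis=1, keepdims=True)


def scan_frame(face_embeddings: np.ndarray, threshold: float = 0.7) -> list[int]:
    """Return the watchlist indices matched by any face in the frame.
    Every face in range is compared against all 7,000 entries -- the
    comparison happens whether or not there is a match."""
    faces = face_embeddings / np.linalg.norm(face_embeddings, axis=1, keepdims=True)
    sims = faces @ watchlist.T  # (n_faces, 7000) cosine similarity scores
    return sorted(set(np.argwhere(sims > threshold)[:, 1].tolist()))


# A bystander (random embedding) produces no match but is scanned anyway;
# an enrolled face comes back as a hit.
print(scan_frame(rng.standard_normal((1, 512))))  # [] -- still scanned
print(scan_frame(watchlist[42:43]))               # [42]
```

The design point is that the bystander case is not an exception path: the system does exactly the same work on every face it sees, which is why EPS’s own assessment calls it proactive surveillance.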

FTC Settlement: Ad Agencies Agree to Stop “Brand Safety” Collusion to Defund Media Outlets
reclaimthenet.org

Three of the world’s biggest advertising conglomerates have agreed to stop colluding to defund media outlets whose politics they didn’t like. The Federal Trade Commission and Texas Attorney General Ken Paxton, joined by seven other states, filed a complaint and simultaneous settlement against Dentsu US, GroupM Worldwide (WPP’s media-buying arm), and Publicis on April 15, accusing them of running what amounts to a coordinated censorship operation through the advertising supply chain.

Starting in 2018, these agencies, which collectively control over $81 billion in ad-buying power, agreed to adopt identical “brand safety” standards that treated so-called “misinformation” as a category of content too dangerous for any advertiser to touch. They did this through two industry groups: the American Association of Advertising Agencies’ Advertiser Protection Bureau, and the World Federation of Advertisers’ Global Alliance for Responsible Media, better known as GARM. The result was a shared “Brand Safety Floor” that could starve publishers of revenue without any single company having to take public responsibility for the decision.

One ad agency executive described the arrangement’s origins by saying, “the major holding companies came together under the 4As and agreed that brand safety is so important, that we must combine efforts, become one voice, and stop sending potential mixed signals.” The 4As vice president put it even more bluntly: “When it comes to brand and consumer safety, media agencies have to put competition aside.” Put competition aside. That looks like an antitrust violation described as a virtue.

GARM operated under explicit secrecy. According to the complaint, GARM told the six largest global advertising holding companies that discussions about brand safety were governed by a principle: “The first rule of Fight Club is: You do not talk about Fight Club. The second rule of Fight Club is: You do not talk about Fight Club.” GARM leadership wanted “the agencies [to] speak as a single entity to describe how they’re tailoring plans and buys.” At a retrospective on GARM’s third anniversary, participants celebrated what they called “uncommon collaboration,” praising how the agencies came together to “collaborate not compete on safety.”

The word “safety” is a misnomer. What they were actually collaborating on was a system to cut off ad revenue from publishers whose content fell below their agreed-upon standard for acceptable speech. And who got to define what was acceptable? Organizations like NewsGuard, the Global Disinformation Index, Check My Ads, and Media Matters for America. The complaint describes these groups as having “sought to elevate concerns within the digital advertising industry about what they viewed as ‘misinformation,’ in order to deprive certain sites of the digital ad revenue they needed to survive.” The Global Disinformation Index was founded because its creators believed the 2016 US presidential election and the Brexit referendum were caused by media disinformation, a problem they decided could be solved by going after those media companies’ advertisers.
Check My Ads announced in 2022 that it was “launching the first effort to permanently block” conservative media figures like Charlie Kirk, Glenn Beck, and Steve Bannon “from the ad industry,” in an article titled “Here’s our plan to defund the insurrectionists.” Media Matters ran campaigns pressuring advertisers to pull spending from Fox News and later from Elon Musk’s X.

The chilling effect of this arrangement went well beyond the individual publishers who lost revenue. When the three largest ad-buying agencies in the country all agree to use the same criteria for excluding websites, the definition of “brand safe” becomes industry-wide orthodoxy. Publishers who might have survived one agency’s disapproval couldn’t survive all of them acting in concert. News outlets, commentators, and social media platforms were the primary targets. A House Judiciary Committee report found that GARM discussed putting center-right outlets, including Breitbart News, Daily Wire, and Fox News, on advertising exclusion lists.

An internal GARM communication, quoted in earlier FTC proceedings, captured the thinking. John Montgomery, then-executive vice president of Global Brand Safety, wrote to GARM leader Rob Rakowitz: “There is an interesting parallel here with Breitbart. Before Breitbart crossed the line and started spouting blatant misinformation, we had long discussions about whether we should include them on our exclusion lists. As much as we hated their ideology and bullshit, we couldn’t really justify blocking them for misguided opinion. We watched them very carefully and it didn’t take long for them to cross the line.”

FTC Chairman Andrew Ferguson framed the case in both antitrust and speech terms. “The ad agencies’ brand-safety conspiracy turned competition in the market for ad-buying services on its head,” he said. “The antitrust laws guarantee participation in a market free from conduct, such as economic boycotts, that distort the fundamental competitive pressures that promote lower prices, higher-quality products, and increased innovation.” Ferguson added that the collusion “deprived advertisers of the benefits of differentiated brand-safety standards that could be tailored to their unique advertising inventory.” He went further: “This unlawful collusion not only damaged our marketplace, but also distorted the marketplace of ideas by discriminating against speech and ideas that fell below the unlawfully agreed-upon floor.”

Paxton called the scheme “an egregious attempt to control public opinion and silence those who speak out against the liberal elites and powerful corporations.” He added: “I will continue to lead the fight against viewpoint suppression and protect the speech of Americans from corrupt manipulation.”

Under the proposed settlement, filed in US District Court for the Northern District of Texas, all three agencies must stop using exclusion lists and coordinated agreements to restrict ad spending based on political viewpoints or social commentary. They cannot enter into or enforce agreements that restrict business with media publishers based on political or social commentary content, and they cannot direct or limit ad spending based on political viewpoints, ideological viewpoints, or DEI commitments. A court-appointed monitor will oversee compliance. The settlements require court approval to take effect.

Tuta Announces Quantum-Resistant Encrypted Cloud Storage, Tuta Drive
reclaimthenet.org

Privacy company Tuta is launching an encrypted cloud storage service, and it comes with something most competitors can’t offer: encryption that’s designed to survive quantum computers. Tuta Drive enters early access today as an invite-only beta, built on the same hybrid cryptographic protocol the German company deployed in Tuta Mail back in early 2024. That protocol, TutaCrypt, pairs conventional algorithms with quantum-resistant ones, which means files uploaded to Tuta Drive are encrypted with math that current computers can’t break and future quantum machines shouldn’t be able to either.

Every file gets encrypted on your device before it leaves. Tuta’s servers never see the unencrypted version. In a zero-knowledge architecture like this, even a government subpoena can’t produce readable files, because the company genuinely doesn’t have the keys.

This is a product that has been under development for nearly three years. Tuta started the PQDrive research project in July 2023, working alongside the University of Wuppertal to build post-quantum encryption into a cloud storage system from the ground up. By early 2024, the cryptography was proven enough for email, making Tuta Mail the first provider worldwide to ship quantum-safe encryption by default. Now that same protocol extends to file storage.

“With Tuta Drive, we are taking the next step towards offering a full private digital workspace,” said Arne Möhle, CEO of Tuta. “Today, more than ten million citizens and businesses, including journalists, whistleblowers and activists use Tuta Mail as an alternative to insecure email offered by mainstream providers. Adding an encrypted cloud storage to Tuta will enable them to also store their files securely. This invite-only beta release accumulates all our efforts of the last years. In July 2023, we started an extensive research project with the goal to update the Tuta cryptography to a hybrid protocol with traditional and quantum-resistant algorithms. We achieved this in beginning of 2024, making Tuta Mail the first quantum-safe email provider worldwide. And today we are proud to announce that we are ready to add a Drive solution to Tuta that makes use of the same cryptography.”

Intelligence agencies and sophisticated attackers are already harvesting encrypted data in bulk, banking on the assumption that quantum computers will eventually crack today’s encryption. It’s called “harvest now, decrypt later,” and it transforms every file you store in a conventional cloud service into a future liability. Your medical records, legal documents, financial statements, business plans, anything uploaded to Google Drive or Dropbox today sits behind encryption that a sufficiently powerful quantum computer could shred. The files don’t need to be interesting right now. They just need to still be sensitive in ten or fifteen years, which most of them will be.

Google, Microsoft, and Dropbox don’t offer end-to-end encryption on their cloud storage by default. They encrypt files in transit and at rest, sure, but they hold the keys. That means they can read your files, law enforcement can compel them to hand files over in readable form, and a breach of their systems exposes actual content. The privacy promise amounts to trusting that they won’t look and that nobody else will successfully break in. It’s a bet that gets worse every year as quantum computing advances accelerate.
Tuta Drive’s hybrid encryption sidesteps this entirely. The protocol combines CRYSTALS-Kyber (a NIST-standardized post-quantum key encapsulation mechanism) with elliptic curve cryptography, layered over AES-256 symmetric encryption. If someone breaks the quantum-safe algorithm, the conventional encryption still holds. If someone breaks the conventional encryption, the quantum-safe layer still holds. An attacker would need to defeat both simultaneously, which is the whole point of a hybrid approach. (A minimal sketch of this layering appears at the end of this piece.)

The beta is bare-bones for now. It works through the web interface on desktop and mobile, with native apps and a sync client coming later. Users can upload and store files, with sharing features planned. That’s not a lot of polish, but the encryption underneath is the part that actually matters, and Tuta has been hardening it for years across email, calendar, and contact data before extending it to file storage.

Tuta is based in Germany, which means European data protection law applies. More meaningfully, the zero-knowledge architecture makes the jurisdiction question less important than it would be for a service that can actually read your data. When a provider holds no usable decryption keys, the legal framework governing data requests becomes somewhat academic. You don’t have to trust Tuta’s promises about privacy. You have to trust the math, which is open source and available for anyone to audit on GitHub.

During the closed Tuta Drive beta, participants can test core functionality and submit feedback to shape what the final product looks like. Given how long the privacy community has waited for quantum-resistant cloud storage from a provider that isn’t headquartered in a Five Eyes country, the beta can’t come soon enough.
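To make the hybrid layering concrete, here is a minimal sketch of the general pattern, under stated assumptions. It is not TutaCrypt itself: the kyber_encapsulate() function is a placeholder for a real ML-KEM/CRYSTALS-Kyber implementation from a vetted post-quantum library, while the X25519, HKDF, and AES-GCM pieces use the pyca/cryptography package.

```python
# A minimal hybrid-encryption sketch, NOT Tuta's actual protocol.
import os

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.hazmat.primitives.kdf.hkdf import HKDF


def kyber_encapsulate(recipient_pq_public_key: bytes) -> tuple[bytes, bytes]:
    """Hypothetical stand-in for ML-KEM encapsulation, returning
    (ciphertext, shared_secret). A real implementation would come from
    a vetted post-quantum library, not this placeholder."""
    return b"pq-ciphertext-placeholder", os.urandom(32)


def encrypt_file(data: bytes, recipient_x25519_pub, recipient_pq_pub: bytes) -> dict:
    # Classical half: ephemeral X25519 key agreement.
    eph = X25519PrivateKey.generate()
    ec_secret = eph.exchange(recipient_x25519_pub)

    # Post-quantum half: Kyber-style encapsulation (stubbed above).
    pq_ciphertext, pq_secret = kyber_encapsulate(recipient_pq_pub)

    # Derive one file key from BOTH secrets: an attacker must break
    # both halves simultaneously to recover it.
    file_key = HKDF(
        algorithm=hashes.SHA256(), length=32, salt=None,
        info=b"hybrid-file-encryption-sketch",
    ).derive(ec_secret + pq_secret)

    # Client-side AES-256-GCM: the server only ever sees ciphertext.
    nonce = os.urandom(12)
    ciphertext = AESGCM(file_key).encrypt(nonce, data, None)
    return {"eph_pub": eph.public_key(), "pq_ct": pq_ciphertext,
            "nonce": nonce, "ciphertext": ciphertext}


# Usage: the recipient's public keys are all the client needs.
recipient = X25519PrivateKey.generate()
blob = encrypt_file(b"medical records", recipient.public_key(), b"pq-public-key")
```

The design choice worth noticing is the key-derivation step: because the file key is derived from the concatenation of both shared secrets, breaking the elliptic curve exchange alone, or the post-quantum encapsulation alone, yields nothing.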

UK Southport Inquiry Pushes Mass Surveillance and VPN Restrictions
reclaimthenet.org

On July 29, 2024, a teenager walked into a children’s Taylor Swift-themed dance class in Southport, England, and murdered three young girls with a knife. He injured ten others. It was, by any measure, one of the most horrifying attacks on British soil in recent memory, and what followed should have been a reckoning with the catastrophic state failures that let it happen. Instead, the British government looked at the smoldering aftermath and decided the real enemy was the internet, and the solution just so happens to be the mass surveillance and censorship proposals the government is already working on.

After the attack, outrage on social media turned to protests. Protests became riots. And the state’s response landed with a speed and ferocity that it had never managed to direct at, say, the agencies that let a known danger walk free for years. A former childminder named Lucy Connolly was jailed for 31 months for a single post on X. That is three months longer than the sentence given to a man who physically attacked a mosque during the same period of unrest.

The UK was already a country where arrests for “offensive” social media posts had nearly doubled in seven years, climbing from 5,502 in 2017 to 12,183 in 2023. The overall conviction rate for those arrests was falling over the same period: police were locking people up for what they typed at a rising rate, while the number of convictions that actually stuck was going down.

The Southport riots became the accelerant. A House of Commons Home Affairs Committee report used the unrest to call for a “new national system for policing” with enhanced capabilities to surveil social media activity, framing public anger as a problem of online “misinformation” rather than a consequence of the state’s own failures. The state was dodging accountability by demanding censorship and surveillance and blaming the internet for unrest.

And now, months later, Sir Adrian Fulford’s Southport Inquiry Phase 1 report has arrived, and it takes the whole dynamic further still. Not just further toward punishing people for what they say online, but toward watching everything they do online, and everything they buy offline, too.

The report itself is 763 pages across two volumes, published on 13 April, with 67 recommendations. Its central finding is devastating. The attack “could have been and should have been prevented.” Multiple state agencies failed repeatedly to act on years of warning signs. The attacker’s parents bore “considerable blame” for not reporting Axel Rudakubana’s worsening behavior. Sir Adrian identified five areas of systematic failure, including critical breakdowns in information sharing and a repeated tendency to excuse the attacker’s behavior on the basis of his autism spectrum disorder.

The factual record of those failures is staggering. The attacker was referred to the Prevent counter-terrorism program three times between 2019 and 2024, with each referral closed without sustained action. He purchased weapons, including three machetes, as well as ingredients to make the poison ricin. Police responded to five calls at the family home. And in March 2022, when the attacker was found on a bus with a knife, admitting he wanted to stab someone and thinking about poison, he was simply returned home with advice to hide the knives.
The report said that had this incident been judged in light of the attacker’s past risk, he would have been arrested, and his possession of an al-Qaeda manual and ricin seeds would have come to light.

You might think the resulting 67 recommendations would focus on making sure the people who are paid to protect children actually protect them. Some of them do. But a significant chunk has nothing to do with fixing the human laziness that ultimately killed three girls, and everything to do with building an internet surveillance apparatus that would make the average dystopian novelist blush.

Recommendation 12 asks the government to “consider systems to detect and report concerning online behaviour and suspicious combinations of purchases.” It lists VPN use alongside name changes as behavioral red flags worth automated detection. The same recommendation wants reporting systems for “concerning purchases of dangerous but legal items (e.g., sledgehammers, bow and arrows and smoke grenades)” and “concerning combinations of purchases (e.g. castor beans, alcohol, and laboratory equipment).” Anyone who has ever renovated a kitchen, taken up archery as a hobby, or ordered laboratory glassware because they fancied making gin is now, apparently, a person of interest.

Recommendation 24 goes after VPNs directly, asking Phase 2 to “consider age verification for the use of Virtual Private Network (VPN) software and other options to avoid VPNs being used to circumvent the age-related protections in the Online Safety Act 2023.” Recommendation 20 calls for “mandatory reporting and information-sharing about suspicious behaviour” around knife sales, alongside “strengthening online age-verification and age verified delivery standards” and “prohibiting some online sales.”

Recommendation 19 tells Amazon to “improve its measures to prevent children from making purchases,” to “review its systems for recording details of the recipient to ensure that an accurate record of the recipient is obtained,” and to “audit its training of age verified deliveries for drivers, in particular for Amazon Flex drivers.” Amazon is being told to collect more data about everyone who receives a parcel. The company already uses “trusted ID verification services to check name, date of birth and address details whenever an order is placed for these bladed items” and has “an age verification on delivery process that requires drivers to verify the recipient’s age through an app on their devices.”

Recommendation 22 tells Lancashire County Council to ensure frontline staff “have access to effective tools and guidance to identify and respond to” online risks, specifically naming “the risks associated with the use of Virtual Private Networks, which can enable children to bypass the safeguards established under the Online Safety Act 2023.” It asks the Department of Health and Social Care to consider whether “reforms to national guidance, policy or training are required.” Social workers are now expected to treat VPN use as a safeguarding red flag. The same tool, you will recall, that Parliament itself told its own members to install on their phones. To see what the automated combination-flagging of Recommendation 12 would do in practice, consider the toy sketch below.
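This is a hypothetical toy, not anything specified in the report beyond its own example combination; the rule format and function are invented for illustration. The point is that the false positive is structural, not a tuning problem: the rule matches the combination, and innocent combinations are common.

```python
# Hypothetical sketch of a "suspicious combinations of purchases"
# detector of the kind Recommendation 12 contemplates. Only the example
# combination comes from the report; everything else is invented.
SUSPICIOUS_COMBOS = [
    {"castor beans", "alcohol", "laboratory equipment"},  # the report's own example
]


def flags(purchase_history: set[str]) -> list[set[str]]:
    """Return every suspicious combination fully contained in a
    customer's purchase history."""
    return [combo for combo in SUSPICIOUS_COMBOS if combo <= purchase_history]


# A would-be poisoner and a gardener who fancies making gin (castor
# beans are a common ornamental seed; gin needs alcohol and glassware)
# produce exactly the same signal:
plotter = {"castor beans", "alcohol", "laboratory equipment", "knife"}
gardener = {"castor beans", "alcohol", "laboratory equipment", "juniper berries"}
print(flags(plotter))   # flagged
print(flags(gardener))  # flagged too -- indistinguishable to the rule
```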
Here is where the whole thing becomes genuinely absurd. VPN use in Britain exploded because the government’s own Online Safety Act censorship law forced it. When age verification rules took effect in July 2025, Proton VPN reported a sustained 1,800 percent increase in UK sign-ups. Five VPN apps hit Apple’s UK App Store top 10 within days. Millions of ordinary people downloaded privacy tools to avoid handing their biometric data to random websites as the government’s own rules demanded. And the government’s response to this entirely predictable mass adoption of privacy software is to propose restricting privacy software.

The House of Lords voted in January to ban VPN use by under-18s, backing an amendment to the Children’s Wellbeing and Schools Bill by 207 votes to 159. Labour’s Lord Knight acknowledged that VPNs could “undermine the child safety gains of the Online Safety Act” but warned that age-gating them could be “extremely problematic.” He noted: “My phone uses a VPN, following a personal device cyber consultation offered by this Parliament. VPNs can make us more secure, and we should not rush to deprive children of that safety.” For now, MPs haven’t gone along with it, but the rejected amendment is only one implementation of such ideas.

So Parliament tells its own members to use VPNs. Parliament then votes to ban children from using VPNs, which would require age checks and giving up privacy. And a public inquiry now wants social workers to flag VPN use as a risk indicator. Age verification amounts to requiring adults to give up their personal or biometric data to access lawful content.

This is the throughline that connects Southport to the wider censorship machine. The government passes laws requiring identity verification to access legal content. People use privacy tools to avoid handing their identity to strangers. The government then classifies those privacy tools as suspicious. At each step, the scope of surveillance expands and the definition of “concerning behavior” gets broader, and at no point does anyone go back and fix the actual agencies that let a teenager with an al-Qaeda manual, ricin seeds, three machetes, and multiple Prevent referrals walk free for years.

The rest of the surveillance proposals are not aimed at known threats. They are aimed at the whole population. They propose systems to track what you browse, what you buy, and whether you dare to use a VPN, then flag combinations that some algorithm decides look suspicious.

The Southport Inquiry confirms what the arrest statistics, the sentencing disparities, and the legislative agenda already made obvious. Britain has developed a very specific institutional reflex. When its agencies fail catastrophically, the state responds by expanding surveillance of the general population. When the public expresses anger about those failures, the state responds by censoring the expression of that anger. The definition of “offensive” keeps expanding. And the people who actually had the information needed to prevent a massacre keep their jobs.

What failed at Southport was not a lack of data. It was not the absence of purchase-tracking algorithms. It was not that VPNs exist. What failed was human beings in positions of authority who saw danger, documented it, filed the paperwork confirming they’d seen it, and then closed the case and went home. Building a national internet surveillance system won’t change that. Age-gating the privacy tools that Parliament recommends to its own members won’t change that. Nothing in this report’s surveillance wishlist addresses the reason three girls are dead, which is that the system already knew, and the system chose to do nothing.