Reclaim The Net Feed

@reclaimthenetfeed

UK Government’s Digital ID System Could Grant Police Access to Facial Recognition Database
reclaimthenet.org

If you're tired of censorship and dystopian threats against civil liberties, subscribe to Reclaim The Net.

The British government is promising a smoother, more modern state. Paperwork trimmed, services faster, identity checks handled with a few taps instead of folders stuffed with documents. It is a tidy vision of digital convenience, presented as practical and overdue. Yet tucked inside the policy details is a provision that shifts the tone considerably.

The proposed digital ID system could, under future legislation, allow police to access facial recognition data drawn from millions of identity photographs submitted by the public. The government has acknowledged that the new digital ID framework will be subject to “any new legal framework introduced” following a separate consultation on law enforcement use of facial recognition technology. That consultation, which closed in February 2026, considered authorizing police to run facial recognition searches against government databases. Taken together, the policies suggest that a system introduced for administrative ease could eventually become part of the country’s policing infrastructure.

Cabinet Office minister Darren Jones told reporters that “none of that is true” when asked whether police could access digital ID photographs for facial recognition searches. The consultation document, which his own government published the day before, says otherwise. The text explicitly acknowledges that the digital ID system will be subject to “any new legal framework introduced” following the government’s facial recognition consultation, which proposed authorizing police use of facial recognition against government records and databases. Jones didn’t clarify what part of that he considers untrue. He didn’t address the specific clause. He offered a flat denial while the evidence sat in a document bearing his government’s name, published 24 hours earlier.

Either Jones hadn’t read the consultation he was sent to defend, or he had read it and decided denial was the better strategy. Neither possibility is reassuring. To be fair to Jones, he is new to the role of overseeing the digital ID project, taking over after his predecessor, Josh Simons, resigned amid accusations that he ran a campaign to silence critical journalists.

When the scheme was unveiled, ministers emphasized efficiency and accessibility. The digitization of services, they argued, would reduce costs and make systems easier to use. Questions quickly followed about whether the photo database might become a biometric search tool for law enforcement. One senior official responded: “The digital ID system that we’re building is not a mandatory ID that you need to have available to show to the police or anybody else.” The statement addresses one fear, that citizens might be required to present identification on demand (yet). It does not answer another concern: whether images submitted voluntarily could later be analyzed by facial recognition systems without any involvement from the individual.

Technical architecture alone cannot determine how information is used. Legal frameworks ultimately shape who can access data and for what purpose. If future legislation permits law enforcement access to biometric information, system design may offer limited protection.

Another dimension of the debate involves how optional the system will feel in daily life. The government intends to make digital right-to-work checks mandatory before the end of the current Parliament. While officials stepped back from requiring a single government app, some form of state-issued digital identification will still be needed, whether through the new system, an e-visa, or an e-passport. This narrows the scope for opting out. When employment verification depends on digital credentials, participation becomes closely tied to ordinary economic life.
Public reaction has been mixed, particularly online, where official announcements have drawn substantial criticism. Media reports also highlighted an awkward moment during a live demonstration of the system’s beta app, when technical difficulties interrupted the presentation. Though minor, the episode fed broader doubts about readiness and execution. The whole thing is being rushed, and people are asking why.

The post UK Government’s Digital ID System Could Grant Police Access to Facial Recognition Database appeared first on Reclaim The Net.

UK Lords Back Facial Recognition Overreach, Protest Crackdown Powers
reclaimthenet.org

The UK House of Lords spent March 9 dismantling what little legal cover existed for anonymous protest and privacy, and building new tools to suppress both entirely.

Start with what they refused to protect. Peers voted down an amendment that would have kept the DVLA database (the UK equivalent of the DMV in the US) out of live facial recognition searches. That database isn’t a surveillance archive. It was built to verify driving licenses. It contains photographs linked to the confirmed real-world identities of most UK drivers, and the Lords just cleared the path for police to run it against faces captured in real time at public gatherings. A licensing bureaucracy would become an identification engine. The repurposing happened quietly, through a vote most people won’t read about.

The Lords also voted down a proposed “defence of reasonable excuse” for concealing identity at protests. The amendment would have shifted the burden of proof onto police officers to justify why a face covering made someone arrestable. It failed 172 to 88. That means wearing a mask at a protest carries no legal defense, even if your reason is documented, principled, and directly tied to avoiding government surveillance.

Then, on the same day, peers approved new Home Secretary powers to designate organizations as “Extreme Criminal Protest Groups,” passing 200 to 162. The designation criminalizes membership, promotion, fundraising, and providing any form of support to a designated group. No court makes that call. The Home Secretary does.

Read the three votes together, and the shape becomes clear. The Lords rejected a right to shield your face from surveillance, rejected a legal defense for trying, and handed ministers a new tool to criminalize the groups most likely to show up at protests in the first place.
Each vote was taken separately, but the combined effect is a surveillance and suppression framework that will outlast every minister who voted for it.

UK Parliament Plans ISP Blocking and Age Verification Powers
reclaimthenet.org

If you wanted a case study in how modern democracies widen state oversight step by step, Britain has offered a clear example. On March 9, two major surveillance-related bills advanced through Parliament, each pointing toward broader government authority, reduced personal privacy, and tighter limits on protest activity. These measures advanced through procedural votes and technical amendments that sounded administrative, yet they carry consequences for how millions of people use the internet and exercise civic rights.

The main legislative action unfolded in the House of Commons during debate on the Children’s Wellbeing and Schools Bill. Members of Parliament actually rejected amendments from the House of Lords that would have required age verification for VPNs and certain user-to-user services. But don’t get too excited. The replacement amendments approved by MPs would grant significant new authority to the state. The powers allow the government to require internet service providers to block or restrict children’s access to specific online platforms, impose time-of-day limits on when services can be used, and mandate age verification across nearly any platform that enables users to post or share content. Because the legal definition of user-to-user services includes social media platforms, messaging applications, online forums, and gaming networks, the scope of these rules extends across much of the modern internet. This is as bad as it gets.

The practical challenges are considerable, and the privacy issues are even worse. Internet service providers supply connections to households rather than individuals. Enforcing child-specific restrictions would require identifying which devices belong to minors through ID verification and applying controls selectively, a level of precision that home broadband systems were never designed to provide.
Enforcement may therefore produce household-wide restrictions or increased pressure on platforms to verify the age of all users. The amendments now return to the House of Lords. Approval there would send the bill to Royal Assent.

BC Tribunal Clears Student of “Hate Speech” Charge Over COVID-19 Video
reclaimthenet.org

A British Columbia tribunal has ruled that sharing a COVID-19 video in a student chat group is not hate speech, and that the student who posted it did not discriminate against anyone. Adjudicator Ijeamaka Anika described the video as “offensive” to some viewers. But the point is that offensive is not the same as illegal, and a government tribunal just confirmed that the distinction holds.

The 21-minute video, posted in 2020 in a University of Northern British Columbia student union chat, argued the pandemic was engineered to impose “technocratic and totalitarian government worldwide.” It referenced vaccines, Bill Gates, the World Economic Forum, and Western leaders. It was standard pandemic-era content, the kind of thing millions of people were sharing in private group chats that year, and still share to this day.

The complaint, as reported by Blacklock’s, came from the Chinese-Canadian executive director of the Northern B.C. Graduate Students’ Society, who argued that the video targeted Chinese people. The Tribunal looked at what the video actually contained. China and Maoism appeared twice. “Specifically, it references Maoism in the context of discussing global governance models,” Anika wrote. “However, the vast majority of the video focuses on other actors and theories.” No Chinese individuals or leaders were named as perpetrators. The complaint did not survive contact with the facts.

So-called “hate speech” under the Human Rights Code requires that content “expose or tend to expose any person or class of persons to detestation and vilification” in the view of a reasonable person. That is a real legal threshold, not a vague sense that something was upsetting or poorly reasoned. The Tribunal also found that Code violations require calls for specific discriminatory effects, not general commentary about global governance or pandemic origins.
This is important beyond one student’s cleared name. The complaint was filed over a video shared in a small private student chat, not a broadcast or public campaign. The idea that human rights law reaches into private online conversations to adjudicate the political content of what members share with each other is worth examining.

The federal government had issued guidelines during the pandemic urging Canadians not to blame any ethnic group for COVID-19. Those guidelines were public health messaging. They were not law, and they did not transform every pandemic video into an act of discrimination. The student who posted the video into a small group chat was not the executive director’s problem to prosecute.

The chilling effect of the opposite ruling would have been real. If forwarding a “conspiracy” video in a student chat exposes you to a human rights complaint and an adjudication process, people stop sharing. They stop talking. They second-guess every forward, every link, every political video with the wrong kind of content. That self-censorship happens quietly, before any order is ever issued, and it spreads well beyond the original case. The Tribunal got this right.

New Zealand Parliamentary Committee Recommends Social Media Ban for Under-16s
reclaimthenet.org

A New Zealand parliamentary committee has concluded that social media platforms should be off-limits to anyone under 16, recommending a system of age assurance that would reshape how young New Zealanders access the internet. The recommendation is one of 12 in a 46-page report from the Education and Workforce Committee, which examined online “harms” ranging from algorithmic manipulation to deepfake pornography. The committee’s bottom line: “harm to young New Zealanders from online platforms is severe and requires urgent responses from Government, business, and society alike.” What the report doesn’t settle is how age checks would actually work, and how the infrastructure needed to run them would create new problems while solving old ones.

The full list of recommendations covers a lot of ground. The committee wants a new independent online safety regulator, stronger liability for platforms that host “harmful” content, and mandatory algorithmic transparency. It wants bans on so-called “nudification” apps and on the creation or distribution of non-consensual deepfake pornography. Alcohol, tobacco, and gambling advertising online should face tighter restrictions. Education campaigns targeting parents and young people are also on the list. The government’s response is due by June 3.

Not everyone on the committee agreed. ACT New Zealand, a junior coalition partner, opposed several of the core recommendations: the new regulator, the deepfake bans, regulation of algorithmic recommendation systems, and the push for algorithmic transparency. Both ACT and the Green Party broke from the majority on the age restriction specifically. ACT warned against responses “requiring the likes of digital ID for age verification,” framing the choice as identity documents or nothing. The Green Party’s objection ran along similar lines.
The committee’s own report acknowledges that age assurance doesn’t have to mean identity verification. Biometric facial age estimation, which estimates a person’s age from a selfie without storing or linking identity data, is referenced as an alternative. But age verification tools that were supposed to delete data after processing have, in practice, stored that data for longer than they declared, and data can be intercepted and stolen before deletion. New Zealand’s Privacy Commissioner has already flagged skepticism about Australia’s age assurance approach, which the committee held up as a model alongside efforts in the EU and UK. Aligning with those frameworks makes political sense. It doesn’t automatically make them proportionate.

The report names several unresolved problems. VPNs can circumvent local restrictions, and the committee punted that issue back to the government for further consideration. Defining which platforms count as social media, and distinguishing them from “appropriately moderated forums,” remains a genuine challenge without a clear answer. Age assurance becomes identity surveillance by another name.