Reclaim The Net Feed
@reclaimthenetfeed

Ofcom and the Fantasy of Global Speech Control
reclaimthenet.org

Ofcom appears to believe that a website is a kind of television channel. This would explain a lot about what happened on Wednesday, when Britain’s speech regulator fined an American mental health and suicide discussion forum £950,000 ($1.3 million) for hosting speech that is legal in America, on servers in America, operated by Americans. The site had already blocked British visitors, voluntarily, as a gesture of goodwill, despite having no legal obligation to do so and despite Ofcom having no jurisdiction to demand it. Ofcom fined it anyway.

The fine is unenforceable. The site owes Ofcom nothing under American law. And even if the site had never blocked a single British visitor, Ofcom’s case would still make no sense, because a British regulator cannot fine an American citizen for legal American speech on an American server any more than the French postal service can fine you for what you write in your own diary.

Ofcom is the Office of Communications, the British government’s speech regulator. Americans don’t really have an equivalent because most Americans would never stand for one. The closest thing is the FCC, except imagine the FCC could also decide what you’re allowed to say on the internet and fine you if it disapproves. Under the notorious Online Safety Act, passed in 2023, Ofcom gained the power to decide what speech is permissible online and to fine platforms that host speech the UK government doesn’t like. That includes speech that is perfectly legal everywhere else on earth. It is, when you think about it for more than four seconds, absolutely mad.

Ofcom launched on December 29, 2003, stitched together from five separate regulators: the Broadcasting Standards Commission, the Independent Television Commission, the Office of Telecommunications, the Radio Authority, and the Radiocommunications Agency. All five dealt with broadcasting, telecoms, or spectrum. They regulated transmitters, phone lines, and radio frequencies, all of which used publicly owned spectrum and publicly funded infrastructure to push content into British living rooms. The airwaves belonged to the public. The transmitters were built with public money. If you were using national resources to broadcast to a national audience, it made sense that a national regulator got to set some terms. None of these five organizations was designed to have opinions about what a foreigner writes on a computer in Virginia.

The confusion starts with Ofcom not understanding what a website actually is. A website does not push anything. Content sits on a server. A visitor actively goes to it and requests it. The data crosses borders only because someone on the other end typed in the URL. Website users are called “visitors” and not “viewers” for exactly this reason. They go to the site. The site does not come to them. This is not a complicated distinction. A reasonably bright nine-year-old could grasp it over breakfast. Ofcom, apparently, cannot. The regulator is treating a website in Virginia as though it were a transmitter on a hill in Surrey, claiming jurisdiction over the server rather than the person visiting it. It’s like fining an American for not stopping British citizens from mailing letters to them.
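For anyone who wants the mechanics made concrete, here is a minimal sketch, in Python and using only the standard library, of the pull model described above. The URL is a placeholder standing in for any foreign website; the point is that the connection is opened by the visitor, and the server merely answers.

```python
# A minimal sketch of the pull model: the visitor initiates everything.
# "https://example.com/" stands in for any foreign website.
import urllib.request

url = "https://example.com/"
with urllib.request.urlopen(url) as response:  # the CLIENT opens the connection
    page = response.read()                     # the server only answers what was asked

# The data crossed the network because this program requested it,
# not because the server pushed it anywhere.
print(f"Fetched {len(page)} bytes, at the visitor's request.")
```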
Preston Byrne, the American attorney representing the forum, identified it as SaSu and called Ofcom’s claim that the site remains accessible to UK users without a VPN “untrue.” “In the investigative documents I have seen, neither Ofcom nor its NGO partners were able to access the site without first using a VPN,” Byrne posted on X after the fine was confirmed.

The fine is the largest issued under the Online Safety Act. It is also, by any practical measure, just a piece of paper with a large number written on it. Ofcom might as well have fined the moon. Ofcom sent all 197 of its Section 100 enforcement notices to American companies by email, ignoring official channels and bypassing the UK-US Mutual Legal Assistance Treaty entirely. Byrne says the orders “carry no force in the United States.” Collecting the fine would require cooperation from American courts, where the First Amendment would apply. Wikipedia co-founder Jimmy Wales called the UK’s position “ludicrous and untenable.”

Ofcom doubled down on its jurisdictional claim on X, posting that “the fact that the provider is based outside the UK does not mean the forum is outside the scope of the Online Safety Act.” That is Ofcom’s position. By the same logic, any country on earth could fine any website on earth, since someone within its borders might type in the address. Saudi Arabia could fine a British publication focused on same-sex relationships. China could fine the BBC for its Tiananmen coverage. Russia could fine any Western outlet that calls the invasion of Ukraine an invasion. The UK cannot claim this power for itself and then act surprised when such states use identical reasoning against British publishers. They don’t even need to develop new legal theories. Ofcom has written one for them, free of charge, and posted it on social media.

The timeline of the SaSu case is where Ofcom’s position goes from questionable to genuinely absurd. What happened, according to Byrne, is worse than it first appears. SaSu voluntarily geoblocked the UK in July 2025. Ofcom publicly accepted the geoblock in October 2025. Then, weeks later, following pressure from the Molly Rose Foundation and allied NGOs, Ofcom reversed course and opened an investigation in November 2025. The facts hadn’t changed, but the pressure had.

In December 2025, according to Byrne, one of Ofcom’s NGO allies, the Mental Health Foundation, used a VPN to bypass the geoblock that Ofcom had approved two months earlier, and created account credentials on the site. This is stated explicitly in paragraph 5.26(c) of Ofcom’s own Provisional Decision. A UK user had never, at any point since the geoblock went live in July, been able to see the registration page without first deliberately circumventing the block.

Ofcom communicated its Provisional Decision to SaSu on February 27, 2026. SaSu’s legal team, which Byrne describes as tiny, digested the 150-odd-page document in full over 72 hours. On March 1, a Sunday night, SaSu modified an anti-spam feature to ensure that even VPN users could not create new UK accounts. On March 2, SaSu wrote to Ofcom explaining what it had done and offering to make further changes. Ofcom did not respond.

Then, in May 2026, according to Byrne, Ofcom used the Mental Health Foundation’s credentials, the same account that had been created by bypassing the geoblock in December, to log into the site. It used the output of the Mental Health Foundation’s circumvention to get around the additional security measures SaSu had put in place in March, the ones SaSu had told Ofcom about in writing, the ones Ofcom never responded to. Ofcom concealed the identity of the account it used, giving SaSu no opportunity to identify the problem and fix it.

Byrne’s letter to Ofcom, which was, under the circumstances, admirably restrained, laid out the sequence: “The evidence supporting the Provisional Decision was gathered not by observing what UK users can access, but by circumventing the very measure Ofcom asked our client to implement and which our client implemented under Ofcom’s supervision.”

If Ofcom’s definition of accessible means reachable by someone willing to use circumvention tools, then no website on earth can ever be compliant. Every site is accessible from every country if you use a VPN. That is how VPNs work. The entire point of a VPN is to make you appear to be somewhere you are not. Declaring that a geoblocked site is still accessible because someone used a VPN is like declaring that a locked house is still open because someone could theoretically pick the lock.
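To make concrete why “reachable over a VPN” is an empty standard, here is a minimal sketch of how a typical IP-based geoblock works. The addresses and lookup table are invented for illustration (documentation-only IP ranges); a real deployment would consult a GeoIP database, but the logic, and the reason a VPN defeats it by design, is the same.

```python
# A toy IP-based geoblock. A real site would query a GeoIP database;
# the table below is invented, using documentation-only IP ranges.
BLOCKED_COUNTRIES = {"GB"}

FAKE_GEOIP = {
    "203.0.113.7": "GB",   # a UK visitor's real address
    "198.51.100.9": "US",  # the same visitor, exiting through a US VPN node
}

def country_of(ip: str) -> str:
    return FAKE_GEOIP.get(ip, "??")

def allow_request(client_ip: str) -> bool:
    # The server can only see the connecting address. A VPN swaps the
    # visitor's address for the exit node's, so the check never fires.
    return country_of(client_ip) not in BLOCKED_COUNTRIES

print(allow_request("203.0.113.7"))   # False: the direct UK connection is refused
print(allow_request("198.51.100.9"))  # True: the same person via a US VPN gets in
```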
The selective enforcement makes the whole thing worse. Ofcom closed investigations into five other sites after they geoblocked UK users, describing those cases as no longer an “administrative priority.” SaSu geoblocked the UK months before any of those five and implemented additional measures beyond what any of them did. Same action, different outcome. The only visible difference is that campaign groups wanted SaSu punished specifically and nobody was lobbying about the others.

Byrne’s assessment is blunt. He calls SaSu the site “the Online Safety Act was designed to destroy.” His conclusion applies to every American company Ofcom has tried to contact by email: “For SaSu, compliance had the same consequences as refusal.” If compliance and defiance produce identical outcomes, no rational actor will ever comply again. Every future American target now knows that cooperating with Ofcom gets you the same fine as ignoring it, which makes ignoring it the obvious choice. That is the enforcement model Ofcom just built, presumably without thinking it through, which is becoming something of a pattern.

The subject matter here is genuinely difficult. Suicide is serious, and reasonable people disagree about what kind of speech around it should be permitted to exist. That question deserves an honest debate. But it is a separate question from whether a British regulator has the legal authority or practical power to resolve that debate by fining foreigners for speech that is legal where it was published. You can believe the content is harmful and still recognize that Ofcom has no jurisdiction over it. Those two thoughts can exist in the same head at the same time.

What the UK government actually wants is to prevent its own citizens from accessing certain foreign websites. The mechanism for doing that exists: Parliament can order British ISPs to block sites at the network level. That power falls within UK jurisdiction and it would work. Ofcom said Wednesday it is “preparing an application to the court for business disruption measures” if SaSu does not comply within 10 working days, which means ISP-level blocking. That is the honest version of what they’re doing.

But admitting it would mean acknowledging what the policy actually is: state-level internet censorship of the kind the UK has spent years criticizing China, Russia, Iran, and North Korea for practicing. The Online Safety Act lets the government pretend it is regulating foreign companies rather than censoring its own citizens’ access to the internet. The fiction requires Ofcom to maintain that it has authority over American websites. It doesn’t, and the SaSu case has just demonstrated that to everyone with a functioning internet connection and the ability to read.

The mission creep tells the longer story. Ofcom and its predecessors started with broadcast television and radio, expanded into telecoms, then broadband, then video-on-demand, and now claim authority over the entire global internet. The expansion seemed incremental at the time, but the cumulative result is an organization that believes it regulates human speech worldwide, operating from an office in London with no enforceable power outside the United Kingdom and, based on recent evidence, no particular understanding of how the internet works.

The $1.3 million fine will never be collected. The geoblock is still up. The large platforms with legal teams will ignore Ofcom or fight it. The small forums and independent sites, the ones without corporate money, will see a near-million-dollar fine and consider whether existing is worth the hassle. That is who this enforcement model actually threatens, and it is the opposite of what a regulator should be doing.

Ofcom has no coherent theory for regulating the global internet and no practical power to do so. What it does have is the ability to generate headlines, which is what Wednesday’s announcement was: a press release dressed up as law enforcement, directed at a website that already did what the regulator asked, fined for the crime of cooperating with Ofcom’s absurdity.

London Police Deploy Facial Recognition at Protest for First Time
reclaimthenet.org

Tomorrow, the Metropolitan Police will turn biometric surveillance cameras on people attending a political demonstration in London. Live facial recognition will scan the faces of those heading to the “Unite the Kingdom, Unite the West” rally in the borough of Camden, marking the first time the technology has been authorized for use at a protest in the UK. The rally’s organizer, activist Tommy Robinson, says it is for “national unity, free speech and Christian values.” Drones will fly overhead, scanning for suspects from above.

More: “Nothing to Fear” Is Back: The UK High Court Clears Way for Police Facial Recognition

Deputy Assistant Commissioner James Harman said Live Facial Recognition (LFR) “will be deployed in the London borough of Camden in an area likely to be used by those attending the Unite the Kingdom event,” but a pro-Palestinian march marking Nakba Day, happening in London on the same day with an estimated 30,000 attendees, will not face the same biometric surveillance.

Biometric identification has jumped from high streets to political assembly and, once that barrier falls, the question is never whether it will be used more broadly. It’s when.

Reform UK leader Nigel Farage responded to the deployment. “The Unite the Kingdom rally on Saturday should be treated no differently to the pro-Palestinian march on the same day,” Farage said. “The fact that two-tier justice is being applied against patriotic Britons is disgraceful.”

The Met justified its decision by citing “intelligence which indicates that there is likely to be a threat to public safety from some who might be in attendance.” That turns an entire protest into a surveillance zone based on the expected behavior of an unspecified portion of attendees. Everyone walking through Camden tomorrow afternoon gets their face compared against a watchlist, whether they’re a suspected criminal or someone who just showed up with a flag.

This deployment at a protest doesn’t exist in isolation. Two days before announcing LFR at the rally, the Met published results from a six-month pilot in Croydon that signals where facial recognition in Britain is heading. For the first time, the Met mounted live facial recognition cameras on lampposts and existing street furniture rather than using dedicated police vans. Static cameras, monitored remotely, watched over Croydon’s high street from October 2025 to March 2026.

The move from van-based deployments to cameras bolted onto public infrastructure is a big deal. Vans are visible, temporary, and require a physical police presence. Lamppost cameras blend into the built environment and can be activated whenever officers decide they’re needed.

The Met’s numbers tell one story. The privacy cost tells another. Over six months, the system scanned more than 470,000 faces. It produced 173 arrests across 24 separate operations. The Met presented this as one arrest every 35 minutes and claimed a 10.5% drop in local crime, including a 21% reduction in violence against women and girls.

Lindsey Chiswick, the national and Met lead for live facial recognition, said, “These results show why live facial recognition is such a powerful tool when it’s used carefully, openly and in the right places.” She added, “We will continue using static cameras in Croydon as part of our regular live facial recognition deployments which play a vital part in keeping London safe.”

Run those numbers differently and they look less triumphant. Of the 470,000 people whose biometric data was captured and processed, 99.96% had nothing to do with any crime. Approximately 2,717 people had to have their faces scanned and compared against police watchlists for every single arrest. The Met subjected an entire community to rolling biometric surveillance to catch people it could not find through real policing, and it now plans to make that arrangement permanent.
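The base-rate arithmetic is worth doing explicitly, using only the figures the Met itself published:

```python
# The Met's own Croydon figures, restated.
faces_scanned = 470_000
arrests = 173

scans_per_arrest = faces_scanned / arrests
uninvolved_share = (faces_scanned - arrests) / faces_scanned

print(f"Scans per arrest: {scans_per_arrest:,.0f}")         # -> 2,717
print(f"Scanned but not arrested: {uninvolved_share:.2%}")  # -> 99.96%
```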
Parliament has never voted on live facial recognition. No legislation explicitly regulates its use. Police forces write their own policies governing when and how they deploy it, and the Met is now expanding from mobile vans to permanent cameras on public infrastructure with no democratic mandate for the change. The technology was introduced, tested, and normalized entirely outside parliamentary oversight.

Tomorrow’s deployment in Camden crosses another line. Facial recognition at a protest creates a biometric record of political participation, even if the data is supposedly deleted moments later. People who might attend a lawful demonstration now know their faces will be captured and compared against police databases. Some will stay home. That is surveillance shaping who shows up to exercise democratic rights, and the Met has decided it gets to choose which demonstrations trigger that effect.

Canada Says Critics Don’t Understand Its Surveillance Bill
reclaimthenet.org

Canada’s Public Safety Minister is telling Apple, Meta, and Signal that they don’t understand his own surveillance bill. They understand it fine. That’s the problem.

Bill C-22, the Lawful Access Act, would force telecoms, internet companies, and social media platforms to rebuild their systems so police and the Canadian Security Intelligence Service (CSIS) can access user data more easily during investigations. It would also require providers to stockpile metadata on every subscriber for up to a year, regardless of whether those people are suspected of anything. The bill has the backing of police chiefs across the country and of CSIS, who have long argued they are stymied by outdated legislation in a digital world.

The government describes this as organizing information “like a filing cabinet, where certain types of information would be available with legal authorization.” That filing cabinet contains a year’s worth of data showing where every Canadian goes, when they go there, and who they communicate with. On a mobile network, that metadata includes which cell towers each phone connects to and when. Retained at scale, it amounts to a comprehensive surveillance map of the population.
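To see why retained tower logs are a location history rather than a neutral “filing cabinet,” here is a minimal sketch; every record and tower name below is invented for illustration, and the bill would retain up to a year of this per subscriber.

```python
# One hypothetical subscriber's (timestamp, tower) records for a single day.
from collections import Counter

records = [
    ("2026-03-02T07:40", "tower_A"),  # near home
    ("2026-03-02T08:15", "tower_B"),  # commute interchange
    ("2026-03-02T09:02", "tower_C"),  # near workplace
    ("2026-03-02T18:30", "tower_B"),  # commute home
    ("2026-03-02T19:05", "tower_A"),  # home again
]

# No call or message content is needed: connection patterns alone
# reveal where someone sleeps, works, and travels.
visits = Counter(tower for _, tower in records)
print(visits.most_common())  # tower_A and tower_B dominate: home and commute
```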
Public Safety Minister Gary Anandasangaree said at a press conference Wednesday that tech companies are “using this as an opportunity to double down.” He added that “Tech giants are misinterpreting some of the safeguards that are already built in, including on ensuring that encryption is not in any way interrupted as part of Bill-22.”

The list of people who supposedly can’t read keeps growing. Apple warned that the legislation “could allow the Canadian government to force companies to break encryption by inserting back doors into their products — something Apple will never do.” The company added that “at a time of rising and pervasive threats from malicious actors seeking access to user information, Bill C-22, as drafted, would undermine our ability to offer the powerful privacy and security features users expect from Apple.”

Apple has already shown it will follow through on threats like these. It pulled its Advanced Data Protection feature from the United Kingdom rather than comply with a Technical Capability Notice ordering it to create access to encrypted iCloud data, and it is now litigating the order before the Investigatory Powers Tribunal. If Bill C-22 passes unchanged, Canadians could lose the same protections.

Signal, the encrypted messaging service, went further. Vice-president Udbhav Tiwari told the Globe and Mail that Signal “would rather pull out of the country than be compelled to compromise on the privacy promises we have made to our users.” Tiwari added in a statement that “end-to-end encryption is incompatible with exceptional access, no matter how creative the route taken to achieve it,” and called provisions that force vulnerabilities into communications systems “a grave threat to privacy everywhere.”

Meta’s head of Canadian public policy, Rachel Curran, told a Commons committee that the bill’s technical assistance obligations “could conscript private companies into service as an arm of the government’s surveillance apparatus.” She told MPs that “it is not possible to build back doors to encrypted systems for law enforcement without creating vulnerabilities that will be exploited by malicious actors,” and warned that “weakening encryption affects not only the target of an investigation but all Canadians who rely on secure communications for banking, accessing healthcare, running their businesses, or simply communicating with loved ones.”

In 2024, the Salt Typhoon hack exploited a system built by internet service providers specifically to give law enforcement access to user data. The very type of backdoor infrastructure Bill C-22 would mandate became an entry point for one of the most significant foreign intelligence breaches in recent memory. When you build surveillance doors into communications systems, you don’t get to choose who walks through them.

The bill does include language saying providers aren’t required to comply if doing so would introduce a “systemic vulnerability.” But the definition of that term is unclear, essential terms like “encryption” are left to be defined later through regulation, and ministerial orders can override those same regulations. The bill would also allow the federal government to secretly order companies to weaken encryption or create backdoors, and Meta’s Robyn Greene told the committee that if the government quietly mandates changes to a platform’s security architecture, Meta would be legally prohibited from telling its own users. Secret orders to weaken security, with a gag clause preventing disclosure. The government calls this “encryption-neutral.”

The Web Is Splitting Into Approved and Unapproved Humans
reclaimthenet.org

This post is for paid supporters of Reclaim The Net.

Days Away: The TAKE IT DOWN Act Creates a Censorship Mechanism With No Safeguards
reclaimthenet.org

The Federal Trade Commission sent letters to 17 major tech companies this week, warning them to comply with the Take It Down Act by May 19 or face fines of $53,088 per violation. Amazon, Alphabet, Apple, Meta, Microsoft, TikTok, X, Reddit, Discord, Snapchat, Pinterest, Bumble, Match Group, Automattic, and SmugMug all got the same message from Chairman Andrew Ferguson. We obtained a copy of the letter for you here.

“We stand ready to monitor compliance, investigate violations, and enforce the Take It Down Act,” Ferguson wrote. “Protecting the vulnerable, especially children, from this harmful abuse is a top priority for this agency and this administration.”

The law, signed by President Trump in May 2025 with strong backing from First Lady Melania Trump, requires platforms to delete non-consensual intimate imagery (NCII), including AI-generated deepfakes, within 48 hours of receiving a removal request. Platforms must also find and remove identical copies, provide clear notice about the removal process, and let people track their requests. The FTC published a business guidance page alongside the letter spelling all of this out. The definition of “covered platform” is broad enough to capture social media, messaging apps, video sharing, gaming platforms, and essentially any site hosting user-generated content.

Nobody wants revenge porn circulating online. But the law Congress passed is far broader than the problem it claims to solve.

The TAKE IT DOWN Act borrows its structure from the DMCA’s already-controversial notice-and-takedown system, then strips out the safeguards. Under the DMCA, a takedown request must include a statement under penalty of perjury. False claims can result in liability. There’s a counter-notice process so the person whose content was deleted can push back. TIDA has none of this. There’s no penalty for false claims, no counter-notice, no requirement that the filer prove anything before content disappears. A platform gets a complaint, has 48 hours, and deletes. That’s the entire process, and it is exactly why the Take It Down Act introduces a new censorship mechanism. The law defines a violation as involving an “identifiable individual” engaged in “sexually explicit conduct,” without defining that conduct narrowly.

More: The Take It Down Act: A Censorship Weapon Disguised as Protection

Political speech is vulnerable too. A deepfake of then-candidate Trump kissing Elon Musk’s feet went viral before TIDA took effect. There was no nudity or explicit content, but under TIDA’s language that satire could be classified as NCII and deleted. A meme recasting Vice President Kamala Harris and Governor Tim Walz as characters from Dumb and Dumber was already pulled from Meta for being sexual in nature. Anyone with a form and a grievance can file a request, and platforms facing five-figure fines per violation will delete first.

The law also applies to messaging platforms, some of which offer end-to-end encryption. If a platform can’t see message contents, it can’t scan for NCII or find “known identical copies.” Complying with the law as written means breaking encryption or scanning content before it gets encrypted. The FTC’s letter doesn’t address this, and the law doesn’t carve out encrypted communications.
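To make the conflict concrete, here is a minimal sketch of the kind of matching that “find known identical copies” presumes, assuming simple exact-match fingerprinting rather than any particular scheme the law names. It only works where the platform can read the plaintext; the encryption step below is simulated purely for illustration.

```python
# Exact-match fingerprinting over plaintext versus ciphertext.
import hashlib
import os

def fingerprint(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

known_image = b"...bytes of a reported image..."
blocklist = {fingerprint(known_image)}

# A platform that can read content can match a re-uploaded copy.
print(fingerprint(known_image) in blocklist)   # True

# Under end-to-end encryption the server stores only ciphertext
# (simulated here with a one-time XOR pad, purely for illustration),
# and the fingerprint no longer matches anything.
pad = os.urandom(len(known_image))
ciphertext = bytes(b ^ k for b, k in zip(known_image, pad))
print(fingerprint(ciphertext) in blocklist)    # False
```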
Enforcement sits entirely with the FTC. The law passed the House 409 to 2 and the Senate unanimously. Nobody voted against protecting victims of revenge porn, because that’s how the bill was sold. What Congress built is a takedown system with no safeguards against abuse, enforced by a politicized agency, applicable to encrypted communications, and designed to make platforms censor first and think later.