Reclaim The Net Feed
@reclaimthenetfeed

Substack Introduces ID Checks to Comply with UK Censorship Law

By now, you’ve probably realized the internet is being slowly fitted into a digital checkpoint. Everything is being scrubbed down, sanitized, and locked behind a digital turnstile with a flashing sign that says: Show us your ID.

Substack, that cozy digital home where newsletter authors rant, muse, and argue about everything from politics to fan fiction of 19th-century philosophers, is the latest to be roped into the bureaucratic puppet show known as the UK’s Online Safety Act. And the British government has decided that if you’re reading a mildly spicy newsletter, you must first present identification. No, really. To access some of the platform’s content, you may soon have to upload a selfie and a government-issued ID.

What this means for readers in the UK is simple: prepare to be interrupted. You’re sitting down to read your favorite newsletter. Maybe it’s political commentary, maybe it’s a writer who occasionally uses words like “orgasmic” while referring to cake. Either way, you click. And boom. Content blurred, comment section blocked, and your feed now behind a velvet rope manned by an algorithm with a clipboard.

Here is the full list of types of content impacted:

Sexually explicit or pornographic content
Content that encourages, promotes, or instructs on:
  a. Suicide or self-harm
  b. Eating disorders or disordered eating behaviours
  c. Dangerous physical challenges
  d. Misuse of harmful substances (ingesting, inhaling, injecting, or otherwise self-administering)
Bullying or harassment
Hate content targeting people based on:
  a. Race
  b. Religion
  c. Sex
  d. Sexual orientation
  e. Disability
  f. Gender reassignment
Violent or graphic material that:
  a. Promotes or instructs on serious violence, or
  b. Depicts real or realistic acts of serious violence or injury to people, animals, or fictional creatures
  c. Polarized recounting of mass casualty events

Substack says this all comes down to the UK’s new rules. Anything that might be “explicit” or “potentially harmful” will now require you to confirm your age. Some lucky folks will have this pulled from their payment info, but the rest? Well, say cheese.

You now need to flash ID not just to read some articles, but to comment on them. That’s right. Clicking “Reply” in a Notes thread could soon be treated with the same suspicion as trying to buy a bottle of tequila.

Substack, to its credit, isn’t thrilled about this. “These laws are not necessarily effective at achieving their stated aims,” the company said recently, which is a diplomatic way of saying, “This is pointless, dangerous, and won’t even work.” They’re playing the compliance game while muttering under their breath.

What’s fascinating here is that Substack isn’t some shadowy porn grotto or terrorist cesspool. It’s a publishing platform. Most of the time, the most harmful thing you’ll find is a writer stretching a metaphor too far. But because anyone can write about anything, and because the UK’s new laws are written with the precision of a toddler describing a dream, everything is now suspect. And the price of that suspicion is privacy.

UK Ofcom Pushes Rules Targeting “Misogynistic” Content, Prompting (Even More) Free Speech Concerns

Britain’s communications regulator, Ofcom, has unveiled a new framework urging social media and technology companies to censor so-called “misogynistic” content as part of its A Safer Life Online for Women and Girls campaign. The initiative, framed as an effort to protect women from online abuse, further weakens the distinction between “harmful” conduct and lawful expression, a tension Ofcom itself acknowledges in its own documentation.

The regulator’s new guidance encourages platforms to adopt a wide range of “safety” measures, many of which would directly influence what users can post, see, and share. These include inserting prompts that nudge users to “reconsider” certain comments, suppressing “misogynistic” material in recommendation feeds and search results, temporarily suspending users who post repeated “abuse,” and de-monetizing content flagged under this category. Moderators would also receive special training on “gender-based harms,” while posting rates could be throttled to slow the spread of unwanted speech.

Ofcom’s document also endorses the use of automated scanning systems like “hash-matching” to locate and delete non-consensual intimate imagery (a simplified sketch of how hash-matching works appears below). While intended to prevent the circulation of explicit photos, such systems typically involve the mass analysis of user uploads and can wrongly flag legitimate material. Additional proposals include “trusted flagger” partnerships with NGOs, identity verification options, and algorithmic “friction” mechanisms: small design barriers meant to deter impulsive posting.

Some of the ideas, such as warning prompts and educational links, are voluntary. Yet several major advocacy groups, including Refuge and Internet Matters, are pressing for the government to make them binding on all platforms. If adopted wholesale, these measures would effectively place Ofcom in a position to oversee the policing of legal speech, with tech firms acting as its enforcement arm.

In a letter announcing the guidance, Ofcom’s Chief Executive Melanie Dawes declared that “the digital world is not serving women and girls the way it should,” describing online misogyny and non-consensual deepfakes as pervasive problems that justify immediate “industry-wide action.” She stated that Ofcom would “follow up to understand how you are applying this Guidance” and publish a progress report in 2027.

Notably, Ofcom’s own statement concedes that the new measures reach into the realm of non-criminal content and may interfere with users’ “freedom of expression and privacy rights.” This admission confirms what free speech advocates have long warned: that the push for “online safety” risks converting private companies into instruments of state censorship.

The strategy depends on automated moderation tools and subjective definitions of “harm.” These mechanisms, once in place, rarely stay confined to their original purpose. They create a technical and bureaucratic infrastructure capable of filtering lawful opinions, narrowing public debate under the banner of safety, and quietly redefining what may be said online in the United Kingdom.
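Hash-matching, in its simplest form, compares a fingerprint of each upload against a list of fingerprints of known prohibited images. The sketch below is a minimal illustration using an exact cryptographic hash; real deployments such as Microsoft's PhotoDNA use perceptual hashes that tolerate re-encoding and cropping, which is also where false matches can creep in. The function names and the placeholder blocklist entry are invented for illustration and do not come from Ofcom's guidance.

```python
import hashlib

# Illustrative only: exact hash-matching against a blocklist of known images.
# The blocklist entry is a placeholder, not a real digest from any database.
KNOWN_IMAGE_HASHES = {
    "<placeholder sha-256 digest of a known prohibited image>",
}

def fingerprint(data: bytes) -> str:
    """Compute a SHA-256 fingerprint of an uploaded file's raw bytes."""
    return hashlib.sha256(data).hexdigest()

def is_flagged(upload: bytes) -> bool:
    """Return True if the upload's fingerprint appears on the blocklist.
    Every upload has to be fingerprinted before it can be checked, which is
    why critics describe hash-matching as mass analysis of user content."""
    return fingerprint(upload) in KNOWN_IMAGE_HASHES

# Example: hashing and checking an arbitrary upload.
print(is_flagged(b"example image bytes"))  # False for this placeholder input
```

Perceptual variants swap the exact hash for a similarity-tolerant one plus a distance threshold, which is what makes them useful against minor edits and, equally, what makes wrongly flagged legitimate material possible.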

EU Parliament Votes for Mandatory Digital ID and Age Verification, Threatening Online Privacy

The European Parliament has voted to push the European Union closer to a mandatory digital identification system for online activity, approving a non-binding resolution that endorses EU-wide age verification rules for social media, video platforms, and AI chatbots. Though presented as a child protection measure, the text strongly promotes the infrastructure for universal digital ID, including the planned EU Digital Identity Wallet and an age verification app being developed by the European Commission.

Under the proposal, every user would have to re-identify themselves at least once every three months to continue using major platforms. Children under 13 would be banned entirely, and teenagers between 13 and 16 would require parental approval to participate online (a minimal sketch of what these rules amount to in practice appears below).

More: The Digital ID and Online Age Verification Agenda

The Parliament also called for prohibitions on design features it describes as addictive or manipulative, such as gambling-like rewards, engagement-based algorithms, and paid promotions by minors. Companies that fail to comply could be barred from operating within the EU.

The motion, which passed with 483 votes in favor, 92 against, and 86 abstentions, goes further by recommending that executives be held personally accountable for failures to meet digital compliance standards. It also urges immediate action to tackle “deepfakes, companionship chatbots, AI agents and AI-powered nudity apps (that create non-consensual manipulated images).”

Supporters of the plan link it to growing anxiety about children’s exposure to social media, yet the system it envisions would amount to a vast digital checkpoint network where every online interaction could be tied back to a verified identity. By binding access to identification, the EU’s digital wallet and age verification tools would dismantle the anonymity that once defined the internet’s open structure. Users would need to prove who they are not just once, but continuously, through a state-sanctioned mechanism that records and authenticates their presence online.

During the parliamentary debate, Danish lawmaker Christel Schaldemose, who led the proposal, described the current internet environment as an uncontrolled experiment. “We are in the middle of an experiment, an experiment where American and Chinese tech giants have unlimited access to the attention of our children and young people for hours every single day almost entirely without oversight,” she told Parliament, naming Elon Musk, Mark Zuckerberg, and “China’s Communist Party and their tech proxies at TikTok” as participants in that experiment. She added, “With this report, we finally draw a line. We are saying clearly to the platforms, ‘Your services are not designed for children, and the experiment ends here.’”

The language of protection is persuasive, but the underlying mechanism represents a profound change in the structure of online life. It normalizes constant identity checks for everyone, gradually eliminating private browsing and anonymous participation. Once linked to digital IDs, a person’s online activity could become inseparable from their legal identity, building a system where speech and access are conditional on verified status. Framed as a safety initiative, this evolution risks eroding two of the internet’s founding principles: privacy and free expression.

Age verification tied to digital ID would make it nearly impossible to speak, explore, or organize online without leaving a permanent trace. The proposal may mark the start of an internet where every login becomes a checkpoint, every user a data point, and privacy a privilege instead of a right.
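To make the mechanics concrete, here is a minimal sketch of the access rules the resolution describes: re-verification at least every three months, a ban under 13, and parental approval from 13 to 16. The function and field names are invented for illustration; this is not the Commission's age verification app or any real EU API, and the 90-day interval is an assumption standing in for "at least once every three months."

```python
from datetime import date, timedelta

# Assumed stand-in for "at least once every three months".
REVERIFY_INTERVAL = timedelta(days=90)

def may_access(age: int, has_parental_consent: bool,
               last_verified: date, today: date) -> bool:
    """Return True only if a user passes the age and re-verification rules
    described in the resolution (hypothetical implementation)."""
    if today - last_verified > REVERIFY_INTERVAL:
        return False                  # identity check expired; must re-verify
    if age < 13:
        return False                  # banned entirely under the proposal
    if age < 16:
        return has_parental_consent   # 13-16 need parental approval
    return True

# Example: a 14-year-old verified 100 days ago is blocked until re-verified,
# even with parental consent.
print(may_access(14, True, date(2025, 1, 1), date(2025, 4, 11)))  # False
```

The point the article makes follows directly from the structure: access is never granted once and for all, it is re-earned on a clock, against a verified identity.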

EU Council Approves New “Chat Control” Mandate Pushing Mass Surveillance

European governments have taken another step toward reviving the EU’s controversial Chat Control agenda, approving a new negotiating mandate for the Child Sexual Abuse Regulation in a closed session of the Council of the European Union on November 26. The measure, presented as a tool for child protection, is once again drawing heavy criticism for its surveillance implications and the way it reshapes private digital communication in Europe.

Unlike earlier drafts, this version drops the explicit obligation for companies to scan all private messages but quietly introduces what opponents describe as an indirect system of pressure. It rewards or penalizes online services depending on whether they agree to carry out “voluntary” scanning, effectively making intrusive monitoring a business expectation rather than a legal requirement.

Former MEP Patrick Breyer, a long-standing defender of digital freedom and one of the most vocal opponents of the plan, said the deal “paves the way for a permanent infrastructure of mass surveillance.” According to him, the Council’s text replaces legal compulsion with financial and regulatory incentives that push major US technology firms toward indiscriminate scanning. He warned that the framework also brings “anonymity-breaking age checks” that will turn ordinary online use into an exercise in identity verification.

The new proposal, brokered largely through Danish mediation, comes months after the original “Chat Control 1.0” regulation appeared to have been shelved following widespread backlash. It reinstates many of the same principles, requiring providers to assess their potential “risk” for child abuse content and to apply “mitigation measures” approved by authorities. In practice, that could mean pressure to install scanning tools that probe both encrypted and unencrypted communications.

Czech MEP Markéta Gregorová called the Council’s position “a disappointment…Chat Control…opens the way to blanket scanning of our messages.” Similar objections emerged across Europe. In the Netherlands, members of parliament forced their government to vote against the plan, warning that it combines “mandatory age verification” with a “voluntary obligation” scheme that could penalize any company refusing to adopt invasive surveillance methods. Poland and the Czech Republic also voted against, and Italy abstained. Former Dutch MEP Rob Roos accused Brussels of operating “behind closed doors,” warning that “Europe risks sliding into digital authoritarianism.”

Beyond parliamentarians, independent voices such as Daniel Vávra, David Heinemeier Hansson, and privacy-focused company Mullvad have spoken out against the Council’s position, calling it a direct threat to private communication online.

Despite the removal of the word “mandatory,” the structure of the new deal appears to preserve mass scanning in practice. Breyer described it as a “Trojan Horse,” arguing that by calling the process “voluntary,” EU governments have shifted the burden of surveillance to tech companies themselves. The Council’s mandate introduces three central dangers that remain largely unacknowledged in the public debate.

First, so-called “voluntary scanning” turns mass surveillance into standard operating procedure. The proposal extends the earlier temporary regulation that allowed service providers to scan user messages and images without warrants. Authorities like Germany’s Federal Criminal Police Office have reported that roughly half the alerts from such systems are baseless, often involving completely legal content flagged by flawed algorithms (an illustrative calculation of why that happens appears after this article). Breyer said these systems leak “tens of thousands of completely legal, private chats” to law enforcement every year.

Second, the plan effectively erases anonymous communication. To meet the new requirement to “reliably identify minors,” providers will have to implement universal age checks. This likely means ID verification or face scans before accessing even basic services such as email or messaging apps. For journalists, activists, and anyone who depends on anonymity for protection, this system could make private speech functionally impossible. Technical experts have repeatedly warned that age estimation “cannot be performed in a privacy-preserving way” and carries “a disproportionate risk of serious privacy violation and discrimination.”

Third, it risks digitally isolating young people. Under the Council’s framework, users under 17 could be blocked from many platforms, including chat-enabled games and messaging services, unless they pass strict identity verification. Breyer called this idea “pedagogical nonsense,” arguing that it excludes teenagers instead of helping them develop safe online habits.

Member states remain divided: the Netherlands, Poland, and the Czech Republic rejected the text, while Italy abstained. Negotiations between the European Parliament and the Council are expected to begin soon, aiming for a final version before April 2026.

Breyer warned that the apparent compromise is no real retreat from surveillance. “The headlines are misleading: Chat Control is not dead, it is just being privatized,” he said. “We are facing a future where you need an ID card to send a message, and where foreign black-box AI decides if your private photos are suspicious. This is not a victory for privacy; it is a disaster waiting to happen.”
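The reason even fairly accurate scanners generate so many baseless alerts is a base-rate effect: when almost everything scanned is innocent, the small fraction of innocent items that get misflagged can rival or exceed the genuine detections. The numbers below are assumptions chosen purely to illustrate the arithmetic; they are not figures from this article or from the Federal Criminal Police Office.

```python
# Illustrative base-rate arithmetic with assumed, hypothetical parameters.
prevalence = 1e-5           # assumed fraction of scanned items that are actually illegal
true_positive_rate = 0.9    # assumed chance an illegal item gets flagged
false_positive_rate = 1e-5  # assumed chance an innocent item gets flagged anyway

# Expected composition of the alert stream per item scanned.
flagged_true = prevalence * true_positive_rate
flagged_false = (1 - prevalence) * false_positive_rate

share_baseless = flagged_false / (flagged_true + flagged_false)
print(f"Share of alerts that are baseless: {share_baseless:.0%}")  # ~53% under these assumptions
```

Under these toy numbers roughly half of all alerts point at perfectly legal content, which is the pattern the German figures describe; scale that across billions of messages and the "tens of thousands of completely legal, private chats" Breyer refers to becomes unsurprising.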

Chat Control 2.0: EU Moves Toward Ending Private Communication

Between the coffee breaks and the diplomatic niceties of Brussels bureaucracy, a quiet dystopian revolution might be taking place. On November 26, a roomful of unelected officials could nod through one of the most consequential surveillance laws in modern European history, without ever having to face the public.

The plan, politely titled EU Moves to End Private Messaging with Chat Control 2.0, sits on the agenda of the Committee of Permanent Representatives, or Coreper, a club of national ambassadors whose job is to prepare legislation for the European Council. This Wednesday, they may “prepare” it straight into existence.

According to MEP Martin Sonneborn, Coreper’s diplomats could be ready to endorse the European Commission’s digital surveillance project in secret. It was already due for approval a week earlier before mysteriously vanishing from the schedule. Now it’s back, with privacy advocates watching like hawks who suspect the farmer’s got a shotgun.

The Commission calls Chat Control 2.0 a child-protection measure. The branding suggests moral urgency; the text suggests mass surveillance. The proposal would let governments compel messaging services such as WhatsApp or Signal to scan users’ messages before they’re sent (a schematic sketch of that scan-before-send architecture appears after this article). Officials insist that the newest version removes mandatory scanning, which is a bit like saying a loaded gun is safer because you haven’t pulled the trigger yet.

A President’s Pet Project

Commission President Ursula von der Leyen has treated this initiative as a centerpiece of her digital policy, though “digital policy” increasingly looks like a euphemism for “monitoring architecture.” Her plan would effectively turn private chat services into data-mining contractors for the state.

Sonneborn put it bluntly: the EU is on the verge of creating “a dedicated spying authority.” It’s hard to call that an exaggeration. The European Commission, which has spent years refusing to release von der Leyen’s text messages about vaccine contracts, now wants the legal authority to inspect yours. Transparency, it seems, is a privilege reserved for the ruled.

Chat Control does not exist in a vacuum. It fits neatly beside the Digital Services Act, the digital identity proposals, and other innovations that make online behavior traceable from cradle to grave. Together they form a system where anonymity is a bug, not a feature.

The justification is the same every time: children. The issue is real; the logic is not. Dismantling encryption to catch predators is like removing everyone’s front doors to stop burglary. The real failures in child protection, including investigative neglect, resource shortages, and bureaucratic inertia, are too boring to legislate around, so instead, Europe aims to fix the problem by surveilling the population.

Germany once drew a line. Justice Minister Stefanie Hubig called scanning innocent chats “an absolute taboo in a constitutional state.” That was then. Sonneborn now warns that Berlin may be preparing to let the taboo quietly dissolve, a move that would clear the way for the rest of the Council to fall in line. Once Germany folds, the rest of Europe will follow the usual pattern: moral speeches, procedural approval, and a quick December adoption while everyone’s distracted by holiday markets.

If Coreper signs off this Wednesday, the EU will have taken the first formal step toward making private digital communication a thing of the past. No open debate, no meaningful oversight, just a discreet, administrative erasure of privacy. Future historians will not find a dramatic announcement for the day Europe normalized message scanning. They’ll find a line in a meeting summary, approved “without discussion.” And that, in Brussels, is what democracy now looks like.
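Scanning messages "before they're sent" is what is usually called client-side scanning: the check runs on the sender's device before end-to-end encryption is applied, so the encryption is never technically broken even though every message is inspected. The sketch below is a schematic of that architecture only; the function names are invented, the "encryption" is a placeholder rather than real cryptography, and nothing here reflects how Signal or WhatsApp are actually built.

```python
import hashlib

# Placeholder blocklist; not a real digest from any database.
FLAGGED_HASHES = {"<placeholder digest of known prohibited content>"}

def looks_suspicious(plaintext: str) -> bool:
    """Stand-in for an on-device classifier or hash match against a blocklist."""
    return hashlib.sha256(plaintext.encode()).hexdigest() in FLAGGED_HASHES

def placeholder_encrypt(plaintext: str) -> bytes:
    """Stand-in for end-to-end encryption (NOT real cryptography)."""
    return plaintext.encode()[::-1]

def report_to_authority(plaintext: str) -> None:
    """Stand-in for forwarding a flagged message for outside review."""
    print("flagged for review:", plaintext)

def send_message(plaintext: str) -> bytes:
    # The scan happens here, on the user's own device, before encryption,
    # so "end-to-end encryption" remains intact while the content is inspected.
    if looks_suspicious(plaintext):
        report_to_authority(plaintext)  # plaintext leaves the private channel
    return placeholder_encrypt(plaintext)

ciphertext = send_message("an ordinary private message")
```

The architectural point critics make is visible in send_message: whether the scan is labeled "mandatory" or "voluntary," the plaintext is examined, and potentially exported, before encryption ever happens.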