Reclaim The Net Feed
@reclaimthenetfeed

UK Government “Resist” Program Monitors Citizens’ Online Posts

Let’s begin with a simple question. What do you get when you cross a bloated PR department with a clipboard-wielding surveillance unit? The answer, apparently, is the British Government Communications Service (GCS). Once a benign squad of slogan-crafting, policy-promoting clipboard enthusiasts, they’ve now evolved (or perhaps mutated) into what can only be described as a cross between MI5 and a neighborhood Reddit moderator with delusions of grandeur.

Yes, your friendly local bureaucrat is now scrolling through Facebook groups, lurking in comment sections, and watching your aunt’s status update about the “new hotel down the road filling up with strangers” like it’s a scene from Homeland. All in the name of “societal cohesion,” of course.

Once upon a time, the GCS churned out posters with perky slogans like Stay Alert or Get Boosted Now, like a government-powered BuzzFeed. Now, under the updated “Resist” framework (yes, it’s actually called that), the GCS has been reprogrammed to patrol the internet for what they’re calling “high-risk narratives.” Not terrorism. Not hacking. No, according to The Telegraph, the new public enemy is your neighbor questioning things like whether the council’s sudden housing development has anything to do with the 200 migrants housed in the local hotel.

It’s all in the manual: if your neighbor posts that “certain communities are getting priority housing while local families wait years,” this, apparently, is a red flag. An ideological IED. The sort of thing that could “deepen community divisions” and “create new tensions.”

This isn’t surveillance, we’re told. It’s “risk assessment.” Just a casual read-through of what that lady from your yoga class posted about a planning application. The framework warns of “local parental associations” and “concerned citizens” forming forums.

And why the sudden urgency? The new guidance came hot on the heels of a real incident: protests outside hotels housing asylum seekers, following the sexual assault of a 14-year-old girl by Hadush Kebatu, an Ethiopian migrant. Now, instead of looking at how that tragedy happened or what policies allowed it, the government’s solution is to scan the reaction to it. What we are witnessing is the rhetorical equivalent of chucking all dissent into a bin labelled “disinformation” and slamming the lid shut.

The original Resist framework was cooked up in 2019 as a European-funded toolkit to fight actual lies. Now it equates perfectly rational community concerns about planning, safety, and who gets housed where with Russian bots and deepfakes. If you squint hard enough, everyone starts to look like a threat.

Local councils have even been drafted into the charade. New guidance urges them to follow online chatter about asylum seekers in hotels or the sudden closure of local businesses. One case study even panics over a town hall meeting where residents clapped. That’s right. Four hundred people clapped in support of someone they hadn’t properly Googled first. This, we’re told, is dangerous.

So now councils are setting up “cohesion forums” and “prebunking” schemes to manage public anger. Prebunking: like debunking, but done in advance, before you’ve even heard the thing you’re not meant to believe. It’s the equivalent of a teacher telling you not to laugh before the joke’s even landed.

Naturally, this is all being wrapped in the cosy language of protecting democracy. A government spokesman insisted, with a straight face: “We are committed to protecting people online while upholding freedom of expression.”

Because let’s be real, this isn’t about illegal content or safeguarding children. It’s about managing perception. When you start labeling ordinary gripes and suspicions as “narratives” that need “countering,” what you’re really saying is: we don’t trust the public to think for themselves.

EU Digital Omnibus Promises Fewer Cookie Banners but Expands Digital ID and Loosens Privacy Rules

After years of frustration with those ceaseless cookie pop-ups, Brussels has finally proposed a fix. Hidden deep within the European Commission’s sweeping new “Digital Omnibus” proposal is Article 88b, a technical update that could make those consent banners a relic of the past. Browsers or devices would automatically communicate whether users agree to tracking, removing the need to click “accept” on every site visit. The banners have long been an annoyance, so this sounds like a victory for usability. But the rest of the Commission’s package tells a more complicated story, one where privacy safeguards built over the past decade are at risk of being quietly rewritten.

The Digital Omnibus is not a single law but a dense collection of updates that cut across the European Union’s flagship privacy and AI frameworks. It proposes amendments to the General Data Protection Regulation (GDPR), the ePrivacy Directive, and the EU AI Act. Together, these documents have long been regarded as the legal backbone of Europe’s data protection regime.

The Commission frames the Omnibus as modernization. Officials argue that Europe needs a more “innovation-friendly” regulatory environment to compete with the United States and China in AI and digital services. Alongside these reforms comes a European Business Wallet, a kind of digital ID for companies designed to streamline online verification and compliance. The Commission also introduced a European Data Union Strategy, intended to expand access to data for AI training and analytics. It describes this as a way to “scale up the European data economy.” There is no doubt that Europe is falling behind the US and China on AI innovation, but, looked at closely, the language could easily be read as a prelude to mass data pooling with insufficient privacy safeguards.

Civil society groups responded with alarm. European Digital Rights (EDRi), a long-standing advocate for strong privacy protections, warned that the Omnibus would “allow data processing without consent for AI development and operation” by altering GDPR Article 9(2) and introducing a new Article 88c. That change would let companies process even “special categories” of data (information revealing health status, religion, political opinion, or sexuality) if they claim a “legitimate interest.” Under the current GDPR, such data is tightly protected and can only be processed under very specific conditions.

EDRi also flagged a series of other rollbacks:

- The removal of explicit consent for accessing data on devices, such as cookies and app usage, effectively folding ePrivacy protections into looser GDPR provisions.
- A narrowing of data breach reporting requirements to only “high-risk” cases, giving companies more time to report.
- Optional transparency for companies, as the proposed text amends Articles 12, 13, and 15 to allow firms to withhold certain information about how they process user data.
- A weaker definition of personal data itself, softening the foundation of the GDPR’s protections.

noyb, the privacy group founded by activist Max Schrems, was more blunt. The proposed reforms, it said, would “massively lower protections for Europeans” and reduce user rights “to almost zero.” The organization argued that the changes “create loopholes that only benefit US Big Tech companies and don’t provide any benefit for small and medium-sized European businesses.”

The European Business Wallet is a core part of the Commission’s digital identity agenda. It promises efficiency: a single, verifiable credential system for cross-border business operations. But for many privacy experts, it also signals the rise of a new layer of digital surveillance. Digital IDs increase the risk of tracking and blacklisting. If every transaction and authentication flows through a centralized identity layer, governments and corporations could gain new insight into corporate behavior and, indirectly, into the individuals behind these firms. The Commission insists the system will include “robust privacy safeguards,” yet details remain scarce.

The European Data Union Strategy has received less scrutiny so far, though it may ultimately be the most consequential piece. It aims to scale up access to European datasets for AI research, public services, and industry collaboration. If privacy rules are relaxed in favor of “legitimate interests,” the strategy could effectively grant governments and corporations a freer hand to repurpose citizen data for algorithmic training. That vision might appeal to policymakers chasing competitiveness, but it could erode the very rights that set European data governance apart.

The Commission’s proposals are still in draft form and will be debated by the European Parliament and Council before adoption. Implementation of the Business Wallet is expected over the next two years, and the automatic consent mechanism for cookies will depend on the development of new technical standards.

That automatic consent feature may well end one of the web’s great user experience blunders. Yet the broader trade-off is harder to ignore. If the price of fewer pop-ups is a system that weakens consent, dilutes transparency, and broadens state and corporate access to data, Europe could find itself undoing much of what made the GDPR a global benchmark. The Digital Omnibus offers a glimpse into a future where convenience and innovation take precedence over privacy. Whether that future aligns with Europe’s values will depend on how fiercely lawmakers and citizens are willing to defend them.
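The Omnibus text leaves Article 88b’s technical standard undefined, so any implementation detail is an assumption. As a minimal illustrative sketch only, the snippet below uses the existing Global Privacy Control signal (the Sec-GPC request header) purely as a stand-in for the kind of machine-readable, browser-communicated preference the proposal envisions; the server, port, and cookie name are hypothetical.

```typescript
// Minimal sketch, not the Commission's specification: Article 88b's standard does
// not yet exist, so the existing "Sec-GPC" header is borrowed here as a stand-in
// for a browser-communicated tracking preference. Cookie and port are hypothetical.
import { createServer } from "node:http";

const server = createServer((req, res) => {
  // Node lowercases incoming header names; a value of "1" signals an objection to tracking.
  const objectsToTracking = req.headers["sec-gpc"] === "1";

  if (!objectsToTracking) {
    // Only set an illustrative analytics cookie when no objection has been signalled.
    res.setHeader("Set-Cookie", "analytics_id=example; Path=/; HttpOnly; Secure");
  }

  res.writeHead(200, { "Content-Type": "text/plain" });
  res.end(
    objectsToTracking
      ? "Automated signal received: no tracking cookie set.\n"
      : "No objection signalled: tracking cookie set.\n"
  );
});

server.listen(3000);
```

The point is only that a browser-level signal would replace per-site banner clicks with a preference the site consults on every request.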

Europe Calls Its Censorship Law “Neutral” but Creators and Diplomats See a Clear Grip on Online Speech

The European Commission’s self-assessment of the Digital Services Act (DSA) has been met with mounting alarm from across the Atlantic and within Europe’s own civil society, as questions deepen about whether the EU is entrenching a system of state-managed online speech under the banner of “safety.”

The report, published this week, repeated the Commission’s long-standing claim that the DSA is “content agnostic” and aligned with the EU Charter of Fundamental Rights. We obtained a copy of the report for you here. Yet the review sidestepped the central issue raised repeatedly by lawmakers, academics, and digital rights advocates: that the DSA incentivizes platforms to over-remove lawful material out of fear of heavy sanctions reaching up to six percent of global turnover.

The Commission’s review offered no new legal analysis of the DSA’s compatibility with EU and international free expression protections. Instead, it proposed expanding enforcement cooperation and hinted at the creation of an EU-wide “one stop shop” system for content regulation, a move that would further concentrate authority over speech standards in Brussels. Those studying the law’s global reach point to a growing disconnect between the EU’s rhetoric and the experiences of online creators, journalists, and even governments affected by its cross-border influence.

US officials have repeatedly warned that the DSA’s vague categories of “illegal” or “harmful” content risk suppressing legitimate political or religious viewpoints. US Ambassador to the EU Andrew Puzder said Washington would be making formal submissions, noting that “no President of either party…is going to tolerate a foreign government restricting the First Amendment fundamental free speech, free expression rights of American citizens.” Secretary of State Marco Rubio earlier directed US diplomats to actively challenge the DSA in European capitals. Major technology firms such as X and Google have raised similar concerns, describing the law as a framework that could export European-style content controls worldwide.

Last month, 113 figures from journalism, academia, and law, including a former US senator and the former vice president of Yahoo Europe, signed an open letter urging the European Commission to reexamine the DSA. The letter warned that the act “constructs a pan-European censorship infrastructure with loosely defined boundaries and the potential to suppress legitimate democratic discourse,” and called for transparency about which groups helped shape the law and how they were chosen.

In its written response, the Commission argued that it “does not regulate specific speech as it is content agnostic.” That defense has drawn skepticism from free expression advocates, who note that enforcement actions already hinge on political interpretations of which categories of speech constitute “systemic risks.” The DSA’s defenders portray it as a procedural framework aimed at accountability and transparency, but in reality it embeds a continuous compliance relationship between digital platforms and EU regulators, one that critics fear could normalize government involvement in moderating public discourse.

The Algorithm Accountability Act’s Threat to Free Speech

A new push in Congress is taking shape under the banner of “algorithmic accountability,” but its real effect would be to expand the government’s reach into online speech. Senators John Curtis (R-UT) and Mark Kelly (D-AZ) have introduced the Algorithm Accountability Act, a bill that would rewrite Section 230 of the Communications Decency Act to remove liability protections from large, for-profit social media platforms whose recommendation systems are said to cause “harm.” We obtained a copy of the bill for you here.

The proposal applies to any platform with more than a million users that relies on algorithms to sort or recommend content. These companies would be required to meet a “duty of care” to prevent foreseeable bodily injury or death. If a user or family member claims an algorithm contributed to such harm, the platform could be sued, losing the legal shield that has protected online speech for nearly three decades.

Although the bill’s authors describe it as a safety measure, the structure of the law would inevitably pressure platforms to suppress or downrank lawful content that might later be portrayed as dangerous. Most major social networks already rely heavily on automated recommendation systems to organize and personalize information. Exposing them to lawsuits for what those systems display invites broad, quiet censorship under the guise of caution.

The legislation carves out narrow exemptions for feeds shown in chronological order or for content users search for directly. It also claims to bar enforcement based on political viewpoint. However, few companies could risk leaving controversial or politically charged material in recommendation streams once legal liability becomes a possibility. The easiest path would be to remove or hide it preemptively.

Section 230 was designed in 1996 to protect platforms from being held responsible for what others say online. That protection made it possible for open forums, social networks, and comment sections to exist at all. Weakening it for algorithmic recommendations would expose the entire system of user-driven communication to endless litigation.

Senator Curtis has linked his support for the bill to the killing of conservative activist Charlie Kirk in Utah, asserting that “online platforms likely played a major role in radicalizing Kirk’s alleged killer,” and calling the process “driven not by ideology alone but also by algorithms, code written to keep us engaged and enraged.” He and Senator Kelly introduced the proposal during a CNN town hall, portraying it as a bipartisan effort to reduce political division.

***

California already considered this approach when lawmakers advanced SB 771, a measure that tried to impose liability for algorithmic distribution of user speech. After weeks of public pressure and growing concern over how the bill would function in practice, Governor Gavin Newsom ultimately vetoed it. His decision signaled that even within a state that frequently experiments with heavy regulatory models, there was recognition that attaching legal penalties to algorithmic delivery of third-party content risked crossing constitutional lines.

In his veto message, Newsom raised concerns about the bill’s reach and the practical consequences of transforming routine algorithmic functions into potential civil rights violations. Treating automated recommendations as actions legally distinct from the underlying speech would have opened the door to extensive litigation over content that remains fully lawful. Newsom’s rejection showed that even government officials who express strong interest in regulating technology can recognize when a proposal edges too close to policing expression under another name.

Lindsey Graham Falls Prey to the Surveillance Monster He Championed

Some people find religion after a brush with mortality. Lindsey Graham found the Fourth Amendment after a brush with Jack Smith. The senator from South Carolina has spent the past two decades helping build the modern surveillance state, and now he’s furious that it turned its cold electronic eye on him.

Federal prosecutors secretly subpoenaed his phone records without his knowledge as part of Special Counsel Smith’s investigation into President Donald Trump’s alleged role in the events of January 6. Graham says it’s an outrage, a scandal. He’s demanding the impeachment of the federal judge who approved it and threatening to sue someone, though he hasn’t worked out who, for “tens of millions of dollars.” It’s the kind of melodrama that comes easily to a man who’s never been shy about using the power of the state when it suits him.

This story started last month when FBI Director Kash Patel revealed that phone records of eight Republican senators, including Graham’s, were pulled as part of Smith’s “Arctic Frost” probe. The data covered January 4 to 7, 2021, and came with gag orders preventing telecom companies from telling the targets they were under the microscope. “They spied on my phone records as a senator and a private citizen,” Graham complained on Fox News. “I’m sick of it.” He’s not wrong to be angry. But there’s something deeply comic about Graham discovering his inner civil libertarian only after the dragnet landed on his number.

Graham has been one of the most reliable defenders of the surveillance architecture that is now bothering him. In 2001, as a House member, he voted for the Patriot Act, the law that kicked open the door for mass data collection. When Edward Snowden revealed that the NSA was collecting Americans’ phone records by the millions, Graham didn’t seem alarmed. “I’m a Verizon customer. It doesn’t bother me one bit for the NSA to have my phone number,” he famously said. “I’m glad the NSA is trying to find out what the terrorists are up to overseas and in our country.” He had already voted, in 2008, to codify expansive surveillance powers into Section 702 of the Foreign Intelligence Surveillance Act, and he has backed every major reauthorization since.

For most of his career, Graham treated Section 702 like a sacred text. Whenever colleagues raised the idea of tightening controls or adding warrants for Americans’ data, he waved them off. “We can’t handcuff our intelligence community every time someone gets nervous about civil liberties,” he said in 2017, as if privacy itself were a form of weakness. That same year, he supported a bill to make Section 702 permanent, with no sunset clauses or congressional review, a forever license to snoop. He brushed off critics: “You can’t live in a world where terrorists are trying to attack the country without some way to find out what they’re up to.” During the 2018 FISA Amendments reauthorization debate, Graham told colleagues, “You need to have the tools to find the terrorists before they hit us again.” He also took a particular interest in undermining encryption, the very technology that keeps ordinary citizens’ communications secure from government eyes.

To those still wary about domestic abuse, he offered reassurance: “This is about foreign terrorists, not American citizens. It’s about stopping the next attack, not listening to your conversations.” Graham was already ignoring the fact that Section 702 was increasingly being used on American citizens, including members of Congress and judges.

By 2020, when the powers came up for renewal once more, Graham was chairing the Judiciary Committee and still treating oversight like a nuisance. He blocked amendments that would have added warrant requirements and reminded everyone that, in his view, the stakes were existential. “Our intelligence professionals are the last line of defense between us and the next 9/11,” he said. “They need Section 702.” That was Lindsey Graham, who never imagined his own records could be pulled under those same authorities.

Now, after years of helping build the panopticon, Graham is peering up from the inside of it. But his statements so far suggest a narrower goal: stopping his data from being collected, not anyone else’s. He’s still perfectly fine with government surveillance, just not when the target happens to have a Senate office.

For years, Graham’s position on privacy was simple: if you’re not doing anything wrong, you’ve got nothing to worry about. It’s only now, after being reminded that “wrong” is defined by whoever’s holding the subpoena, that he’s learned what the rest of the country figured out long ago. When the machinery of surveillance turns on its makers, it rarely asks for permission.