Reclaim The Net Feed

Google and Substack Warn Britain Is Building a Censorship Machine

Major American companies and commentators, including Google and Substack CEO Chris Best, have condemned the United Kingdom's censorship law, the Online Safety Act (OSA), describing it as a measure that risks censoring lawful speech while failing to make the internet safer for children. They argue that the law normalizes digital surveillance, restricts open debate, and complicates how global platforms operate in the UK.

Their objections surfaced through The Telegraph, which published essays from Best and from Heritage Foundation researchers John Peluso and Miles Pollard, alongside new reporting on Google's formal response to an Ofcom consultation. That consultation, focused on how tech firms should prevent "potentially illegal" material from spreading online, closed in October, with Ofcom releasing the submissions in December. Google's filing accused the regulator of promoting rules that would "undermine users' rights to freedom of expression" by encouraging pre-emptive content suppression. Ofcom rejected this view, insisting that "nothing in our proposals would require sites and apps to take down legal content." Yet Google was hardly alone in raising alarms: other American companies and trade groups submitted responses voicing comparable fears about the Act's scope and implications.

Chris Best wrote that his company initially set out to comply with the new law but quickly discovered it to be far more intrusive than expected. "What I've learned is that, in practice, it pushes toward something much darker: a system of mass political censorship unlike anywhere else in the western world," he said. Best describes how the OSA effectively forces platforms to classify and filter speech on a constant basis, anticipating what regulators might later deem harmful. Compliance, he explained, requires "armies of human moderators or AI" to scan journalism, commentary, and even satire for potential risk. The process, he continued, doesn't simply remove content but "gates it" behind identity checks or age-verification hurdles that often involve facial scans or ID uploads. "These measures don't technically block the content," Best said, "but they gate it behind steps that prove a hassle at best, and an invasion of privacy at worst." He warned that this structure discourages readers, reduces visibility for writers, and weakens open cultural exchange.

Best, who emphasized Substack's commitment to press freedom, said the OSA misdiagnoses the problem of online harm by targeting speech rather than prosecuting actual abuse or criminal behavior. "This is how you end up with 'papers, please' for the internet," he wrote, warning that the law could become a model replicated by other governments.

In its submission, Google contended that Ofcom's interpretation of the OSA would "stifle free speech" by imposing vague obligations on platforms to police "potentially illegal" posts. It cautioned that these measures would "necessarily result in legal content being made less likely to be encountered by users," extending the law's reach beyond what lawmakers intended when it passed in 2023.

Ofcom, meanwhile, justified its approach by pointing to incidents of online unrest, such as the posts that spread following the Southport killings and the riots and protests that came after.
The regulator argued that recommender systems should withhold questionable material until moderators review it, to prevent harmful content from going viral during crises. Yet this example has since become contentious. Following those events, authorities made several arrests, including that of Lucy Connolly, under laws critics say were applied in an excessively heavy-handed way, drawing international condemnation. The use of the Southport unrest to defend tighter speech controls has therefore raised further questions about how the government interprets and enforces the boundaries of "illegal" online communication.

The OSA's enforcement has created new friction between the UK and the US. Negotiations over a £31 billion technology partnership were recently frozen after Washington voiced concern about Britain's direction on online regulation. US Vice President JD Vance has accused the UK of following a "dark path" on free speech, while Elon Musk's X platform declared that "free speech will suffer" under the new rules.

An Ofcom spokesperson reiterated that its goal is to protect both safety and liberty online, stating: "There is nothing in our proposals that would require sites and apps to take down legal content. The Online Safety Act requires platforms to have particular regard to the importance of protecting users' right to freedom of expression."

However, this line of defense sidesteps the real issue. While Ofcom insists that legal material will not be removed outright, the regulator's approach effectively requires platforms to limit how widely such content can spread. By obliging companies to restrict "potentially illegal" posts before any clear determination of their status, the policy would lead to broad suppression of lawful speech. The UK's legal landscape already (and unfortunately) criminalizes categories of expression that would fall under constitutional protection in the United States. As a result, any automated or large-scale moderation system built to comply with the OSA may inevitably block lawful content in order to ensure that no illegal material slips through. The distinction between taking content down and throttling its visibility is, in reality, far narrower than the regulator is pretending.
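To make the mechanics concrete, here is a minimal sketch of the kind of "hold for review" step Ofcom's proposal describes for recommender systems. The class names, risk scores, and threshold are illustrative assumptions, not any platform's actual pipeline; the point is that withheld posts are never deleted, merely rendered invisible until someone clears them.

```python
# Illustrative sketch only: a hypothetical "hold for review" step in a
# recommender pipeline. Names, scores, and the threshold are assumptions
# for explanation, not any real platform's implementation.

from dataclasses import dataclass


@dataclass
class Post:
    post_id: str
    text: str
    risk_score: float       # output of an upstream classifier, 0.0-1.0
    reviewed: bool = False  # set True once a human moderator clears it


HOLD_THRESHOLD = 0.7  # assumed cut-off; a real system would tune this


def rank_for_feed(candidates: list[Post]) -> tuple[list[Post], list[Post]]:
    """Split candidates into posts eligible for recommendation and posts
    withheld pending human review. Withheld posts are not taken down;
    they simply never enter the recommender, which is the throttling the
    article argues is hard to distinguish from removal."""
    eligible, held = [], []
    for post in candidates:
        if post.reviewed or post.risk_score < HOLD_THRESHOLD:
            eligible.append(post)
        else:
            held.append(post)  # queued for moderators; invisible meanwhile
    return eligible, held


if __name__ == "__main__":
    feed, queue = rank_for_feed([
        Post("a", "benign commentary", risk_score=0.1),
        Post("b", "satire a classifier mislabels", risk_score=0.8),
    ])
    print([p.post_id for p in feed])   # ['a']
    print([p.post_id for p in queue])  # ['b'] -- lawful speech, suppressed
```

Even in this toy version, the design choice is visible: any false positive from the classifier silently keeps lawful content out of circulation, with no takedown notice ever issued.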

Bipartisan Bill Seeks to Repeal Section 230, Endangering Online Free Speech

A proposal in the US Senate titled the Sunset Section 230 Act seeks to dismantle one of the core protections that has shaped the modern internet. Put forward by Senator Lindsey Graham with bipartisan backing from Senators Dick Durbin, Josh Hawley, Amy Klobuchar, and Richard Blumenthal, the bill would repeal Section 230 of the Communications Act of 1934, a provision that has, for nearly thirty years, shielded online platforms from liability for the actions of their users. We obtained a copy of the bill for you here.

Under the plan, Section 230 would be fully repealed two years after the bill's passage. This short transition period would force websites, social platforms, and hosting services to rethink how they handle public interaction. The current statute stops courts from holding online platforms legally responsible as the publishers of material shared by their users. Its protection has been instrumental in allowing everything from local discussion boards to global platforms such as YouTube and Wikipedia to operate without being sued over every user comment or upload. The legislation's text removes Section 230 entirely and makes "conforming amendments" across multiple federal laws.

"I am extremely pleased that there is such wide and deep bipartisan support for repealing Section 230, which protects social media companies from being sued by the people whose lives they destroy. Giant social media platforms are unregulated, immune from lawsuits, and are making billions of dollars in advertising revenue off some of the most unsavory content and criminal activity imaginable," said Senator Graham. "It is past time to allow those who have been harmed by these behemoths to have their day in court."

Senator Graham's statement reflects growing political hostility toward Section 230, but the premise behind his argument collapses under close examination of how the law actually functions. The idea that repealing Section 230 would meaningfully hold large tech platforms accountable misunderstands both the legal structure of the internet and the purpose of the statute. Section 230 does not grant "immunity" in the sense that companies cannot be sued for their own actions. Platforms can be, and routinely are, sued for violating federal criminal law, intellectual property rights, or contractual obligations. What the statute prevents is liability for speech created by a platform's users. Without that safeguard, every website hosting user comments, reviews, or uploads would risk litigation for each post. A total repeal would not just affect Facebook or YouTube; it would reach tiny community forums, news sites with comment sections, local businesses that host user feedback, and nonprofit educational networks.

The senator's claim that platforms are "unregulated" also misses the regulatory reality. These companies already operate under extensive regimes: privacy laws, consumer protection statutes, antitrust oversight, and criminal prohibitions. Section 230 does not exempt them from any of these. Instead, it ensures that the legal responsibility for online speech remains with the speaker, an essential distinction for protecting open communication. The notion that repealing the law would "allow those who have been harmed to have their day in court" ignores the consequence that every user would become a potential source of liability.
Faced with such risk, platforms would have no practical choice but to prescreen or block vast categories of lawful expression to avoid any potential lawsuits. The outcome would not be a fairer digital environment but a heavily censored one, where only the most risk-averse, well-funded entities could afford to host public dialogue.

From a free speech perspective, Section 230 is the legal backbone that allows a diverse internet to exist. It protects the capacity of ordinary people to speak, organize, and publish online without requiring corporate pre-approval. Dismantling it in the name of punishing "behemoths" would primarily hurt small and mid-sized sites that lack armies of lawyers. Rather than empowering individuals, a repeal would consolidate control of online discourse in the hands of a few large companies capable of absorbing the new legal exposure.

Senator Marsha Blackburn's claim that Big Tech uses Section 230 "to censor conservative voices" misunderstands both the law and the First Amendment. Section 230 does not require or authorize any specific content decision. It simply prevents lawsuits over moderation choices, whether those affect conservative, liberal, or apolitical content. Even though major social media platforms censored conservative voices over the last decade, repeal of Section 230 would not create political neutrality; it would compel platforms to err on the side of suppression, further constraining speech across the spectrum.

Senator Blumenthal's suggestion that companies "hide behind Section 230 to dodge accountability" overlooks existing accountability mechanisms. Platforms can already be sued for their own misconduct, such as defective design, deceptive practices, or failure to comply with federal reporting obligations. Section 230 only blocks suits that attempt to treat a platform as the publisher of another person's speech, a boundary drawn to preserve open dialogue while still permitting enforcement of genuine legal violations.

Graham went further on Fox News: "These platforms are doing enormous damage to our country, pushing people to suicide and selling fentanyl-laced pills and tablets. It is long past time to open up the American courtroom to those who have been harmed by this out-of-control system, and to finally have regulations and accountability for the largest businesses in the history of the country. The courthouse doors are closed, and there is no meaningful regulation."

Senator Graham's argument combines real public concerns with a deeply mistaken premise about how the internet and US law operate. The harms he lists (suicide, drug trafficking, and unregulated digital power) are serious, but none of them exist because of Section 230. The law he seeks to repeal is not what "closes the courthouse doors." It is what keeps those doors from being used to silence lawful speech or destroy the open nature of online communication.

First, the claim that "these platforms are doing enormous damage" rests on conflating correlation with causation. While social media may amplify allegedly "harmful" behavior, the existence of such content is not created by Section 230. The statute does not encourage or condone drug sales, harassment, or suicide-related material; it merely allocates legal responsibility correctly. Those who sell drugs or post illegal content are still fully liable under state and federal law. Section 230 does not obstruct prosecution or civil claims against the individuals and organizations that commit these crimes.
Second, the idea that repealing Section 230 would "open up the American courtroom" ignores what that would mean in practice. Courts would indeed become more accessible to plaintiffs suing any website, app, or forum where another person's illegal act occurred. A grieving parent, for instance, could sue not only the perpetrator but also the hosting service, the software developer, or even a search engine that indexed a link. Each suit would require platforms to defend themselves against the speech of third parties, regardless of whether they had any knowledge of or control over the content. The result would be a legal system flooded with claims that punish the medium rather than the offender.

Third, the suggestion that "there is no meaningful regulation" is inaccurate. Major platforms are already bound by extensive federal and state oversight: data privacy laws, advertising regulations, antitrust enforcement, securities disclosure rules, and criminal statutes concerning child exploitation and narcotics. Federal agencies, including the DEA and FBI, routinely use digital evidence hosted by platforms to arrest and prosecute those selling fentanyl online. The existence of Section 230 does not limit these prosecutions; it ensures that intermediaries can cooperate with law enforcement without becoming liable for every crime that passes across their networks.

If Section 230 were repealed, platforms would not become more accountable; they would become more restrictive. Legal exposure would force them to monitor and filter user activity on an unprecedented scale, removing controversial, sensitive, or even tragic personal content to avoid potential lawsuits. Far from opening access to justice, this would chill public discussion of addiction, mental health, and other social crises.

What Senator Graham calls an "out-of-control system" is in fact an information ecosystem dependent on a single legal distinction: that people are responsible for what they say, and that the conduit carrying their speech is not the publisher of it. Erasing that line will not prevent tragedy. It will only replace open networks with a censored and legally paralyzed internet where fewer people dare to speak at all.

A Privacy-Focused Browser Redefining Open Source Through Simplicity and Silence

In Silicon Valley, a pattern emerged. Corporations discovered that open source could double as public relations. They released code, declared transparency, and watched the world celebrate. Behind that applause, the same networks kept humming, feeding data to the same servers. The crowd saw freedom. The code quietly reported home.

Amid that cycle, a small project appeared that refused to join the show. The browser is called Helium. It comes from the same Chromium base that powers Chrome, Edge, and Opera, yet it stands apart from them in purpose. The browser operates with a single principle: privacy without performance theater.

Pennsylvania High Court Rules Police Can Access Google Searches Without Warrant

The Pennsylvania Supreme Court has a new definition of "reasonable expectation." According to the justices, it's no longer reasonable to assume that what you type into Google is yours to keep. In a decision that reads like a love letter to the surveillance economy, the court ruled that police were within their rights to access a convicted rapist's search history without a warrant. The reasoning is that everyone knows they're being watched anyway.

The opinion, issued Tuesday, leaned on the idea that the public has already surrendered its privacy to Silicon Valley. We obtained a copy of the ruling for you here. "It is common knowledge that websites, internet-based applications, and internet service providers collect, and then sell, user data," the court said, as if mass exploitation of personal information had become a civic tradition. Because that practice is so widely known, the court concluded, users cannot reasonably expect privacy. In other words, if corporations do it first, the government gets a free pass.

The case traces back to a rape and home invasion investigation that had gone cold. In a final effort, police asked Google to identify anyone who searched for the victim's address the week before the crime. Google obliged. The search came from an IP address linked to John Edward Kurtz, later convicted in the case. It's hard to argue with the result (no one is defending a rapist), but the method drew a line through an already fading concept: digital privacy. Investigators didn't start with a suspect; they started with everyone. That's the quiet power of a "reverse keyword search," a dragnet that scoops up the thoughts of every user who happens to type a particular phrase.

The justices pointed to Google's own privacy policy as a kind of consent form. "In the case before us, Google went beyond subtle indicators," they wrote. "Google expressly informed its users that one should not expect any privacy when using its services." The court took that disclosure, buried in the fine print of a sprawling legal document, as proof that users had signed away their Fourth Amendment rights.

In another leap of reasoning, the opinion claimed that people could avoid creating data trails by choosing not to use the internet at all. "The data trail created by using the internet is not involuntary in the same way that the trail created by carrying a cell phone is," the justices wrote. It's an argument that only works if you believe modern life offers meaningful alternatives to being online. The court's logic suggests that using Google is a choice, like deciding whether to join a bowling league.

Viewed from a privacy perspective, the ruling reveals something deeper. By treating search history as a voluntary disclosure, the court framed internet use as a kind of public act. That logic ignores how fully online search has replaced libraries, maps, and even conversation. Suggesting that users can "opt out" of surveillance is like telling citizens to avoid speech if they don't want it overheard.
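To illustrate why a reverse keyword search is a dragnet rather than a targeted request, here is a minimal sketch. The table, columns, and sample data are hypothetical stand-ins for explanation; Google's actual systems, and the warrantless request at issue, obviously look nothing like a local SQLite table.

```python
# Illustrative sketch only: a conventional suspect-first query versus a
# "reverse keyword search." Schema and data are invented for explanation.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE searches (user_ip TEXT, query TEXT, ts TEXT)")
conn.executemany(
    "INSERT INTO searches VALUES (?, ?, ?)",
    [
        ("203.0.113.7", "123 main street", "2024-01-02"),
        ("198.51.100.4", "weather today", "2024-01-02"),
        ("192.0.2.99", "123 main street directions", "2024-01-03"),
    ],
)

# Conventional query: investigators already have a suspect and ask what
# that one person searched for.
suspect_history = conn.execute(
    "SELECT query FROM searches WHERE user_ip = ?", ("203.0.113.7",)
).fetchall()

# Reverse keyword search: investigators start with a phrase and sweep up
# EVERY user who typed it -- the dragnet quality the ruling blesses.
dragnet = conn.execute(
    "SELECT user_ip FROM searches WHERE query LIKE ?", ("%123 main street%",)
).fetchall()

print(suspect_history)  # one person's searches
print(dragnet)          # [('203.0.113.7',), ('192.0.2.99',)] -- everyone
```

The inversion is the whole story: the first query presumes a suspect and examines their data, while the second examines everyone's data to manufacture a suspect.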

UK Police Pilot AI System to Track "Suspicious" Driver Journeys

Police forces across Britain are experimenting with artificial intelligence that can automatically monitor and categorize drivers' movements using the country's extensive number plate recognition network. Internal records obtained by Liberty Investigates and The Telegraph reveal that three of England and Wales's nine regional organized crime units are piloting a program, built by Faculty AI, designed to learn from vehicle movement data and detect journeys that algorithms label "suspicious."

For years, the automatic number plate recognition (ANPR) system has logged more than 100 million vehicle sightings each day, mostly for confirming whether a specific registration has appeared in a certain area.

Related: Surveillance on the Road: Why Britain's Massive Camera Network Has Privacy Advocates on Edge

The new initiative changes that logic entirely. Instead of checking isolated plates, it teaches software to trace entire routes, looking for patterns of behavior that resemble the travel of criminal networks known for "county lines" drug trafficking. The project, called Operation Ignition, represents a change in scale and ambition. Unlike traditional alerts that depend on officers manually flagging "vehicles of interest," the machine learning model learns from past data to generate its own list of potential targets. Official papers admit that the process could involve "millions of [vehicle registrations]," and that the information gathered may guide future decisions about the ethical and operational use of such technologies.

What began as a Home Office-funded trial in the North West, covering Merseyside, Greater Manchester, Cheshire, Cumbria, Lancashire, and North Wales, has now expanded into three regional crime units. Authorities describe this as a technical experiment, but documents point to long-term plans for nationwide adoption. Civil liberty groups warn that these kinds of systems rarely stay limited to their original purpose.

More: London's Surveillance Scheme Rakes in Millions While Failing the Community

Jake Hurfurt of Big Brother Watch said: "The UK's ANPR network is already one of the biggest surveillance networks on the planet, tracking millions of innocent people's journeys every single day. Using AI to analyse the millions of number plates it picks up will only make the surveillance dragnet even more intrusive. Monitoring and analysing this many journeys will impact everybody's privacy and has the potential to allow police to analyse how we all move around the country at the click of a button."

He added that while tackling organized drug routes is a legitimate goal, "there is a real danger of mission creep – ANPR was introduced as a counter-terror measure, now it is used to enforce driving rules. The question is not whether police should try and stop gangs, but how could this next-generation use of number plate scans be used down the line?"

The find-and-profile tool was built by Faculty AI, a British technology firm with deep ties to government projects. The company, which worked with Dominic Cummings during the Vote Leave campaign, has since developed data analysis tools for the NHS and Ministry of Defence. Faculty recently drew attention after it was contracted to create software that scans social media for "concerning" posts, later used to monitor online debate about asylum housing. Faculty declined to comment on its part in the ANPR initiative.
Chief Constable Chris Todd, chair of the National Police Chiefs' Council's data and analytics board, described the system as "a small-scale, exploratory, operational proof of concept looking at the potential use of machine learning in conjunction with ANPR data." He said the pilot used "a very small subset of ANPR data" and insisted that "data protection and security measures are in place, and an ethics panel has been established to oversee the work."

William Webster, the Biometrics and Surveillance Camera Commissioner, said the Home Office was consulting on new legal rules for digital and biometric policing tools, including ANPR. "Oversight is a key part of this framework," he said, adding that trials of this kind should take place within "a 'safe space'" that ensures "transparency and accountability at the outset." A Home Office spokesperson said the app was "designed to support investigations into serious and organised crime" and was "currently being tested on a small scale" using "a small subset of data collected by the national ANPR network."

From a privacy standpoint, the concern is not just the collection of travel data but what can be inferred from it. By linking millions of journeys into behavioral models, the system could eventually form a live map of how people move across the country. Once this analytical capacity becomes part of routine policing, the distinction between tracking suspects and tracking citizens may blur entirely.
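To show what "learning from vehicle movement data" can mean in practice, here is a minimal sketch that turns raw camera sightings into per-vehicle travel features and flags journeys with a toy rule. Every name, feature, and threshold is an assumption made for illustration; Operation Ignition's actual model has not been published.

```python
# Illustrative sketch only: aggregating hypothetical ANPR sightings into
# per-plate journey features and flagging "suspicious" travel with a toy
# rule standing in for the unpublished machine-learning model.

from collections import defaultdict
from dataclasses import dataclass


@dataclass
class Sighting:
    plate: str
    camera_town: str
    hour: int  # 0-23, local time of the camera hit


def journey_features(sightings: list[Sighting]) -> dict[str, dict]:
    """Aggregate camera hits into simple per-plate travel features."""
    by_plate = defaultdict(list)
    for s in sightings:
        by_plate[s.plate].append(s)
    return {
        plate: {
            "distinct_towns": len({h.camera_town for h in hits}),
            "night_hits": sum(1 for h in hits if h.hour < 5 or h.hour >= 23),
            "total_hits": len(hits),
        }
        for plate, hits in by_plate.items()
    }


def flag_suspicious(feats: dict[str, dict]) -> list[str]:
    """Toy rule: repeated long-distance, late-night travel gets flagged.
    An innocent night-shift commuter matches the same pattern, which is
    exactly the false-positive risk critics of the dragnet raise."""
    return [
        plate for plate, f in feats.items()
        if f["distinct_towns"] >= 3 and f["night_hits"] >= 2
    ]


if __name__ == "__main__":
    flagged = flag_suspicious(journey_features([
        Sighting("AB12CDE", "Liverpool", 23),
        Sighting("AB12CDE", "Manchester", 1),
        Sighting("AB12CDE", "Preston", 3),
    ]))
    print(flagged)  # ['AB12CDE'] -- pattern-matched, not suspected
```

The sketch makes the article's core concern tangible: once sightings are linked into journeys, the system scores patterns of movement rather than checking individual plates, and anyone whose travel resembles the pattern is swept into the candidate list.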