Reclaim The Net Feed

Judge Halts Colorado AI Law After First Amendment Challenge
reclaimthenet.org


A federal judge has frozen enforcement of Colorado’s first-in-the-nation AI law, the statute that would have required developers to police their own models for “algorithmic discrimination” and to inform the state of “foreseeable risks” before the rules took effect on June 30. Judge Cyrus Y. Chung signed off on a joint request from xAI and Colorado Attorney General Phil Weiser on April 27, putting the law on ice while state lawmakers draft a replacement. We obtained a copy of the order for you here.

The order was filed in xAI v. Weiser. The state agreed not to enforce SB 24-205 against xAI, or to issue rules under it, until at least 14 days after the court rules on a forthcoming preliminary injunction motion. The June 16 scheduling conference was cancelled, and the deadlines in the case are suspended.

This is a significant retreat: Colorado spent two years insisting the law was a model for the country, and it was the only state AI statute named in President Trump’s AI executive order last year. Now the state is asking a court to stop the clock while its own governor’s policy group drafts a bill to repeal and replace it.

The law itself is the reason the climbdown looks the way it does. SB 24-205 told developers of “high-risk” AI systems they had to take “reasonable care” to prevent algorithmic discrimination, with one carveout that has done more work in the lawsuit than any other clause: the law exempts discrimination intended to “increase diversity or redress historical discrimination.” The state forbids one kind of discrimination by an algorithm. It permits, and arguably requires, another. The developer is left to figure out which is which, with the attorney general’s office deciding after the fact.

xAI sued on April 9, calling the statute a First Amendment problem dressed up as consumer protection.
The company’s complaint is more blunt than most filings of this kind. “SB24-205 is decidedly not an anti-discrimination law,” the company’s attorneys wrote. “It is instead an effort to embed the State’s preferred views into the very fabric of AI systems.” The argument is that Colorado isn’t regulating outputs neutrally. It’s choosing which viewpoints an AI model is allowed to produce, then enforcing the choice through “onerous policy, assessment, and disclosure requirements,” in the words of the Justice Department’s filing.

The DOJ moved to intervene on xAI’s side, the first time the federal government has joined a constitutional challenge to a state AI regulation. Assistant Attorney General Harmeet K. Dhillon, who runs the Civil Rights Division, weighed in: “Laws that require AI companies to infect their products with woke DEI ideology are illegal.”

You can take or leave the political register. The legal point underneath it is one anyone who cares about speech should take seriously. A state cannot tell a publisher, a newspaper, a search engine, or a chatbot which viewpoints its outputs must reflect. The First Amendment treats compelled speech as a near cousin of censorship, and for the same reason: the government doesn’t get to write the script.

Colorado’s law was vague enough to make almost any output a potential violation. The statute didn’t precisely define “algorithmic discrimination,” “foreseeable risks,” or what “reasonable care” looks like for a model with hundreds of millions of possible prompts. xAI’s complaint argues the statute is “unconstitutionally vague” and “invites arbitrary enforcement” because key terms are not defined. When a law is that loose, the chilling effect arrives before any enforcement does. Developers self-censor their models to stay on the safe side of a line the regulator hasn’t drawn yet. That self-censorship is the point, whether or not the law’s drafters intended it.
A model that has to worry about Colorado’s interpretation of “disparate impact” will avoid topics, hedge answers, and decline questions.

Colorado isn’t conceding any of this, and the state lawmakers who backed the bill have pushed back. Rep. Brianna Titone, D-Arvada, a lead sponsor, told the Colorado Sun that SB 24-205 “is, and has always been, promoted as a policy to prevent and curtail discrimination for consequential decisions.” Rep. Manny Rutinel, D-Commerce City, accused the federal government of carrying water for Musk: “Coloradans deserve technology that works for everyone, not just billionaires.”

Both responses sidestep the central issue. The question isn’t whether Colorado meant well. The question is whether a state can compel the speech of a software developer the way it just tried to. That power, once established, won’t stay limited to AI companies the legislature dislikes. It will reach the next platform, the next publisher, the next set of opinions a future statehouse decides need correcting.

The pause buys everyone time. Gov. Jared Polis’s AI policy group released a draft replacement bill on March 17, and the legislature is now preparing what would be the third round of amendments to a law that still hasn’t taken effect. Polis signed the original reluctantly in 2024, citing worries about the state’s tech sector, and the original February effective date had already been pushed back to June 30 under industry pressure. This is a law that has been struggling to survive contact with reality from the day it was signed.

Gottheimer and Lawler File Resolution, Encouraging Platforms to Censor Online Commentators

Two members of Congress have introduced a resolution pressuring social media and streaming platforms to deplatform two specific online commentators by name. Reps. Josh Gottheimer (D-N.J.) and Mike Lawler (R-N.Y.) filed H.Res.123, calling on tech companies to take “appropriate steps to enforce their policies against hate speech and prevent the spread of antisemitic content.” The two people the resolution singles out are Hasan Piker, a popular Twitch streamer, and Candace Owens, a prominent political podcaster.

The resolution arrives wrapped in language about combating antisemitism, but it is nevertheless a direct governmental push to deplatform commentary. It claims antisemitic incidents have “significantly increased, including a 344 percent increase over the past 5 years, and [an] 893 percent increase over the past 10 years” and identifies online platforms as “a major vector for the spread of such hatred.” The fix, in the lawmakers’ framing, runs through corporate content rules.

Lawler issued a statement laying out the case against each one. “Piker has openly applauded Hamas’ terrorism, downplayed the mass rape of civilians on October 7th, and dehumanized Orthodox Jews as ‘inbred,’” Lawler said. “Owens has trafficked in vile conspiracy theories, promoted blood libels, and platformed Holocaust deniers.”

Whether you find those characterizations fair or not isn’t really the question. The question is what happens when sitting members of Congress use a House resolution to identify specific American citizens by name and ask private companies to remove their speech. There is a word for that mechanism: the Supreme Court has spent most of the last year wrestling with versions of it under the heading of jawboning.

Gottheimer offered the moral framing. “Hatred is hatred, period,” he said. “We must stand up and speak out.
I get that speaking up is not easy, but our constituents didn’t elect us to always take the easy path. That’s what principled leadership is all about.”

Last year, Senate Democrats introduced their own resolution condemning Nick Fuentes and Tucker Carlson after Carlson interviewed Fuentes on his podcast. Every round picks new names, but the idea stays the same: Congress identifies disfavored speakers, frames their speech as a category of harm that platforms have a duty to address, and lets corporate moderation do the work the First Amendment forbids the government from doing directly.

The resolution itself is non-binding. Its purpose is to send a signal to platforms about which voices Congress would prefer to see disappear, with the implicit understanding that the lawmakers asking now will be the same ones writing rules later if the platforms don’t oblige.

Cybersecurity Experts Demand Canada Scrap Bill C-22 Backdoor

Canada’s federal government is being asked to scrap Bill C-22 by a coalition that has grown to include 30 organizations and more than 20 cybersecurity experts. The open letter, published by the Global Encryption Coalition on April 28, 2026, lands one week after a separate group of 14 civil liberties organizations, refugee advocates, academics, and 15 of Canada’s most prominent privacy scholars sent their own demand for full withdrawal to Prime Minister Mark Carney and every Member of Parliament.

The bill forces “electronic service providers” to install “technical capabilities” that hand law enforcement access to Canadian communications and sensitive data on demand. The signatories want the legislation pulled, not amended, because Part 2 of the bill, the so-called Supporting Authorized Access to Information Act, cannot be fixed without abandoning its core purpose. That core purpose is breaking encryption.

The signatories put the technical reality plainly. “There is no way to provide backdoor access to encrypted data and communications without compromising the privacy and security of millions of law-abiding citizens,” the letter states. The signatories include Jon Callas of Indiana University, John Gilmore, who co-founded the Electronic Frontier Foundation, Susan Landau of Tufts University, and Eugene H. Spafford of Purdue, alongside organizations such as the Internet Society, the Tor Project, Tuta, OpenMedia, the Center for Democracy & Technology, and Fight for the Future.

The Canadian government’s framing leans hard on a familiar reassurance. Public Safety Minister Gary Anandasangaree told an audience of police chiefs and law enforcement officials in March that the bill targets criminals, not ordinary citizens. “I want to be very clear about what C-22 is not.
It is not about the surveillance of honest, hard-working Canadians going on about their daily lives,” Anandasangaree said. He added moments later, “We’re not looking for sneaky ways to surveil Canadians. We are doing our part to combat bad actors in both the physical and digital worlds.”

What the minister described, however he labelled it, is a surveillance bill. C-22 compels electronic service providers to retain Canadian metadata for a year and gives police and CSIS new mechanisms to retrieve it. Location data, device identifiers, daily movement patterns: all of it is warehoused in advance, on every Canadian, regardless of whether anyone is suspected of anything. Location data alone tells a detailed life story: where someone sleeps, which doctor they see, which protests they attend, which church they walk into on a given day. Twelve months of that, sitting on private servers, organized for retrieval by the state.

The bill does retreat from its predecessor. Bill C-2, which collapsed last year under opposition from rights groups, opposition parties, and industry, would have allowed police to ask any service provider, including those bound by professional privilege, whether someone was a client and where they connected from, all without a warrant. C-22 narrows that warrantless inquiry to telecommunications companies and limits the question to a yes-or-no on client status. Anything further requires a warrant.

Anandasangaree acknowledged the climbdown directly. “One thing I’ve learned is that at times when more work needs to be done on a particular bill, you retreat and you come back. You come back with better consensus, better consultation, and better supports from across the board,” he said. The retreat is a concession. The premise is not.
Companies still have to pre-organize sensitive data on every Canadian on behalf of the state, and the bill’s most concerning section authorizes the Minister of Public Safety to issue secret orders forcing designated “core” electronic service providers, a category the government has not bothered to fully define, to build and maintain surveillance capabilities. The companies that receive these orders cannot tell anyone they received them. The government has written in a restriction saying the capabilities cannot create systemic vulnerabilities or weaken encryption, but that restriction is written by the same government that issues the secret orders, with no public accountability for how it gets applied.

The open letter notes that those supposed protections are flimsy on their own terms. “Systemic vulnerability” is vaguely defined in the bill, and “encryption” is not defined at all. The Governor in Council has wide remit to alter definitions and processes inside Part 2 after the fact, and the government has already admitted, on the record, that it is open to expanding C-22’s powers. Limited safeguards on a piece of surveillance legislation are not really safeguards if the people writing them say openly that they want them broader.

The cybersecurity argument against backdoors has not changed in 30 years. Encryption is mathematics: it works for everyone or it works for no one. A backdoor that only the good guys can use does not exist, and the people who keep insisting it must be possible are making a political argument dressed in technical language. The signatories point to recent history to show what happens when governments mandate access.

The Browser Habits That Quietly Raise Your Airfare, and the Ones That Don’t

This post is for paid supporters.

House Bill Cuts Federal Funds for Online Censorship

A new House appropriations bill does something unusual for Washington legislation: it tells federal agencies they cannot spend money pressuring platforms, advertisers, or foreign governments to silence speech that Americans are legally allowed to make. H.R. 8595, the national security and State Department appropriations bill, runs hundreds of pages, and buried throughout are provisions that would shut off federal funding to a wide range of speech-suppression activities. The restrictions cover direct platform pressure, ad boycott campaigns aimed at US media companies, blacklists, and cooperation with foreign censorship regimes that target American tech firms. We obtained a copy of the bill for you here.

What the Bill Actually Stops

The headline provision is on page 252. It bars the use of any appropriated funds to “deplatform, deboost, demonetize, suppress, or otherwise penalize” online speech, social media activity, or news outlets producing content that would be lawful under US law. The language is deliberately wide, and it catches the obvious things, like government agencies asking a platform to take a post down, and the less obvious ones, like funding research projects that pressure advertisers to abandon publishers.

That second category has been doing real damage for years: brand “safety” programs, hate speech classifiers built with federal grant money, “disinformation” tracking outfits that exist primarily to attach scary labels to inconvenient reporting. Federal money cannot flow to programs designed to impose “legal, regulatory, financial, reputational, commercial, or political costs” on American tech companies, social media platforms, online intermediaries, or digital publishers for hosting First Amendment protected speech.

There is also a prohibition on funding work that pushes foreign governments to do the censoring instead.
American agencies cannot use these appropriations to support foreign laws, regulations, codes, or enforcement mechanisms that punish US platforms for carrying speech that would be lawful here. The whole architecture of routing American speech restrictions through Brussels or London or Canberra, then importing the results back home through global compliance regimes, runs into a federal funding wall. Blacklists are out. Censorship cooperation with supranational bodies is out. Inducing advertisers to “cut off, reduce, redirect, or otherwise interfere with advertising, sponsorship, payment, or other revenue on the basis of lawful online speech” is out.

Protection for US Media and News Companies

A separate section on page 99 builds a tighter ring around American media and news entities specifically. Federal funds cannot be used to push for the censorship of their social media content, to influence consumer or advertising behavior toward them, or to characterize US independent news organizations as producers of “disinformation, misinformation, or malinformation.” Those three terms have done enormous work over the past five years, and the bill treats them as the censorship vocabulary they are. Once an outlet gets labeled a disinformation source by a federally funded project, the consequences cascade: algorithmic suppression, ad networks pulling out, payment processors getting nervous. The bill cuts off the funding that powers the labeling machinery in the first place.

Codifying the Anti-Censorship Executive Order

Page 98 takes Executive Order 14149, President Trump’s “Restoring Freedom of Speech and Ending Federal Censorship” order, and locks parts of it into appropriations law. Funds cannot be spent in contravention of the order. Executive orders can be rescinded by the next administration with a signature. Appropriations restrictions are harder to dismantle.
They get reviewed every funding cycle, and reversing them requires Congress to actively vote to put the censorship machinery back online.

FOIA Improvements

Page 90 contains provisions to speed up Freedom of Information Act response times. FOIA is the main legal mechanism Americans have for finding out what their government is actually doing, and federal agencies have spent decades treating it as a nuisance to be slow-walked. Long delays have become a passive form of information control. Tightening response times pushes back against that.

The One Exception

The bill is not absolute on counter-speech work. A provision on page 98 authorizes “counter disinformation” programs in certain circumstances, but only narrowly: appropriations for these programs “may only be made available for the purpose of countering such efforts by foreign state and non-state actors abroad.” The carve-out is geographic as well as directional. Funds can target foreign disinformation operations operating outside the United States, but they cannot be turned inward on American speech, and they cannot be turned on Americans speaking abroad.

The history of “counter disinformation” funding is that it tends to drift. Programs justified as targeting Russian or Chinese influence operations have repeatedly been documented working on domestic speech, often through contractors and NGOs that flag American journalists, researchers, and ordinary users. The narrow drafting here is an attempt to prevent that drift, though enforcement is the question. A prohibition only works if someone is willing to enforce it when a program inevitably starts catching Americans in its nets.

Why This Bill Looks Different

Most legislation touching online speech in the past decade has moved in one direction: more authority for governments and quasi-governmental bodies to determine what counts as acceptable expression. More pressure on platforms to comply.
More funding for research outfits whose practical output is lists of accounts and outlets to suppress. H.R. 8595 inverts the pattern. It treats federal involvement in suppressing lawful speech as a problem to be defunded, names the specific tactics that have been used, and tries to block each one. The legislative text reads like it was written by someone who has watched the censorship apparatus grow over the past several years and wants to take its budget away one line item at a time.

The bill is currently moving through the House. Whether the anti-censorship language survives reconciliation with the Senate, and whether agencies actually comply once the funding restrictions take effect, will determine how much of this becomes real protection rather than paper protection.