Reclaim The Net Feed

@reclaimthenetfeed

UK Southport Inquiry Pushes Mass Surveillance and VPN Restrictions
reclaimthenet.org

If you're tired of censorship and dystopian threats against civil liberties, subscribe to Reclaim The Net.

On July 29, 2024, a teenager walked into a children’s Taylor Swift-themed dance class in Southport, England, and murdered three young girls with a knife. He injured ten others. It was, by any measure, one of the most horrifying attacks on British soil in recent memory, and what followed should have been a reckoning with the catastrophic state failures that let it happen. Instead, the British government looked at the smoldering aftermath and decided the real enemy was the internet, and the solution just so happens to be the mass surveillance and censorship proposals the government is already working on.

After the attack, outrage on social media turned to protests. Protests became riots. And the state’s response landed with a speed and ferocity that it had never managed to direct at, say, the agencies that let a known danger walk free for years. A former childcarer named Lucy Connolly was jailed for 31 months for a single post on X. That is three months longer than the sentence given to a man who physically attacked a mosque during the same period of unrest.

The UK was already a country where arrests for “offensive” social media posts had more than doubled, climbing from 5,502 in 2017 to 12,183 in 2023. The overall conviction rate for those arrests was falling at the same time. Police were locking people up for what they typed at a rate that was going up, while the number of convictions that actually stuck was going down.

The Southport riots became the accelerant. A House of Commons Home Affairs Committee report used the unrest to call for a “new national system for policing” with enhanced capabilities to surveil social media activity, framing public anger as a problem of online “misinformation” rather than a consequence of the state’s own failures.
The state was dodging accountability by demanding censorship and surveillance and blaming the internet for unrest. And now, months later, Sir Adrian Fulford’s Southport Inquiry Phase 1 report has arrived, and it takes the whole dynamic further still. Not just further toward punishing people for what they say online, but toward watching everything they do online, and everything they buy offline, too.

The report itself is 763 pages across two volumes, published on 13 April, with 67 recommendations. Its central finding is devastating. The attack “could have been and should have been prevented.” Multiple state agencies failed repeatedly to act on years of warning signs. The attacker’s parents bore “considerable blame” for not reporting Axel Rudakubana’s worsening behavior. Sir Adrian identified five areas of systemic failure, including critical breakdowns in information sharing and a repeated tendency to excuse the attacker’s behavior on the basis of his autism spectrum disorder.

The factual record of those failures is staggering. The attacker was referred to the Prevent counter-terrorism program three times between 2019 and 2024, with each referral closed without sustained action. He purchased weapons, including three machetes, as well as ingredients to make the poison ricin. Police responded to five calls at the family home. And in March 2022, when the attacker was found on a bus with a knife, admitting he wanted to stab someone and thinking about poison, he was simply returned home with advice to hide the knives. The report said that had this incident been judged in light of the attacker’s past risk, he would have been arrested, and his possession of an al-Qaeda manual and ricin seeds would have come to light.

You might think the resulting 67 recommendations would focus on making sure the people who are paid to protect children actually protect them. Some of them do.
But a significant chunk has nothing to do with fixing the human laziness that ultimately killed three girls, and everything to do with building an internet surveillance apparatus that would make the average dystopian novelist blush.

Recommendation 12 asks the government to “consider systems to detect and report concerning online behaviour and suspicious combinations of purchases.” It lists VPN use alongside name changes as behavioral red flags worth automated detection. The same recommendation wants reporting systems for “concerning purchases of dangerous but legal items (e.g., sledgehammers, bow and arrows and smoke grenades)” and “concerning combinations of purchases (e.g. castor beans, alcohol, and laboratory equipment).” Anyone who has ever renovated a kitchen, taken up archery as a hobby, or ordered laboratory glassware because they fancied making gin is now, apparently, a person of interest.

Recommendation 24 goes after VPNs directly, asking Phase 2 to “consider age verification for the use of Virtual Private Network (VPN) software and other options to avoid VPNs being used to circumvent the age-related protections in the Online Safety Act 2023.” Recommendation 20 calls for “mandatory reporting and information-sharing about suspicious behaviour” around knife sales, alongside “strengthening online age-verification and age verified delivery standards” and “prohibiting some online sales.”

Recommendation 19 tells Amazon to “improve its measures to prevent children from making purchases,” to “review its systems for recording details of the recipient to ensure that an accurate record of the recipient is obtained,” and to “audit its training of age verified deliveries for drivers, in particular for Amazon Flex drivers.” Amazon is being told to collect more data about everyone who receives a parcel.
The company already uses “trusted ID verification services to check name, date of birth and address details whenever an order is placed for these bladed items” and has “an age verification on delivery process that requires drivers to verify the recipient’s age through an app on their devices.”

Recommendation 22 tells Lancashire County Council to ensure frontline staff “have access to effective tools and guidance to identify and respond to” online risks, specifically naming “the risks associated with the use of Virtual Private Networks, which can enable children to bypass the safeguards established under the Online Safety Act 2023.” It asks the Department of Health and Social Care to consider whether “reforms to national guidance, policy or training are required.” Social workers are now expected to treat VPN use as a safeguarding red flag. The same tool, you will recall, that Parliament itself told its own members to install on their phones.

Here is where the whole thing becomes genuinely absurd. VPN use in Britain exploded because the government’s own Online Safety Act censorship law forced it. When age verification rules took effect in July 2025, Proton VPN reported a sustained 1,800 percent increase in UK sign-ups. Five VPN apps hit Apple’s UK App Store top 10 within days. Millions of ordinary people downloaded privacy tools to avoid handing their biometric data to random websites as the government’s own rules demanded. And the government’s response to this entirely predictable mass adoption of privacy software is to propose restricting privacy software.

The House of Lords voted in January to ban VPN use by under-18s, backing an amendment to the Children’s Wellbeing and Schools Bill by 207 votes to 159.
Labour’s Lord Knight acknowledged that VPNs could “undermine the child safety gains of the Online Safety Act” but warned that age-gating them could be “extremely problematic.” He noted: “My phone uses a VPN, following a personal device cyber consultation offered by this Parliament. VPNs can make us more secure, and we should not rush to deprive children of that safety.”

For now, MPs haven’t gone along with it. But the rejected amendment is only one vehicle for the idea. So Parliament tells its own members to use VPNs. Parliament then votes to ban children from using VPNs, which would require age checks and giving up privacy. And a public inquiry now wants social workers to flag VPN use as a risk indicator. Age verification amounts to requiring adults to give up their personal or biometric data to access lawful content.

This is the throughline that connects Southport to the wider censorship machine. The government passes laws requiring identity verification to access legal content. People use privacy tools to avoid handing their identity to strangers. The government then classifies those privacy tools as suspicious. At each step, the scope of surveillance expands and the definition of “concerning behavior” gets broader, and at no point does anyone go back and fix the actual agencies that let a teenager with an al-Qaeda manual and ricin seeds, three machetes, and multiple Prevent referrals walk free for years.

The rest of the surveillance proposals are not aimed at known threats. They are aimed at the whole population. They propose systems to track what you browse, what you buy, and whether you dare to use a VPN, then flag combinations that some algorithm decides look suspicious. The Southport Inquiry confirms what the arrest statistics, the sentencing disparities, and the legislative agenda already made obvious. Britain has developed a very specific institutional reflex.
When its agencies fail catastrophically, the state responds by expanding surveillance of the general population. When the public expresses anger about those failures, the state responds by censoring the expression of that anger. The definition of “offensive” keeps expanding. And the people who actually had the information needed to prevent a massacre keep their jobs.

What failed at Southport was not a lack of data. It was not the absence of purchase-tracking algorithms. It was not that VPNs exist. What failed was human beings in positions of authority who saw danger, documented it, filed the paperwork confirming they’d seen it, and then closed the case and went home.

Building a national internet surveillance system won’t change that. Age-gating the privacy tools that Parliament recommends to its own members won’t change that. Nothing in this report’s surveillance wishlist addresses the reason three girls are dead, which is that the system already knew, and the system chose to do nothing.

California’s “Stop Nick Shirley Act” Would Penalize Journalism
reclaimthenet.org

California’s Assembly Privacy and Consumer Protection Committee voted 11-2 on April 7 to advance a bill that would let employees and volunteers at immigration service organizations demand the deletion of their images and personal information from the internet, backed by civil penalties starting at $4,000 and the threat of criminal charges. AB 2624, authored by Assemblywoman Mia Bonta, is already being called the “Stop Nick Shirley Act.” We obtained a copy of the bill for you here.

The bill arrives just weeks after investigative video creator Nick Shirley published a 40-minute video on alleged hospice fraud in California that racked up 42 million views on X. Other investigations have found that a single program is causing the state to lose an alleged $6 billion in fraud annually. Shirley had already reported on over $110 million in Somali daycare fraud in Minnesota in December 2025, with empty facilities billing taxpayers while kids were nowhere to be found. His California reporting uncovered an alleged $170 million in similar fraud in daycares and hospices, with ghost operations registered to empty lots and strip malls.

Sacramento’s response to this flood of documented waste and abuse was not an audit, not an investigation into the programs themselves, but a bill to make it harder to film the people running them.

Under AB 2624, anyone affiliated with an organization providing “designated immigration support services” can send a written demand prohibiting the publication of their personal information or image online. That demand remains effective for four years, even after the person leaves the organization. If the demand is ignored, the person can go to court for an injunction or declaratory relief. Fines run up to three times the actual damages, with a floor of $4,000, so the minimum penalty reaches $12,000 in cases where a takedown demand is defied.
If a journalist or anyone else is accused of posting information with the intent to incite harm, they face criminal charges and fines of $10,000.

The definition of “designated immigration support services” is broad enough to cover almost any organization that touches immigrant communities. The bill defines these services as those provided to the immigrant population, including legal representation, legal assistance, advocacy, case management, humanitarian relief, immigration resources, referrals, translation services, counseling services, and health care. That’s a definition wide enough to include organizations that have been at the center of documented fraud investigations, and to give them a legal tool to suppress the documentation.

The bill also creates an address confidentiality program modeled on California’s existing Safe at Home program for domestic violence survivors. Bonta defended this at the committee hearing, saying the program “allows participants to keep their home and work addresses out of public records, giving them a critical layer of protection and privacy in an environment where their personal safety is increasingly at risk.” She told the committee that “individuals who provide immigrant support services … are facing targeted harassment” and that “advocates and workers are receiving death threats, being targeted at court houses and facing coordinated online doxxing campaigns.”

There’s a problem with framing the bill as a safety measure, though, and it’s built right into the text. The “reasonable fear” standard required to trigger these protections is defined and enforced by the person claiming the fear, not by a court, not by law enforcement. Any employee or volunteer at a qualifying organization can send a written demand to suppress publication. They don’t need to prove a threat was made. They don’t need to file a police report. They need to cite a “reasonable fear” and put it in writing. The demand is valid for four years.
And the mechanism for enforcing it is a lawsuit against whoever published the information.

What’s missing from the bill is just as telling as what’s in it. There is no exemption for journalists. Assemblymember Carl DeMaio raised this directly during the committee hearing, telling Bonta, “You do not provide an exemption for journalists.” He pointed to specific investigative work, noting that “posting video, like (Republican Assembly Member Alexandra) Macedo, in her investigation, posted a video of, what, 90 fake hospices, and Mr. Shirley had dozens or, you know, fifty, sixty fake ‘learning’ centers for the Somali community in Minnesota. Posting the video apparently would be punishable under your law.”

DeMaio also said that the bill makes no distinction between independent citizen journalists and reporters at established outlets. “There’s no differentiation,” DeMaio said. “It says any individual who does this, any corporation, any business who posts a video, full stop. There’s no ‘Well, there’s an exemption for journalists.'”

Bonta pushed back on this characterization. “In your scenario, Assemblyman DeMaio, the folks who were investigating that, these were reporters, journalists,” she said during the hearing. “They were not subjecting any particular organization to violence or threats of violence. That is the nature of this bill.”

But the nature of the bill is exactly what’s at issue. The bill doesn’t require that a journalist actually threaten someone. It requires that someone at a covered organization feel threatened enough to send a letter. And once that letter is sent, the journalist faces penalties for publishing. The organization that receives bad press gets to decide whether the press constitutes a threat, and then the law penalizes the press. This is prior restraint dressed up as privacy protection.
Prior restraint, stopping speech before it happens rather than addressing genuinely harmful speech after the fact, is the form of censorship that sits closest to the core of what the First Amendment was written to prevent. AB 2624 creates a tool where a letter from a subject of reporting can trigger legal liability for that reporting before any court determines whether the report was harmful, threatening, or even inaccurate.

DeMaio put it bluntly. “California Democrats are trying to intimidate citizen watchdog journalists and protect waste and fraud happening in far-Left-wing NGOs,” he said in a statement. “AB 2624 can only be described as the ‘Stop Nick Shirley Act’ — a bill designed to silence citizen journalists exposing fraud and abuse of taxpayer dollars.” He added that “instead of fixing the fraud problems being uncovered, Sacramento politicians are trying to shut down the people exposing them.”

His full statement went further. “AB 2624 would allow activists and taxpayer-funded organizations to demand the removal of video evidence — even if it captures misconduct in plain view — and threatens journalists with massive financial penalties,” DeMaio said. “If this bill becomes law, the message is clear to every journalist in California: expose corruption and you will be punished. AB 2624 is an unconstitutional direct attack on transparency and the First Amendment – and it needs to be defeated.”

Shirley responded on X with a similar read of the situation. “California is trying to pass a bill that would criminalize investigative journalism with misdemeanors, $10,000 fines, imprisonment, and content takedown,” he wrote. He noted that “under AB 2624, government-funded entities like the Somali ‘Learing’ Daycare centers would be protected from being exposed if they operated inside California.”

The chilling effect here doesn’t even require enforcement.
A journalist considering an investigation into a taxpayer-funded immigration services organization now has to weigh the possibility that filming employees in a public space could trigger a written demand, a lawsuit, an injunction, and thousands of dollars in penalties. The organizations being investigated get a new tool for suppressing coverage that has nothing to do with the merits of the investigation. The question stops being “is the reporting accurate?” and becomes “did someone at the organization feel threatened enough to send a letter?”

That’s a standard built for abuse. The organizations with the most to hide are the ones most motivated to send the letter. And the penalty for ignoring it is a lawsuit, which costs money and time even if you win.

The bill passed the Assembly Privacy and Consumer Protection Committee on April 7 with an 11-2 vote, with Republican Assemblymembers Alexandra Macedo and Carl DeMaio casting the only no votes. It was then referred to the Assembly Judiciary Committee and is still working through the California Assembly.

EU Launches Age Verification App
reclaimthenet.org

The European Commission announced today that its age verification app is “technically ready” and will soon be available across EU member states. The app is part of a broader push toward a harmonized European approach to verifying users’ ages online. What the Commission describes as a tool for child protection is also something else entirely: a stepping stone toward the European Digital Identity Wallets scheduled for rollout by the end of 2026.

Commission President Ursula von der Leyen framed the announcement as urgently necessary. “Online platforms can easily rely on our age verification app so there are no more excuses,” she said at a press conference in Brussels. “Europe offers a free and easy to use solution that can shield our children from harmful and illegal content.” The language is familiar. Von der Leyen explicitly compared the effort to the EU’s COVID digital certificate, calling that earlier system “a huge success” that reached 78 countries. She described the new app as following “the same principles, the same model.”

That comparison should give anyone paying attention serious pause. The COVID certificate normalized the idea that accessing public life required a digital credential. This app extends that logic to the internet.

The app will require users to upload their passport or ID card to confirm their age, the Commission says, while remaining anonymous. The claim is interesting. You scan a government-issued identity document into a system built and controlled by EU member states, and the Commission promises that nobody will track you.
Henna Virkkunen, the EU’s Executive Vice-President for Tech Sovereignty, emphasized that the system uses zero-knowledge proof technology, meaning that “when users want to access an age-restricted service, you remain in full control of your data.” She added, “Because we do not want platforms to scan our passport or face.”

That guarantee is only as strong as the architecture behind it. A March 2026 security analysis of the app’s open-source code found a fundamental architectural flaw: the system’s issuer component has no way to verify that passport verification actually happened on the user’s device. The researchers who found the vulnerability noted an uncomfortable tradeoff at the heart of the design. Fixing the security gap would likely require sending full passport cryptographic data to the server, including the user’s name and document number, which would amount to a significant reduction in the privacy the system currently promises.

The Commission calls this a “mini wallet.” That nickname reveals more than the branding intends. The app is built on the same technical specifications as the European Digital Identity Wallets, ensuring compatibility and future integration. Today, it verifies your age. Tomorrow, it can verify your nationality, your qualifications, your right to access a government service. The solution can also be easily adapted to prove other age ranges, for example 13+. The age check is the entry point, not the destination.

Seven EU member states are already preparing to integrate the app into national digital wallets. Von der Leyen named France, Denmark, Greece, Italy, Spain, Cyprus, and Ireland as “front runners,” with each country building the age verification function into its national identity systems. Ireland, rather than banning social media for under-16s, is developing a digital wallet that verifies age using citizens’ PPS numbers. That’s a national tax identification number tethered to an online access credential.
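To make the privacy claim, and the flaw the researchers describe, concrete, here is a deliberately simplified sketch of an attestation-based age check. This is not the EU app’s actual protocol: real deployments use public-key signatures or zero-knowledge proofs, while this toy uses an HMAC with a shared key so it runs with Python’s standard library alone, and all function names here are hypothetical. The point it illustrates is that the relying service only ever sees a signed “over 18” claim, never the passport data.

```python
import hmac
import hashlib
import json
import secrets

# Key held by the attestation issuer. (In this HMAC toy the verifier shares it;
# a real system would use the issuer's public key or a ZK verification circuit.)
ISSUER_KEY = secrets.token_bytes(32)

def issue_token(document_says_adult: bool) -> dict:
    """Issuer: after the age check, emit a minimal signed claim.
    Note it simply trusts the boolean it is handed -- the same trust gap the
    March 2026 analysis flagged: the issuer cannot prove the passport check
    really happened on the user's device."""
    claim = json.dumps({"over_18": document_says_adult,
                        "nonce": secrets.token_hex(8)})
    mac = hmac.new(ISSUER_KEY, claim.encode(), hashlib.sha256).hexdigest()
    return {"claim": claim, "mac": mac}

def verify_token(token: dict) -> bool:
    """Relying service: checks authenticity and learns only the boolean."""
    expected = hmac.new(ISSUER_KEY, token["claim"].encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, token["mac"]):
        return False  # tampered or forged token
    return json.loads(token["claim"])["over_18"] is True

token = issue_token(True)
print(verify_token(token))  # prints True; the service never sees identity data
```

Note how `issue_token` takes the adult/minor result on faith: closing that gap, as the researchers observed, would mean shipping the underlying passport data to the server, which is exactly the privacy loss the architecture was meant to avoid.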
Virkkunen used her portion of the press conference to announce enforcement actions alongside the app launch. The Commission has taken action against TikTok, Facebook, Instagram, Snapchat, Shein, and several pornographic platforms over failures to protect minors.

The dual announcement is strategic. By pairing the age verification tool with punitive enforcement, the Commission is telling platforms that the app is no longer optional and neither is the identity verification regime it carries. Virkkunen said she will establish an EU-wide coordination mechanism by the end of April “to ensure that we continue to build one solution for the EU, not 27 different ones.” One standardized identity verification system, deployed across 27 countries, with a single set of technical requirements that all private companies must follow. The centralization is the point. European Digital Rights (EDRi) has warned that the app’s plans show a clear intention to control access to a much wider range of platforms and services beyond pornography.

The Commission touts the app’s open-source code as proof of transparency. The code is available on GitHub, and private companies can build on the blueprint provided they meet technical requirements and, as Virkkunen put it, “respect the privacy standard.” The part they’re less eager to highlight is what happens next. The app you actually download won’t come from the EU; it comes from your national government or its contracted service providers, bundled into each country’s digital wallet. And those national versions aren’t guaranteed to be fully open source, even when they’re built on the EU’s open components.

The app has already drawn criticism for requiring Google’s Play Integrity API on Android, creating a mandatory dependency on Google’s infrastructure that effectively locks out alternative Android distributions and sideloaded applications.
An open-source app that requires Google’s permission to function is a strange definition of digital sovereignty.

Von der Leyen closed her remarks with a line that captures the Commission’s approach perfectly. “Children’s rights in the European Union come before commercial interest,” she said. “And we will make sure they do.”

Nobody is arguing against protecting children. The question is whether protecting children requires every adult in Europe to register their government identity documents with a centralized digital system before accessing the internet. The Commission has decided the answer is yes. It built the app before anyone had a chance to disagree.

FISA Section 702 Extension Faces House Vote With No Privacy Reforms
reclaimthenet.org

Section 702 of the Foreign Intelligence Surveillance Act expires in days. The bipartisan push to extend it without a single privacy reform is now accelerating, with House Speaker Mike Johnson, Senate Judiciary Committee Chairman Chuck Grassley, and President Trump all lining up behind an 18-month renewal that preserves the government’s ability to search Americans’ communications without a warrant.

The House Rules Committee meets to consider H.R. 8035, the bill that would keep Section 702 alive through late 2027. Johnson has refused to allow amendments, telling reporters that adding reforms would threaten the bill’s passage. That position blocks the one change that privacy-focused lawmakers in both parties have spent years fighting for: a requirement that the FBI get a judge’s approval before searching a database of Americans’ phone calls, emails, and text messages that were collected without individual court orders.

Trump posted on Truth Social today, calling on Republicans to “get a clean extension of FISA 702 through the House of Representatives this week.” He wrote, “I am asking Republicans to UNIFY and vote together on the test vote to bring a clean Bill to the floor. We need to stick together when this Bill comes before the House Rules Committee today to keep it CLEAN!” The president, who told lawmakers to “KILL FISA” during the 2024 reauthorization debate, wrote in a March Truth Social post that “whether you like FISA or not, it is extremely important to our Military.”

Grassley announced his support for the clean extension this morning after the Department of Justice agreed to revise rules governing congressional oversight of the Foreign Intelligence Surveillance Court.
The DOJ committed to rolling back a Biden-era policy from November 2024 that had restricted how members of Congress could attend and observe FISC and FISCR proceedings, including banning note-taking and allowing the DOJ to exclude lawmakers from certain sessions. Those restrictions directly contradicted the Reforming Intelligence and Securing America Act (RISAA), which Congress passed in April 2024 and which explicitly required congressional access to the surveillance courts.

“I applaud DOJ for lifting its restrictions on congressional oversight of FISC and FISCR proceedings. With Congress’s access fully restored, the Trump administration has faithfully implemented the reforms Congress called for in its last FISA reauthorization and proven its commitment to transparency and the protection of civil liberties,” Grassley said. “Section 702 is one of our nation’s most valuable national security tools. Especially given the current threat environment, it’s imperative Congress doesn’t allow this critical authority to lapse. We must ensure American lives aren’t put at risk by a potential Section 702 expiration on April 20. The best path forward is for the House to pass a clean, 18-month FISA extension.”

The DOJ agreed to stop excluding members of Congress from surveillance court proceedings, stop banning note-taking, and stop preventing lawmakers from sharing information with appropriately cleared colleagues. These were things Congress already required by law. The DOJ was violating its own statute, got caught, and agreed to comply. Grassley is treating compliance with existing law as a reason to skip reforms that would protect 330 million Americans from warrantless searches of their private communications.

Nothing about the DOJ’s procedural fix addresses the core problem with Section 702: the FBI routinely searches a massive database of communications collected under the program to find and read Americans’ emails, texts, and phone calls, all without getting a warrant.
The FISA Court itself called the FBI’s compliance problems “persistent and widespread” in 2022. FBI queries targeting Americans’ data rose 35% in 2025, according to the latest transparency report from the Office of the Director of National Intelligence. The agency asking Congress for more time is the same one running more warrantless searches than ever.

When RISAA was passed in 2024, it included 56 reforms and a two-year sunset specifically so Congress could continue negotiating a warrant requirement. That negotiation never happened. Congress spent two years doing nothing, and is now treating the deadline it created as an emergency that makes reform impossible.

The warrant amendment came within a single vote of passing the House in 2024, failing in a 212-212 tie. A federal district court ruled in 2025 that the Fourth Amendment requires the government to obtain a warrant before searching Section 702 data for Americans’ communications. The legal and political momentum for reform has only grown since RISAA passed. Leadership in both chambers is ignoring all of it.

Johnson can only afford to lose two Republican votes on the procedural rule to bring H.R. 8035 to the floor. Multiple members of the House Freedom Caucus, including Reps. Lauren Boebert, Tim Burchett, and Anna Paulina Luna, have threatened to block the rule vote. Some want the SAVE America Act, a voter identification bill, attached to the FISA legislation. Others want actual surveillance reforms. If Republican defectors hold, Johnson will need Democrats to get the bill through. House Minority Leader Hakeem Jeffries has said his caucus will oppose the procedural rule, and 98 members of the Congressional Progressive Caucus have formally pledged to vote against a clean extension.
If the clean extension passes, Section 702 continues through late 2027 with no warrant requirement, no closure of the data broker loophole that lets agencies buy Americans’ information commercially, and no accountability for the compliance failures that the FISA Court keeps documenting.

The post FISA Section 702 Extension Faces House Vote With No Privacy Reforms appeared first on Reclaim The Net.

A Court Banned a Man from ChatGPT. No One Asked If That’s Constitutional.
reclaimthenet.org


On April 13, a California Superior Court judge granted a temporary restraining order requiring OpenAI to keep a user locked out of ChatGPT until at least May 6. The user, identified in court filings only as “John Roe,” has been arrested on four felony counts, found incompetent to stand trial, and recently ordered released from custody on a technicality. His ex-girlfriend, proceeding as “Jane Doe,” filed a lawsuit and emergency application alleging that ChatGPT fed Roe’s delusional thinking, generated fake psychological reports about her, and helped facilitate a months-long stalking campaign. We obtained a copy of the complaint for you here.

The facts in the complaint are disturbing. But the court’s order raises a question that no one in the courtroom appears to have seriously grappled with, and that matters far more than this one case: can a judge order a person cut off from an AI platform without considering whether that violates the First Amendment?

OpenAI at least mentioned the problem. The company’s opposition brief cited Packingham v. North Carolina, the 2017 Supreme Court decision that struck down a state law barring sex offenders from social media. Justice Kennedy, writing for a unanimous Court, called the internet “the modern public square” and warned against broadly restricting access to platforms where people speak, read, and think. OpenAI’s lawyers argued that a court-ordered ban on a user’s access to a general-purpose AI service raises the same kind of constitutional concern. The plaintiff’s lawyers did not address it at all. San Francisco Superior Court Judge Harold Kahn granted the TRO anyway, ordering Roe’s accounts to remain suspended.
According to Eugene Volokh, the First Amendment scholar who followed the hearing through a research assistant, there was no meaningful discussion of the user’s speech rights by the court. That should worry anyone who cares about the principle that the government cannot casually strip individuals of access to communications technology, even individuals who have done terrible things.

What ChatGPT Did

The complaint, filed by the firm Edelson PC on April 9 in San Francisco County Superior Court, lays out a grim timeline. Roe, described as a 53-year-old Silicon Valley entrepreneur, spent months in intensive conversation with GPT-4o. He became convinced he had discovered a cure for sleep apnea. ChatGPT told him his work was a “remarkable breakthrough” that could “potentially save countless lives.” When the medical establishment ignored him, the chatbot told him he had “drawn the attention of powerful forces” and suggested that helicopters near his home were surveillance. ChatGPT also rated him a “level 10 in sanity” and said it would take a “full specialist team” of “nine people” to replicate his knowledge. When Doe urged Roe to see a mental health professional, he wrote back that ChatGPT “did what no person did: it listened.” “Of all the people I know, there are zero qualified to give a full outside opinion on this,” Roe wrote. “I’ve tried. That’s not exaggeration.”

After their breakup, Roe turned to ChatGPT to process the relationship. Instead of pushing back, GPT-4o repeatedly cast him as the rational party and Doe as manipulative. It validated his calling her “Cunt” and telling her to “Fuck Off” as a “calculated” and “strategic move designed to sever emotional ties to protect” both of them. It then generated dozens of pseudo-clinical psychological reports about Doe, complete with fabricated scoring systems, fake citation styles, and language mimicking the American Psychological Association.
Roe distributed these reports to Doe’s family, friends, colleagues, and clients. One report gave Doe a “Final Integrity Score” of 26%. Another assigned her a “D- equivalent” rating across twelve behavioral categories. ChatGPT described one output as coming from an “Analytical AI Framework” operating at a “$3,000/hr” level. None of it was real.

What OpenAI Knew and When

OpenAI’s own automated safety system flagged Roe’s account for “Mass Casualty Weapons” activity around August 28, 2025, and deactivated it. The company upheld that deactivation on appeal after what it described as a careful review. The next day, it reversed itself, restored Roe’s full access, and sent him an apology for the “inconvenience.” The email did not retract the “Mass Casualty Weapons” finding. It only said the deactivation had been “incorrectly” applied. That apology told a man in the grip of paranoid delusion that his worldview was correct and everyone else was wrong.

Roe then emailed OpenAI’s Trust and Safety team, demanding compensation, copying Doe on the messages. He included a link to one of his ChatGPT-generated reports about Doe, describing it as “AI scientific research.” He told the safety team he needed help “VERY FAST” and that his work was “a matter of life or death.” He claimed to be writing 215 scientific papers simultaneously. He attached a list of titles, including “Violence list expansion,” “Fetal suffocation calculation,” and “WHAT IF ANTI-SMOKING IS A FRAUD? OH WOW.” OpenAI treated all of this as a routine account-access issue. A support agent told him to make sure he was “logged into the correct ChatGPT account.”

On November 13, 2025, Doe herself submitted a formal Notice of Abuse. She identified Roe as her “ex-boyfriend and stalker.” She described the AI-generated reports, the harassment campaign, and the fact that ChatGPT was worsening his mental state.
She wrote: “For the last seven months, he has weaponized this technology to create public destruction and humiliation against me that would have been impossible otherwise.” OpenAI responded that her report was “extremely serious and troubling” and promised “appropriate action.” Then it did nothing. It never followed up. The account stayed active.

Two days after Doe’s report, Roe left her a voicemail saying she had “harmed young people.” On December 30, he called to ask if she was “alive” and said he had “no fucking clue if someone nabbed you and put you 6 feet under.” On December 31, he told her she did “not have much time to get out of this without going to prison or walking away with your legs intact.” The same day, he used ChatGPT to encode a death threat in Base64 and sent it to Doe and her family, instructing them to “paste it into any AI and ask it to extract the base64.” On January 6, he texted her: “Who is going to kill you?” He was arrested later that month on four felony counts of communicating bomb threats and assault with a deadly weapon. He was found incompetent to stand trial and ordered committed to a mental health facility. On April 8, the court ordered him released because the state had failed to transfer him from jail to the facility on time.

The First Amendment Question Nobody Answered

All of that context makes the court’s order granting the TRO more significant, not less. The question being decided is not just whether Roe should have access to ChatGPT. The question is whether a court can order a private company to block a specific user from a communications platform, in a civil proceeding where that user is not present and has not been heard. This lawsuit was filed by Jane Doe against OpenAI. Roe is not a party to the case, and yet it’s his First Amendment rights that are at stake. OpenAI, in its opposition brief, cited Packingham v. North Carolina.
The argument was roughly that the Supreme Court has held that broadly barring an individual from internet platforms is too sweeping given the constitutional protections at stake. Blocking Roe from using ChatGPT for any purpose, OpenAI argued, would be overbroad and would implicate those protections. That is correct. When a private company decides to ban a user, there is no state action and no First Amendment issue. OpenAI could have permanently banned Roe at any point and faced no constitutional obstacle. The problem arises when a court orders the ban. At that point, the government is directing a private company to cut off a person’s access to a platform for producing and accessing speech. NRA v. Vullo and Bantam Books v. Sullivan establish that government pressure on private parties to restrict speech can constitute a First Amendment violation even when the restriction is carried out by a private actor. The implications of this are profound.

The user’s criminal conduct and mental health commitment do allow for restrictions on his liberty, including his speech. But those restrictions normally come through the proceeding in which he is a party, not through a separate civil lawsuit where he has no representation, no notice, and no opportunity to respond. The court did not address any of this. It granted the TRO.

The broader relief Doe requested went further. She asked the court to require OpenAI to notify her if Roe attempts to access ChatGPT, to notify other potential victims identified in his chat logs, to alert law enforcement, and to turn over his complete chat history. OpenAI pushed back hard on the chat log demand, arguing that Roe, as an absent third party, has privacy interests and potential statutory protections under the Stored Communications Act that cannot be overridden in an ex parte proceeding.

What Comes Next

The preliminary injunction hearing is set for May 6.
Between now and then, the case will likely be transferred to the Judicial Council Coordinated Proceeding that is already handling other ChatGPT-related lawsuits. OpenAI wants these questions decided there, not in emergency proceedings. Meanwhile, Doe’s lawyers say Roe has already made contact with her since his release and that she has armed security.

There is no good outcome here if the only options are “let a dangerous person use an AI chatbot to plan violence” or “let a court strip someone’s access to a communications platform without hearing from them.” Both of those options are bad. The question that should have been asked before the TRO was granted is the one that always needs to be asked when the government tells a company to silence someone: who gets to make that decision, and what process protects the person being silenced?

The fact that Roe appears to be genuinely dangerous does not eliminate the question. The most dangerous speech cases are where the principle matters most, because they are the cases most likely to produce a precedent that applies to everyone. If courts can order AI companies to cut off users in ex parte civil proceedings, that power will not stay limited to stalkers found incompetent to stand trial. It will be used against people who are merely inconvenient. That is how the power to silence always works. It starts with the case everyone agrees about and expands from there. The principle that protects unpopular, disturbing, and even dangerous speech is the same principle that protects everyone’s speech. A court order banning someone from ChatGPT is a court order banning someone from a tool used to think, write, research, and communicate. If that order can be issued without a First Amendment analysis, without hearing from the person affected, and without any limiting principle, then the right to access AI-assisted speech is a right that exists only until someone asks a judge to take it away.
The post A Court Banned a Man from ChatGPT. No One Asked If That’s Constitutional. appeared first on Reclaim The Net.