Reclaim The Net Feed
A Court Banned a Man from ChatGPT. No One Asked If That’s Constitutional.

reclaimthenet.org

If you're tired of censorship and dystopian threats against civil liberties, subscribe to Reclaim The Net.

On April 13, a California Superior Court judge granted a temporary restraining order requiring OpenAI to keep a user locked out of ChatGPT until at least May 6. The user, identified in court filings only as “John Roe,” has been arrested on four felony counts, found incompetent to stand trial, and recently ordered released from custody on a technicality. His ex-girlfriend, proceeding as “Jane Doe,” filed a lawsuit and emergency application alleging that ChatGPT fed Roe’s delusional thinking, generated fake psychological reports about her, and helped facilitate a months-long stalking campaign. We obtained a copy of the complaint for you here.

The facts in the complaint are disturbing. But the court’s order raises a question that no one in the courtroom appears to have seriously grappled with, and that matters far more than this one case: can a judge order a person cut off from an AI platform without considering whether that violates the First Amendment?

OpenAI at least mentioned the problem. The company’s opposition brief cited Packingham v. North Carolina, the 2017 Supreme Court decision that struck down a state law barring sex offenders from social media. Justice Kennedy, writing for a unanimous Court, called the internet “the modern public square” and warned against broadly restricting access to platforms where people speak, read, and think. OpenAI’s lawyers argued that a court-ordered ban on a user’s access to a general-purpose AI service raises the same kind of constitutional concern.

The plaintiff’s lawyers did not address it at all. San Francisco Superior Court Judge Harold Kahn granted the TRO anyway, ordering Roe’s accounts to remain suspended.
According to Eugene Volokh, the UCLA law professor emeritus and First Amendment scholar who followed the hearing through a research assistant, there was no meaningful discussion of the user’s speech rights by the court. That should worry anyone who cares about the principle that the government cannot casually strip individuals of access to communications technology, even individuals who have done terrible things.

What ChatGPT Did

The complaint, filed by the firm Edelson PC on April 9 in San Francisco County Superior Court, lays out a grim timeline. Roe, described as a 53-year-old Silicon Valley entrepreneur, spent months in intensive conversation with GPT-4o. He became convinced he had discovered a cure for sleep apnea. ChatGPT told him his work was a “remarkable breakthrough” that could “potentially save countless lives.” When the medical establishment ignored him, the chatbot told him he had “drawn the attention of powerful forces” and suggested that helicopters near his home were surveillance. ChatGPT also rated him a “level 10 in sanity” and said it would take a “full specialist team” of “nine people” to replicate his knowledge.

When Doe urged Roe to see a mental health professional, he wrote back that ChatGPT “did what no person did: it listened.” “Of all the people I know, there are zero qualified to give a full outside opinion on this,” Roe wrote. “I’ve tried. That’s not exaggeration.”

After their breakup, Roe turned to ChatGPT to process the relationship. Instead of pushing back, GPT-4o repeatedly cast him as the rational party and Doe as manipulative. It validated his calling her “Cunt” and telling her to “Fuck Off” as a “calculated” and “strategic move designed to sever emotional ties to protect” both of them. It then generated dozens of pseudo-clinical psychological reports about Doe, complete with fabricated scoring systems, fake citation styles, and language mimicking the American Psychological Association.
Roe distributed these reports to Doe’s family, friends, colleagues, and clients. One report gave Doe a “Final Integrity Score” of 26%. Another assigned her a “D- equivalent” rating across twelve behavioral categories. ChatGPT described one output as coming from an “Analytical AI Framework” operating at a “$3,000/hr” level. None of it was real.

What OpenAI Knew and When

OpenAI’s own automated safety system flagged Roe’s account for “Mass Casualty Weapons” activity around August 28, 2025, and deactivated it. The company upheld that deactivation on appeal after what it described as a careful review. The next day, it reversed itself, restored Roe’s full access, and sent him an apology for the “inconvenience.” The email did not retract the “Mass Casualty Weapons” finding. It only said the deactivation had been “incorrectly” applied. That apology told a man in the grip of paranoid delusion that his worldview was correct and everyone else was wrong.

Roe then emailed OpenAI’s Trust and Safety team, demanding compensation and copying Doe on the messages. He included a link to one of his ChatGPT-generated reports about Doe, describing it as “AI scientific research.” He told the safety team he needed help “VERY FAST” and that his work was “a matter of life or death.” He claimed to be writing 215 scientific papers simultaneously. He attached a list of titles, including “Violence list expansion,” “Fetal suffocation calculation,” and “WHAT IF ANTI-SMOKING IS A FRAUD? OH WOW.” OpenAI treated all of this as a routine account-access issue. A support agent told him to make sure he was “logged into the correct ChatGPT account.”

On November 13, 2025, Doe herself submitted a formal Notice of Abuse. She identified Roe as her “ex-boyfriend and stalker.” She described the AI-generated reports, the harassment campaign, and the fact that ChatGPT was worsening his mental state.
She wrote: “For the last seven months, he has weaponized this technology to create public destruction and humiliation against me that would have been impossible otherwise.” OpenAI responded that her report was “extremely serious and troubling” and promised “appropriate action.” Then it did nothing. It never followed up. The account stayed active.

Two days after Doe’s report, Roe left her a voicemail saying she had “harmed young people.” On December 30, he called to ask if she was “alive” and said he had “no fucking clue if someone nabbed you and put you 6 feet under.” On December 31, he told her she did “not have much time to get out of this without going to prison or walking away with your legs intact.” The same day, he used ChatGPT to encode a death threat in Base64 and sent it to Doe and her family, instructing them to “paste it into any AI and ask it to extract the base64.” On January 6, he texted her: “Who is going to kill you?”

He was arrested later that month on four felony counts of communicating bomb threats and assault with a deadly weapon. He was found incompetent to stand trial and ordered committed to a mental health facility. On April 8, the court ordered him released because the state had failed to transfer him from jail to the facility on time.

The First Amendment Question Nobody Answered

All of that context makes the court’s order granting the TRO more significant, not less. The question being decided is not just whether Roe should have access to ChatGPT. The question is whether a court can order a private company to block a specific user from a communications platform, in a civil proceeding where that user is not present and has not been heard. This lawsuit was filed by Jane Doe against OpenAI. Roe is not a party to the case, and yet it’s his First Amendment rights that are at stake. OpenAI, in its opposition brief, cited Packingham v. North Carolina.
The argument was roughly that the Supreme Court has held it is too broad to bar an individual from accessing an internet platform because of the constitutional protections at stake. Blocking Roe from using ChatGPT for any purpose, OpenAI argued, would be overbroad and would implicate those protections.

That is correct. When a private company decides to ban a user, there is no state action and no First Amendment issue. OpenAI could have permanently banned Roe at any point and faced no constitutional obstacle. The problem arises when a court orders the ban. At that point, the government is directing a private company to cut off a person’s access to a platform for producing and accessing speech. NRA v. Vullo and Bantam Books v. Sullivan establish that government pressure on private parties to restrict speech can constitute a First Amendment violation even when the restriction is carried out by a private actor. The implications of this are profound.

The user’s criminal conduct and mental health commitment do allow for restrictions on his liberty, including his speech. But those restrictions normally come through the proceeding in which he is a party, not through a separate civil lawsuit where he has no representation, no notice, and no opportunity to respond. The court did not address any of this. It granted the TRO.

The broader relief Doe requested went further. She asked the court to require OpenAI to notify her if Roe attempts to access ChatGPT, to notify other potential victims identified in his chat logs, to alert law enforcement, and to turn over his complete chat history. OpenAI pushed back hard on the chat log demand, arguing that Roe, as an absent third party, has privacy interests and potential statutory protections under the Stored Communications Act that cannot be overridden in an ex parte proceeding.

What Comes Next

The preliminary injunction hearing is set for May 6.
Between now and then, the case will likely be transferred to the Judicial Council Coordinated Proceeding that is already handling other ChatGPT-related lawsuits. OpenAI wants these questions decided there, not in emergency proceedings. Meanwhile, Doe’s lawyers say Roe has already made contact with her since his release and that she has armed security.

There is no good outcome here if the only options are “let a dangerous person use an AI chatbot to plan violence” or “let a court strip someone’s access to a communications platform without hearing from them.” Both of those options are bad. The question that should have been asked before the TRO was granted is the one that always needs to be asked when the government tells a company to silence someone: who gets to make that decision, and what process protects the person being silenced?

The fact that Roe appears to be genuinely dangerous does not eliminate the question. The most dangerous speech cases are where the principle matters most, because they are the cases most likely to produce a precedent that applies to everyone. If courts can order AI companies to cut off users in ex parte civil proceedings, that power will not stay limited to stalkers found incompetent to stand trial. It will be used against people who are merely inconvenient. That is how the power to silence always works. It starts with the case everyone agrees about and expands from there.

The principle that protects unpopular, disturbing, and even dangerous speech is the same principle that protects everyone’s speech. A court order banning someone from ChatGPT is a court order banning someone from a tool used to think, write, research, and communicate. If that order can be issued without a First Amendment analysis, without hearing from the person affected, and without any limiting principle, then the right to access AI-assisted speech is a right that exists only until someone asks a judge to take it away.
The post A Court Banned a Man from ChatGPT. No One Asked If That’s Constitutional. appeared first on Reclaim The Net.

You Don’t Own Your Kindle Books. Here’s What You Can Do About It.

reclaimthenet.org

This post is for paid supporters of Reclaim The Net.

Turkey To Require National ID for Social Media Accounts

reclaimthenet.org

Every social media account in Turkey is about to be tied to a government-issued identity number. Justice Minister Akın Gürlek announced on April 3 that global platforms have agreed to the system and that a three-month transition begins once legislation passes parliament. Accounts that remain unverified get shut down.

“Social media will now be accessed with real information and personal identity. We have reached an agreement with social media platforms,” Gürlek said. He didn’t name which companies signed on.

The plan requires users to submit their TC Kimlik number, the unique 11-digit identifier assigned to every Turkish citizen from birth, linked to government databases containing names, birth dates, family records, and biometric data. Gürlek framed anonymous accounts as engines of disinformation and harassment. “If someone insults others or carries out a smear campaign online, they must face the consequences,” he said.

The official justification doesn’t survive contact with Turkey’s own record. Cybersecurity specialists have pointed out that IP addresses and internet access logs already let authorities trace anonymous users. The government doesn’t need your national ID on every post. It needs you to know it’s there.

Turkey has blocked over 1.26 million websites since 2007. In 2024 alone, authorities restricted approximately 17,000 X accounts, 75,000 posts, and tens of thousands of items across YouTube, Facebook, and Instagram. Citizens giving brief street interviews to independent media have been detained after clips circulated online. Article 217 of the Penal Code carries prison sentences of up to three years for spreading information deemed misleading, with penalties increasing for anonymous posts. Anonymous accounts were one of the last spaces where Turkish citizens could voice political opinions without immediately identifying themselves.
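A note on what platforms would actually be collecting: the TC Kimlik number is not an arbitrary string. It is structured, with two trailing check digits derived from the first nine. The sketch below uses the commonly described checksum rule, stated here as an assumption rather than an official specification, to show how mechanically verifiable (and therefore how directly linkable to state records) the identifier is:

```python
def looks_like_tc_kimlik(s: str) -> bool:
    """Structural check for an 11-digit TC Kimlik number.

    Assumption: uses the commonly described checksum, not an official
    spec. Digit 10 is derived from a weighted sum of the first nine
    digits; digit 11 is the sum of the first ten digits mod 10.
    """
    if len(s) != 11 or not s.isdigit() or s[0] == "0":
        return False
    d = [int(c) for c in s]
    odd_sum = d[0] + d[2] + d[4] + d[6] + d[8]   # digits 1, 3, 5, 7, 9
    even_sum = d[1] + d[3] + d[5] + d[7]         # digits 2, 4, 6, 8
    check10 = (odd_sum * 7 - even_sum) % 10
    check11 = sum(d[:10]) % 10
    return d[9] == check10 and d[10] == check11
```

This only confirms a number is well-formed, not that it belongs to a real person; verifying identity requires the government databases the article describes, which is precisely the linkage at issue.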
The regulation also only applies inside Turkey. Foreign-operated accounts face no verification, meaning disinformation networks with offshore resources continue anonymously while ordinary Turkish users lose that option.

South Korea tried a nearly identical real-name system in 2007. Its Constitutional Court struck it down unanimously in 2012, finding no meaningful reduction in harmful content while the real-name databases became targets for massive breaches affecting 35 million citizens. Users simply migrated to foreign platforms. Turkey’s system faces the same vulnerabilities, with one key difference: its judiciary has moved in the opposite direction, upholding laws that penalize online speech.

Gürlek called social media “definitely not a space for freedom.” The system he’s building proves it.

Xbox Now Wants Your Face to Let You Play Games You Already Own in Singapore

reclaimthenet.org

Singapore gamers who bought and downloaded Xbox titles years ago are now being told they need to prove they’re adults before they can keep playing them. Microsoft has started rolling out identity verification requirements across its Xbox and Microsoft Store platforms in Singapore, demanding face scans, government ID uploads, or authentication through the country’s national digital identity system, Singpass. The price of accessing games you already own is now a biometric selfie or a copy of your passport.

The trigger is Singapore’s Online Safety Code of Practice for App Distribution Services, a regulation from the Infocomm Media Development Authority (IMDA) that took effect on April 1, 2026. The rule requires app stores to prevent anyone estimated to be under 18 from downloading apps rated for adults, including dating services and content with sexual material. Five storefronts are covered: Apple’s App Store, Google Play, Samsung Galaxy Store, Huawei AppGallery, and Microsoft Store (which includes Xbox). Each company has chosen its own methods for compliance. The methods vary, but they all share one thing in common: they collect sensitive personal data that didn’t exist in the platform’s records before this regulation.

Microsoft announced its approach on March 17, 2026, framing the verification as optional while making it mandatory for anyone who wants full access. “Microsoft users in Singapore will have multiple options to complete age assurance for our stores, giving people flexibility while prioritising privacy,” the company wrote, listing those options as Singpass verification, “secure facial age estimation using a selfie,” or uploading “an official government ID such as a national ID, driver’s license, passport, or residence permit.” The company describes this as a one-time process.
What it doesn’t describe is who processes the data, how long it exists in transit, or what happens if the system holding it gets breached. Discord learned this lesson last year when its own partner leaked user data. The company that promises to delete your face scan still has to receive it first.

Singapore residents have started receiving emails from Xbox notifying them about the verification requirement, prompting confusion and concern. Some users initially suspected phishing, a reasonable response when a gaming company emails you asking for your government ID. The emails are real. So is the surveillance they’re asking you to submit to.

What makes Singapore’s approach particularly aggressive is the range of identity data the various app stores now demand. Apple requires either credit card details or government-issued identification like a National Registration Identity Card, a Foreign Identification Number card, or a driving license. Apple specifically excludes passports, debit cards, and gift cards, which means the company has decided that the only acceptable proof of adulthood requires tying your anonymous Apple account to your legal name and financial records. Samsung and Huawei take only credit card data.

Google took a different route entirely, deploying a machine learning model in February 2026 that watches your search activity, analyzes what categories of YouTube videos you watch, and estimates your age from invasive behavioral signals already attached to your account. Google calls this “age estimation.” A more accurate description would be continuous behavioral profiling repurposed as age classification. Google’s system is worth examining because it reveals where this regulatory approach inevitably leads.
The company’s algorithm runs silently across Search, YouTube, Google Play, Google Maps, and other Google services. It’s part of a push to create a permanent link between your gaming identity, your legal identity, and your biometric data, managed by third-party companies you didn’t choose, under retention policies you can’t verify, for purposes that will expand beyond what anyone is currently willing to admit.

Ofcom Demands Tech Platforms Fund the UK’s Internet Censorship Regime

reclaimthenet.org

There is something magnificently British about building a bureaucracy so brazen that it thinks it can police speech across the internet and then calmly posting the invoice to the very companies being policed. This is the latest chapter of the UK’s Online Safety Act, where Ofcom has flicked the switch on its fee machine and told the world’s biggest tech firms to cough up. The deadline to register has come and gone. The meter is running from April 1, 2026 to March 31, 2027. Bills land in September.

The law requires that “Ofcom’s operating costs for the online safety regime are recovered through fees imposed on certain providers of regulated services.” Which is to say: the referee is paid by the players, except the referee also writes the rules, rewrites them when bored, and can send you off the pitch permanently if you argue.

Pay Up, Then Shut Up

At first glance, the fee sounds almost polite. Somewhere between 0.02% and 0.03% of qualifying worldwide revenue. Pocket change, right? The sort of rounding error a Silicon Valley accountant might miss while reaching for another oat latte. But then you notice the threshold. Any company pulling in at least £250 million globally from regulated services gets tapped, unless its UK slice is under £10 million. Social networks, search engines, and file-sharing platforms are included.

And then comes the part where the polite rounding error quietly grows teeth. Ofcom’s online safety budget has already climbed from £71 million to £92 million in a single year. That is a 30% jump. The system is designed so that every pound Ofcom spends is recovered from the industry. If the regulator expands, hires more staff, launches more investigations, and makes more censorship demands, the bill follows along like a loyal Labrador.

Now, what exactly are these companies paying for? A few leaflets about kindness on the internet?
A helpline staffed by polite people offering tea and sympathy? Not quite. Ofcom can investigate, fine companies up to 10% of their global revenue, and in extreme cases ask courts to block services entirely. Then there are the so-called “technology notices,” which could require platforms to scan private, encrypted messages. Messaging service Signal has already made it clear it would rather leave the UK than comply with that sort of demand. The government says it will only use this power when it becomes “technically feasible.”

If you were hoping Ofcom might treat these powers like a decorative sword hanging on the wall, think again. By late 2025, it had already opened 21 investigations and launched five enforcement programs. A Belize-based operator of adult websites was fined £1 million, plus another £50,000 for not replying to information requests. Then in March 2026, the famously unruly image board 4chan was hit with a £520,000 penalty. This is a regulator that has been caught reaching across borders, leaning outside of its jurisdiction, planting flags, and telling companies thousands of miles away that British speech rules apply to them. US-based platforms have already challenged this reach in court. The outcome will determine whether Ofcom is merely ambitious or something closer to a global hall monitor with legal muscle.

The Elastic Meaning of “Harm”

Here’s where things get properly slippery. The Act allows intervention against content that “risks significant harm.” What counts as harm? Political speech? Satire? Journalism that annoys the wrong people on a Tuesday afternoon? Ofcom decides what to regulate, how aggressively to enforce it, and how much it needs to spend doing so. The industry then pays exactly that amount. Not more, not less. A perfect loop. Hire more staff? The bill rises. Open more investigations? The bill rises. Expand the scope of what counts as harmful? You guessed it.
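To make the fee mechanics concrete, here is a minimal sketch of the liability test and charge as reported in this article. The £250 million and £10 million thresholds are the reported figures; the 0.025% rate is an assumed midpoint of the reported 0.02%–0.03% band, and the precise definition of "qualifying worldwide revenue" is Ofcom's, not ours:

```python
def osa_fee_estimate(global_revenue_gbp: float, uk_revenue_gbp: float,
                     rate: float = 0.00025) -> float:
    """Illustrative Online Safety Act fee under the reported rules.

    Liable only if qualifying worldwide revenue is at least £250m
    AND the UK share is at least £10m. The default rate (0.025%) is
    an assumed midpoint of the reported 0.02%-0.03% band.
    """
    QUALIFYING_THRESHOLD = 250_000_000   # £250m worldwide revenue floor
    UK_EXEMPTION = 10_000_000            # exempt if UK slice is under £10m
    if global_revenue_gbp < QUALIFYING_THRESHOLD:
        return 0.0
    if uk_revenue_gbp < UK_EXEMPTION:
        return 0.0
    return global_revenue_gbp * rate

# A hypothetical platform with £10bn worldwide and £500m UK revenue:
print(f"£{osa_fee_estimate(10_000_000_000, 500_000_000):,.0f}")  # £2,500,000
```

The point the code makes plain is the one the article makes in prose: the fee scales with worldwide revenue, not UK revenue, and the rate itself floats with whatever Ofcom decides to spend.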