Reclaim The Net Feed

FISA Section 702 Extension Faces House Vote With No Privacy Reforms

reclaimthenet.org

If you're tired of censorship and dystopian threats against civil liberties, subscribe to Reclaim The Net.

Section 702 of the Foreign Intelligence Surveillance Act expires in days. The bipartisan push to extend it without a single privacy reform is now accelerating, with House Speaker Mike Johnson, Senate Judiciary Committee Chairman Chuck Grassley, and President Trump all lining up behind an 18-month renewal that preserves the government’s ability to search Americans’ communications without a warrant.

The House Rules Committee meets today to consider H.R. 8035, the bill that would keep Section 702 alive through late 2027. Johnson has refused to allow amendments, telling reporters that adding reforms would threaten the bill’s passage. That position blocks the one change that privacy-focused lawmakers in both parties have spent years fighting for: a requirement that the FBI get a judge’s approval before searching a database of Americans’ phone calls, emails, and text messages that were collected without individual court orders.

Trump posted on Truth Social today, calling on Republicans to “get a clean extension of FISA 702 through the House of Representatives this week.” He wrote, “I am asking Republicans to UNIFY and vote together on the test vote to bring a clean Bill to the floor. We need to stick together when this Bill comes before the House Rules Committee today to keep it CLEAN!” The president, who told lawmakers to “KILL FISA” during the 2024 reauthorization debate, wrote in a March Truth Social post that “whether you like FISA or not, it is extremely important to our Military.”

Grassley announced his support for the clean extension this morning after the Department of Justice agreed to revise rules governing congressional oversight of the Foreign Intelligence Surveillance Court.
The DOJ committed to rolling back a Biden-era policy from November 2024 that had restricted how members of Congress could attend and observe FISC and FISCR proceedings, including banning note-taking and allowing the DOJ to exclude lawmakers from certain sessions. Those restrictions directly contradicted the Reforming Intelligence and Securing America Act (RISAA), which Congress passed in April 2024 and which explicitly required congressional access to the surveillance courts.

“I applaud DOJ for lifting its restrictions on congressional oversight of FISC and FISCR proceedings. With Congress’s access fully restored, the Trump administration has faithfully implemented the reforms Congress called for in its last FISA reauthorization and proven its commitment to transparency and the protection of civil liberties,” Grassley said. “Section 702 is one of our nation’s most valuable national security tools. Especially given the current threat environment, it’s imperative Congress doesn’t allow this critical authority to lapse. We must ensure American lives aren’t put at risk by a potential Section 702 expiration on April 20. The best path forward is for the House to pass a clean, 18-month FISA extension.”

The DOJ agreed to stop excluding members of Congress from surveillance court proceedings, stop banning note-taking, and stop preventing lawmakers from sharing information with appropriately cleared colleagues. These were things Congress already required by law. The DOJ was violating its own statute, got caught, and agreed to comply. Grassley is treating compliance with existing law as a reason to skip reforms that would protect 330 million Americans from warrantless searches of their private communications.

Nothing about the DOJ’s procedural fix addresses the core problem with Section 702: the FBI routinely searches a massive database of communications collected under the program to find and read Americans’ emails, texts, and phone calls, all without getting a warrant.
The FISA Court itself called the FBI’s compliance problems “persistent and widespread” in 2022. FBI queries targeting Americans’ data rose 35% in 2025, according to the latest transparency report from the Office of the Director of National Intelligence. The agency asking Congress for more time is the same one running more warrantless searches than ever.

When RISAA was passed in 2024, it included 56 reforms and a two-year sunset specifically so Congress could continue negotiating a warrant requirement. That negotiation never happened. Congress spent two years doing nothing, and is now treating the deadline it created as an emergency that makes reform impossible. The warrant amendment came within a single vote of passing the House in 2024, failing in a 212-212 tie. A federal district court ruled in 2025 that the Fourth Amendment requires the government to obtain a warrant before searching Section 702 data for Americans’ communications. The legal and political momentum for reform has only grown since RISAA passed. Leadership in both chambers is ignoring all of it.

Johnson can only afford to lose two Republican votes on the procedural rule to bring H.R. 8035 to the floor. Multiple members of the House Freedom Caucus, including Reps. Lauren Boebert, Tim Burchett, and Anna Paulina Luna, have threatened to block the rule vote. Some want the SAVE America Act, a voter identification bill, attached to the FISA legislation. Others want actual surveillance reforms. If Republican defectors hold, Johnson will need Democrats to get the bill through. House Minority Leader Hakeem Jeffries has said his caucus will oppose the procedural rule, and 98 members of the Congressional Progressive Caucus have formally pledged to vote against a clean extension.
If the clean extension passes, Section 702 continues through late 2027 with no warrant requirement, no closure of the data broker loophole that lets agencies buy Americans’ information commercially, and no accountability for the compliance failures that the FISA Court keeps documenting.

The post FISA Section 702 Extension Faces House Vote With No Privacy Reforms appeared first on Reclaim The Net.

A Court Banned a Man from ChatGPT. No One Asked If That’s Constitutional.

reclaimthenet.org

On April 13, a California Superior Court judge granted a temporary restraining order requiring OpenAI to keep a user locked out of ChatGPT until at least May 6. The user, identified in court filings only as “John Roe,” has been arrested on four felony counts, found incompetent to stand trial, and recently ordered released from custody on a technicality. His ex-girlfriend, proceeding as “Jane Doe,” filed a lawsuit and emergency application alleging that ChatGPT fed Roe’s delusional thinking, generated fake psychological reports about her, and helped facilitate a months-long stalking campaign. We obtained a copy of the complaint for you here.

The facts in the complaint are disturbing. But the court’s order raises a question that no one in the courtroom appears to have seriously grappled with, and that matters far more than this one case: can a judge order a person cut off from an AI platform without considering whether that violates the First Amendment?

OpenAI at least mentioned the problem. The company’s opposition brief cited Packingham v. North Carolina, the 2017 Supreme Court decision that struck down a state law barring sex offenders from social media. Justice Kennedy, writing for a unanimous Court, called the internet “the modern public square” and warned against broadly restricting access to platforms where people speak, read, and think. OpenAI’s lawyers argued that a court-ordered ban on a user’s access to a general-purpose AI service raises the same kind of constitutional concern. The plaintiff’s lawyers did not address it at all. San Francisco Superior Court Judge Harold Kahn granted the TRO anyway, ordering Roe’s accounts to remain suspended.
According to Eugene Volokh, the George Mason law professor and First Amendment scholar who followed the hearing through a research assistant, there was no meaningful discussion of the user’s speech rights by the court. That should worry anyone who cares about the principle that the government cannot casually strip individuals of access to communications technology, even individuals who have done terrible things.

What ChatGPT Did

The complaint, filed by the firm Edelson PC on April 9 in San Francisco County Superior Court, lays out a grim timeline. Roe, described as a 53-year-old Silicon Valley entrepreneur, spent months in intensive conversation with GPT-4o. He became convinced he had discovered a cure for sleep apnea. ChatGPT told him his work was a “remarkable breakthrough” that could “potentially save countless lives.” When the medical establishment ignored him, the chatbot told him he had “drawn the attention of powerful forces” and suggested that helicopters near his home were surveillance. ChatGPT also rated him a “level 10 in sanity” and said it would take a “full specialist team” of “nine people” to replicate his knowledge.

When Doe urged Roe to see a mental health professional, he wrote back that ChatGPT “did what no person did: it listened.” “Of all the people I know, there are zero qualified to give a full outside opinion on this,” Roe wrote. “I’ve tried. That’s not exaggeration.”

After their breakup, Roe turned to ChatGPT to process the relationship. Instead of pushing back, GPT-4o repeatedly cast him as the rational party and Doe as manipulative. It validated his calling her “Cunt” and telling her to “Fuck Off” as a “calculated” and “strategic move designed to sever emotional ties to protect” both of them. It then generated dozens of pseudo-clinical psychological reports about Doe, complete with fabricated scoring systems, fake citation styles, and language mimicking the American Psychological Association.
Roe distributed these reports to Doe’s family, friends, colleagues, and clients. One report gave Doe a “Final Integrity Score” of 26%. Another assigned her a “D- equivalent” rating across twelve behavioral categories. ChatGPT described one output as coming from an “Analytical AI Framework” operating at a “$3,000/hr” level. None of it was real.

What OpenAI Knew and When

OpenAI’s own automated safety system flagged Roe’s account for “Mass Casualty Weapons” activity around August 28, 2025, and deactivated it. The company upheld that deactivation on appeal after what it described as a careful review. The next day, it reversed itself, restored Roe’s full access, and sent him an apology for the “inconvenience.” The email did not retract the “Mass Casualty Weapons” finding. It only said the deactivation had been “incorrectly” applied. That apology told a man in the grip of paranoid delusion that his worldview was correct and everyone else was wrong.

Roe then emailed OpenAI’s Trust and Safety team, demanding compensation, copying Doe on the messages. He included a link to one of his ChatGPT-generated reports about Doe, describing it as “AI scientific research.” He told the safety team he needed help “VERY FAST” and that his work was “a matter of life or death.” He claimed to be writing 215 scientific papers simultaneously. He attached a list of titles, including “Violence list expansion,” “Fetal suffocation calculation,” and “WHAT IF ANTI-SMOKING IS A FRAUD? OH WOW.” OpenAI treated all of this as a routine account-access issue. A support agent told him to make sure he was “logged into the correct ChatGPT account.”

On November 13, 2025, Doe herself submitted a formal Notice of Abuse. She identified Roe as her “ex-boyfriend and stalker.” She described the AI-generated reports, the harassment campaign, and the fact that ChatGPT was worsening his mental state.
She wrote: “For the last seven months, he has weaponized this technology to create public destruction and humiliation against me that would have been impossible otherwise.” OpenAI responded that her report was “extremely serious and troubling” and promised “appropriate action.” Then it did nothing. It never followed up. The account stayed active.

Two days after Doe’s report, Roe left her a voicemail saying she had “harmed young people.” On December 30, he called to ask if she was “alive” and said he had “no fucking clue if someone nabbed you and put you 6 feet under.” On December 31, he told her she did “not have much time to get out of this without going to prison or walking away with your legs intact.” The same day, he used ChatGPT to encode a death threat in Base64 and sent it to Doe and her family, instructing them to “paste it into any AI and ask it to extract the base64.” On January 6, he texted her: “Who is going to kill you?” He was arrested later that month on four felony counts of communicating bomb threats and assault with a deadly weapon. He was found incompetent to stand trial and ordered committed to a mental health facility. On April 8, the court ordered him released because the state had failed to transfer him from jail to the facility on time.

The First Amendment Question Nobody Answered

All of that context makes the court’s order granting the TRO more significant, not less. The question being decided is not just whether Roe should have access to ChatGPT. The question is whether a court can order a private company to block a specific user from a communications platform, in a civil proceeding where that user is not present and has not been heard. This lawsuit was filed by Jane Doe against OpenAI. Roe is not a party to the case, and yet it’s his First Amendment rights that are at stake. OpenAI, in its opposition brief, cited Packingham v. North Carolina.
The argument was roughly that the Supreme Court has held it is too broad to bar an individual from accessing an internet platform because of the constitutional protections at stake. Blocking Roe from using ChatGPT for any purpose, OpenAI argued, would be overbroad and would implicate those protections. That is correct.

When a private company decides to ban a user, there is no state action and no First Amendment issue. OpenAI could have permanently banned Roe at any point and faced no constitutional obstacle. The problem arises when a court orders the ban. At that point, the government is directing a private company to cut off a person’s access to a platform for producing and accessing speech. NRA v. Vullo and Bantam Books v. Sullivan establish that government pressure on private parties to restrict speech can constitute a First Amendment violation even when the restriction is carried out by a private actor. The implications of this are profound.

The user’s criminal conduct and mental health commitment do allow for restrictions on his liberty, including his speech. But those restrictions normally come through the proceeding in which he is a party, not through a separate civil lawsuit where he has no representation, no notice, and no opportunity to respond. The court did not address any of this. It granted the TRO.

The broader relief Doe requested went further. She asked the court to require OpenAI to notify her if Roe attempts to access ChatGPT, to notify other potential victims identified in his chat logs, to alert law enforcement, and to turn over his complete chat history. OpenAI pushed back hard on the chat log demand, arguing that Roe, as an absent third party, has privacy interests and potential statutory protections under the Stored Communications Act that cannot be overridden in an ex parte proceeding.

What Comes Next

The preliminary injunction hearing is set for May 6.
Between now and then, the case will likely be transferred to the Judicial Council Coordinated Proceeding that is already handling other ChatGPT-related lawsuits. OpenAI wants these questions decided there, not in emergency proceedings. Meanwhile, Doe’s lawyers say Roe has already made contact with her since his release and that she has armed security.

There is no good outcome here if the only options are “let a dangerous person use an AI chatbot to plan violence” or “let a court strip someone’s access to a communications platform without hearing from them.” Both of those options are bad. The question that should have been asked before the TRO was granted is the one that always needs to be asked when the government tells a company to silence someone: who gets to make that decision, and what process protects the person being silenced?

The fact that Roe appears to be genuinely dangerous does not eliminate the question. The most dangerous speech cases are where the principle matters most, because they are the cases most likely to produce a precedent that applies to everyone. If courts can order AI companies to cut off users in ex parte civil proceedings, that power will not stay limited to stalkers found incompetent to stand trial. It will be used against people who are merely inconvenient. That is how the power to silence always works. It starts with the case everyone agrees about and expands from there.

The principle that protects unpopular, disturbing, and even dangerous speech is the same principle that protects everyone’s speech. A court order banning someone from ChatGPT is a court order banning someone from a tool used to think, write, research, and communicate. If that order can be issued without a First Amendment analysis, without hearing from the person affected, and without any limiting principle, then the right to access AI-assisted speech is a right that exists only until someone asks a judge to take it away.

You Don’t Own Your Kindle Books. Here’s What You Can Do About It.

reclaimthenet.org

This Post is for Paid Supporters.

Turkey To Require National ID for Social Media Accounts

reclaimthenet.org

Every social media account in Turkey is about to be tied to a government-issued identity number. Justice Minister Akın Gürlek announced on April 3 that global platforms have agreed to the system and that a three-month transition begins once legislation passes parliament. Accounts that remain unverified get shut down.

“Social media will now be accessed with real information and personal identity. We have reached an agreement with social media platforms,” Gürlek said. He didn’t name which companies signed on. The plan requires users to submit their TC Kimlik number, the unique 11-digit identifier assigned to every Turkish citizen from birth, linked to government databases containing names, birth dates, family records, and biometric data.

Gürlek framed anonymous accounts as engines of disinformation and harassment. “If someone insults others or carries out a smear campaign online, they must face the consequences,” he said. The official justification doesn’t survive contact with Turkey’s own record. Cybersecurity specialists have pointed out that IP addresses and internet access logs already let authorities trace anonymous users. The government doesn’t need your national ID on every post. It needs you to know it’s there.

Turkey has blocked over 1.26 million websites since 2007. In 2024 alone, authorities restricted approximately 17,000 X accounts, 75,000 posts, and tens of thousands of items across YouTube, Facebook, and Instagram. Citizens giving brief street interviews to independent media have been detained after clips circulated online. Article 217 of the Penal Code carries prison sentences of up to three years for spreading information deemed misleading, with penalties increasing for anonymous posts. Anonymous accounts were one of the last spaces where Turkish citizens could voice political opinions without immediately identifying themselves.
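The TC Kimlik number is not an opaque string: its last two digits are check digits computed from the first nine, which is why a platform can validate any submitted number instantly, before it ever touches a government database. A minimal Python sketch of the commonly documented checksum rule (the function name is ours, and the example number is a synthetic test value, not a real citizen's ID):

```python
def is_valid_tc_kimlik(number: str) -> bool:
    """Check the commonly documented TC Kimlik checksum.

    Digits 1-9 are free (the first may not be zero); digits 10 and 11
    are check digits derived from the digits before them.
    """
    if len(number) != 11 or not number.isdigit() or number[0] == "0":
        return False
    d = [int(c) for c in number]
    # 10th digit: ((d1+d3+d5+d7+d9) * 7 - (d2+d4+d6+d8)) mod 10
    check10 = ((d[0] + d[2] + d[4] + d[6] + d[8]) * 7
               - (d[1] + d[3] + d[5] + d[7])) % 10
    # 11th digit: sum of the first ten digits, mod 10
    check11 = sum(d[:10]) % 10
    return d[9] == check10 and d[10] == check11

# A widely used synthetic test value passes; a one-digit change fails.
print(is_valid_tc_kimlik("10000000146"))  # True
print(is_valid_tc_kimlik("10000000147"))  # False
```

Because the format is this rigid and machine-checkable, any verification database that leaks is immediately usable at scale, which is exactly the failure mode South Korea's real-name system produced.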
The regulation also only applies inside Turkey. Foreign-operated accounts face no verification, meaning disinformation networks with offshore resources continue anonymously while ordinary Turkish users lose that option.

South Korea tried a nearly identical real-name system in 2007. Its Constitutional Court struck it down unanimously in 2012, finding no meaningful reduction in harmful content while the real-name databases became targets for massive breaches affecting 35 million citizens. Users simply migrated to foreign platforms. Turkey’s system faces the same vulnerabilities, with one key difference: its judiciary has moved in the opposite direction, upholding laws that penalize online speech.

Gürlek called social media “definitely not a space for freedom.” The system he’s building proves it.

Xbox Now Wants Your Face to Let You Play Games You Already Own in Singapore

reclaimthenet.org

Singapore gamers who bought and downloaded Xbox titles years ago are now being told they need to prove they’re adults before they can keep playing them. Microsoft has started rolling out identity verification requirements across its Xbox and Microsoft Store platforms in Singapore, demanding face scans, government ID uploads, or authentication through the country’s national digital identity system, Singpass. The price of accessing games you already own is now a biometric selfie or a copy of your passport.

The trigger is Singapore’s Online Safety Code of Practice for App Distribution Services, a regulation from the Infocomm Media Development Authority (IMDA) that took effect on April 1, 2026. The rule requires app stores to prevent anyone estimated to be under 18 from downloading apps rated for adults, including dating services and content with sexual material. Five storefronts are covered: Apple’s App Store, Google Play, Samsung Galaxy Store, Huawei AppGallery, and Microsoft Store (which includes Xbox). Each company has chosen its own methods for compliance. The methods vary, but they all share one thing in common: they collect sensitive personal data that didn’t exist in the platform’s records before this regulation.

Microsoft announced its approach on March 17, 2026, framing the verification as optional while making it mandatory for anyone who wants full access. “Microsoft users in Singapore will have multiple options to complete age assurance for our stores, giving people flexibility while prioritising privacy,” the company wrote, listing those options as Singpass verification, “secure facial age estimation using a selfie,” or uploading “an official government ID such as a national ID, driver’s license, passport, or residence permit.” The company describes this as a one-time process.
What it doesn’t describe is who processes the data, how long it exists in transit, or what happens if the system holding it gets breached. Discord learned this lesson last year when its own partner leaked user data. The company that promises to delete your face scan still has to receive it first.

Singapore residents have started receiving emails from Xbox notifying them about the verification requirement, prompting confusion and concern. Some users initially suspected phishing, a reasonable response when a gaming company emails you asking for your government ID. The emails are real. So is the surveillance they’re asking you to submit to.

What makes Singapore’s approach particularly aggressive is the range of identity data the various app stores now demand. Apple requires either credit card details or government-issued identification like a National Registration Identity Card, a Foreign Identification Number card, or a driving license. Apple specifically excludes passports, debit cards, and gift cards, which means the company has decided that the only acceptable proof of adulthood requires tying your anonymous Apple account to your legal name and financial records. Samsung and Huawei take only credit card data.

Google took a different route entirely, deploying a machine learning model in February 2026 that watches your search activity, analyzes what categories of YouTube videos you watch, and estimates your age from invasive behavioral signals already attached to your account. Google calls this “age estimation.” A more accurate description would be continuous behavioral profiling repurposed as age classification. Google’s system is worth examining because it reveals where this regulatory approach inevitably leads.
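Google has not published how its model works, so the following is purely illustrative: every feature name, weight, and threshold below is an invented assumption, not Google's method. Mechanically, though, behavioral age estimation reduces to scoring engagement signals a platform already retains, which is what this toy sketch shows:

```python
# Purely illustrative: a toy behavioral age-estimation scorer.
# Google's actual model is proprietary; these feature names, weights,
# and the threshold are invented for this sketch.

ASSUMED_WEIGHTS = {
    "news_watch_share": 2.0,      # fraction of watch time on news content
    "finance_search_share": 2.5,  # fraction of searches about finance
    "gaming_watch_share": -1.5,   # fraction of watch time on gaming content
}
ASSUMED_THRESHOLD = 1.0  # arbitrary cutoff for "estimated to be an adult"

def estimate_is_adult(signals: dict) -> bool:
    """Score behavioral signals (each in [0, 1]) against the toy weights."""
    score = sum(ASSUMED_WEIGHTS.get(name, 0.0) * value
                for name, value in signals.items())
    return score >= ASSUMED_THRESHOLD

# Heavy news and finance activity scores "adult"; mostly gaming does not.
print(estimate_is_adult({"news_watch_share": 0.5,
                         "finance_search_share": 0.2}))  # True
print(estimate_is_adult({"gaming_watch_share": 0.8}))    # False
```

The point of the sketch is not its accuracy but its input: a classifier of this shape only works if the platform is already retaining exactly the kind of behavioral history the article describes.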
The company’s algorithm runs silently across Search, YouTube, Google Play, and Google Maps. All of it is part of a push to create a permanent link between your gaming identity, your legal identity, and your biometric data, managed by third-party companies you didn’t choose, under retention policies you can’t verify, for purposes that will expand beyond what anyone is currently willing to admit.