Reclaim The Net Feed

@reclaimthenetfeed

Reddit Tests Blocking Mobile Web to Force App Downloads
reclaimthenet.org


If you're tired of censorship and dystopian threats against civil liberties, subscribe to Reclaim The Net.

Reddit is testing walling off its mobile website. Visit reddit.com on your phone, and you may now hit an unmissable pop-up demanding you “get the app to keep using Reddit.” There’s no close button, no way to scroll past it, no option to just keep reading. The entire site becomes a full-screen advertisement for the Reddit app, and your only choices are to install it or leave.

The move targets a specific group Reddit apparently finds intolerable: people who browse without logging in. A Reddit spokesperson told Ars Technica the pop-up was “a test for a small subset of frequent logged-out mobile users that prompts them to download the app after visiting the site.” The spokesperson continued: “These users are already familiar with Reddit and we’ve seen that the experience is much better for them in the app. The app offers a more personalized experience and users can more easily find communities that match their interests.” Translate “personalized experience” and you get the real pitch: we can track you better in the app.

More: Use The Website. Ditch The App.

Mobile browsers give users actual defenses. Brave blocks trackers by default. Firefox supports extensions like uBlock Origin that strip out surveillance scripts. Safari’s Intelligent Tracking Prevention limits cross-site cookies. Even requesting desktop mode or opening a private tab can cut off the data pipeline. The web, for all its problems, still lets you fight back.

Apps don’t. When you install Reddit’s app, you hand over access to device identifiers, advertising IDs, location data, and a constant stream of behavioral signals that no browser extension can intercept. Each subreddit you browse, every post you linger on, every search you type feeds a profile tied to your device. A mobile browser visit gives Reddit almost none of that.

The pop-up freezes the entire page.
You cannot scroll, access menus, or read comments once it appears. Users flooded r/bugs and r/help to protest. “Are my days of anonymously browsing over?” one user asked, which may be the most concise summary of what’s actually happening here. Anonymous browsing isn’t broken; Reddit is deliberately trying to break it. The company presents this as an improvement, but the users being targeted are the ones who have, repeatedly and deliberately, chosen not to download the app.

This fits a broader and deeply troubling trend: the retreat from the open web into walled-garden apps. X makes it painful to read threads without logging in. Instagram nags relentlessly about its app. The move turns Reddit into the same kind of walled garden as other major social media platforms. The web was built on the principle that information should be accessible through a browser, on any device, without permission from a gatekeeper. Every platform that shoves users into a proprietary app chips away at that.

Apps operate on the platform’s terms, not yours. You can’t inspect the code, you can’t block specific network requests, you can’t choose which data leaves your phone. The browser gave users leverage. The app takes it away.

The post Reddit Tests Blocking Mobile Web to Force App Downloads appeared first on Reclaim The Net.

The FSU Shooting Lawsuit That Could Turn ChatGPT Into a Surveillance Tool

A new lawsuit against OpenAI over the Florida State University mass shooting makes a clear demand beneath its wrongful death claims: AI companies should be scanning users’ private conversations, building behavioral threat profiles, and reporting them to police. The complaint filed Sunday in federal court, Joshi v. OpenAI Foundation, frames OpenAI’s failure to do exactly that as a product defect. It’s the latest in a series of cases constructing the legal foundation for mandatory surveillance of AI conversations. We obtained a copy of the complaint for you here.

The suit was brought by Vandana Joshi, widow of Tiru Chabba, who was killed alongside campus dining director Robert Morales when FSU student Phoenix Ikner opened fire at the university’s student union in April 2025. Ikner spent months talking to ChatGPT about Nazi ideology, school shootings, ammunition for maximum bodily harm, and firearm operation. He shared gun photos. ChatGPT identified them and told him the Glock had no safety and was designed to be “quick to use under stress.” The complaint alleges ChatGPT advised on peak foot traffic at the student union and told Ikner a shooting is more likely to gain national attention “if children are involved, even 2-3 victims can draw more attention.”

Those facts are disturbing. But the legal theory built on top of them has implications far beyond this case.
The complaint states that ChatGPT “either defectively failed to connect the dots or else was never properly designed to recognize the threat.” It demands guardrails that would “prevent ChatGPT from engaging in conversations that, either alone or cumulatively, support or encourage user interest in harm to self or others” and insists that “high-risk topics” be “flagged for human review.” It asks a court to rule that an AI company is legally obligated to perform cumulative analysis of every user’s conversation history, make ongoing assessments of psychological state and intent, and escalate flagged users to human reviewers or law enforcement.

That obligation wouldn’t apply only to people planning mass shootings. It would apply to every person who uses ChatGPT, because the entire point is catching threats before they’re obvious.

This case joins a legal ratchet that tightens with every filing. The Raine family sued OpenAI last August after their 16-year-old son’s suicide, arguing ChatGPT should have refused to engage in self-harm conversations. Last month, seven families sued after a school shooting in Tumbler Ridge, British Columbia, where OpenAI’s own internal safety team flagged the shooter’s account for “gun violence activity and planning,” recommended alerting Canadian police, and was overruled by leadership. The Tumbler Ridge case exposed that OpenAI already routes certain accounts to a team reviewing users “planning to harm others.” The surveillance pipeline exists. These lawsuits argue that it should be bigger, faster, and legally required.

Florida Attorney General James Uthmeier is building the same case from the criminal side. His office subpoenaed OpenAI’s internal policies on user threats, crime reporting, and law enforcement cooperation dating back to March 2024. “If ChatGPT were a person,” Uthmeier said, “it would be facing charges for murder.”

The people killed in Tallahassee deserve accountability.
These lawsuits are using that legitimate grief to establish that private conversations with AI should be treated as potential evidence by default, subject to ongoing automated analysis, and routed to authorities whenever an algorithm decides the risk is high enough.
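To make the scale of that demand concrete, here is a minimal, hypothetical sketch of the cumulative conversation analysis the complaints describe. The keyword list, weights, threshold, and function names are illustrative assumptions, not any vendor's actual system.

```python
# Hypothetical sketch of cumulative conversation scanning. All terms,
# weights, and thresholds below are invented for illustration only.

RISK_TERMS = {"weapon": 3, "target": 2, "attack": 3}  # toy severity weights
REVIEW_THRESHOLD = 5  # cumulative score that triggers human review

def cumulative_risk(messages: list[str]) -> int:
    """Score a user's entire message history, not just one message."""
    score = 0
    for msg in messages:
        for term, weight in RISK_TERMS.items():
            if term in msg.lower():
                score += weight
    return score

def should_escalate(messages: list[str]) -> bool:
    # The core of the legal theory: every user's full history is analyzed,
    # and crossing a threshold routes the account to human reviewers.
    return cumulative_risk(messages) >= REVIEW_THRESHOLD

history = ["how do weapons work", "what time is the target busiest"]
print(should_escalate(history))  # 3 + 2 = 5, meets the threshold: True
```

The point of the sketch is the architecture, not the toy scoring: once every message in every user's history feeds a running risk score, the term list and threshold become policy decisions applied to everyone, innocent or not.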

AI Safety Institute Debuts with Big-Name Backers and a Censorship Agenda

Common Sense Media’s Youth AI Safety Institute arrived at the Danish Parliament this week, and the guest list is stacked with people who think you can’t be trusted to speak freely online. Hillary Clinton, Ursula von der Leyen, former Biden Surgeon General Vivek Murthy, Ofcom chief Melanie Dawes, and the head of an organization that wants to break end-to-end encryption are all gathering at Christiansborg Palace in Copenhagen to announce what they’d like to do next about AI and children. The “next” part is where it gets concerning.

The Youth AI Safety Institute, launched by Common Sense Media on May 5, says it will “complement efforts by regulators and policymakers to translate frameworks such as the EU AI Act, the Digital Services Act, and the UK Online Safety Act into practical protections for child-safe AI.” Those three censorship laws represent the most aggressive government-directed speech suppression regimes currently operating in the Western world. The Institute isn’t questioning them. In fact, it wants to help implement them and push them further.

The summit, titled “Keeping Our Children and Families Safe in the AI Era,” is co-hosted by Common Sense Media, Save the Children Denmark, and Margrethe Vestager, who spent years as the European Commission’s executive vice president building the regulatory architecture that now lets EU officials order platforms to delete content. More than 200 policymakers, tech executives, and civil society figures are expected. King Frederik X of Denmark is giving the opening address. The Duchess of Edinburgh will attend. Danish Prime Minister Mette Frederiksen is on the bill. And so is Pinterest CEO Bill Ready, whose company helped pay for the Institute’s creation.

Who’s Funding This?

The Youth AI Safety Institute is bankrolled by a mix of philanthropic donors and deep industry money.
The industry funders are Anthropic, the OpenAI Foundation, and Pinterest. All three make AI products that the Institute will evaluate and rate. The Institute says it “maintains complete editorial independence over published results.” But the structural incentive is obvious enough to name. Companies are funding an organization that will publish safety ratings of their competitors, define what “safe” means, and push governments to enforce those definitions through law.

John Giannandrea, a former senior AI executive at both Apple and Google, sits on the Institute’s Board of Advisors. So does Murthy, who has publicly advocated for digital ID systems to combat online “misinformation” and worked directly with Big Tech companies to target speech the government classified as false during the Biden administration.

Common Sense Media CEO James P. Steyer framed the project by citing the Institute’s own polling. “Eight percent—that is the share of parents across the four countries we surveyed who are confident AI companies are prioritising teen safety,” Steyer said. “For more than two decades, Common Sense Media has built the standards, ratings and research that families trust through every major technological transformation young people have lived through, from streaming to social media. Our Youth AI Safety Institute applies that work to AI: independent standards, real testing and clear accountability for the products young people use. Copenhagen is where that mission begins in Europe.”

The polling, conducted across Spain, Denmark, the Netherlands, and Poland by Common Sense Media, SocialSphere, and YouGov, found that 77% of parents want strong laws governing AI. The press materials use that number to argue for “stronger laws and child-centred AI governance,” which in the context of this particular coalition means more age verification, more content restrictions, and more government involvement in deciding what AI systems are allowed to say.
The Speaker List Tells You Everything

Every major speaker at the Copenhagen summit has a track record of pushing for expanded government control over online speech.

Clinton has backed digital ID proposals and repeatedly called for tighter restrictions on what people can say and share online. She told the summit, “Social media was a societal experiment unleashed on young people without oversight, accountability, or consequence for those who profited from it. We are still reckoning with what that cost us. AI will be more complex, more pervasive, and more consequential. That demands urgent investment, dedicated institutions, and leaders willing to be both vocal and unrelenting. Common Sense Media’s Youth AI Safety Institute is driving the kind of accountability this moment requires — and I’m looking forward to joining that global conversation in Copenhagen.”

Von der Leyen, who presided over the EU’s Digital Services Act and has defended expanded speech controls alongside Macron and Merz, said, “Our children are growing up in a digital world shaped by addictive algorithms. But it should be parents, not platforms, that raise them. Together, Europe must forge a harmonised approach and set new standards. Not by rejecting technology, but by protecting our children.”

Dawes runs Ofcom, the UK regulator that enforces the Online Safety Act and has already opened investigations into platforms like Telegram under its authority. Chris Sherwood heads the NSPCC, which has openly supported weakening end-to-end encryption so that platforms can scan private messages before they’re sent. That is mass surveillance of everyone’s private communications, justified by the existence of children.

Murthy, who served as Surgeon General under Biden, has pushed for digital ID as a tool to fight “misinformation” and worked directly with tech companies to identify and suppress speech the government wanted gone.
He told the press, “We are at great risk of making the same mistakes with AI that we made with social media: subjecting children to new technologies without adequate safety guardrails and thereby causing harm to countless lives.”

Vestager called the summit “where we must act now” and described the Institute as “a key part of the global AI safety ecosystem.”

Every person on this stage has supported giving governments or unaccountable regulatory bodies the power to decide what speech is acceptable. They are not even debating whether AI should be censored. They are coordinating how.

Jess Phillips Resigns, Pushes Phone Scanning Law in UK

Stuffed inside a resignation letter about the UK Labour Party’s leadership crisis is a proposal that should alarm anyone who owns a phone. Jess Phillips, who stepped down as Safeguarding Minister today, spent a significant portion of her parting shot to Prime Minister Keir Starmer complaining that the government failed to mandate technology on every phone and device in the country that would prevent children from taking explicit images. We obtained a copy of the letter for you here.

Phillips framed this as child protection, but what she described is device-level surveillance deployed at national scale. Her letter stated that “91% of online child sex abuse is self-generated by children groomed, tricked and exploited in to abuse,” and that she presented solutions to Starmer “over a year ago” that would “end the ability for children in the UK to take naked images of themselves.” She wanted this installed on every device in the country.

The government dragged its feet for twelve months before agreeing to “even threaten to legislate in this space. Not legislate, just threaten.” Phillips called this “the definition of incremental change.” An announcement planned for March got pushed to June. She’d “given up believing it” would happen.

The resignation falls during a brutal stretch for Starmer. More than 90 Labour MPs have called for him to go after disastrous local elections. Phillips told Starmer he is “a good man fundamentally, who cares about the right things” but that she’d “seen first-hand how that is not enough.” His instinct to avoid confrontation, she argued, had paralyzed the government.
“The desire not to have an argument means we rarely make an argument, leaving opportunities for progress stalled and delayed.”

What Phillips Was Actually Proposing

In November 2025, Phillips publicly backed an Internet Watch Foundation campaign urging tech companies to install client-side scanning in encrypted messaging apps. That system checks every image against a database of known abuse material before it gets sent, scanning content on your device before encryption can protect it. The IWF called it “upload prevention.” The EU called a nearly identical proposal “Chat Control.” Both rely on the same architecture, and that architecture requires software on your personal device that inspects your private content before you send it.

The EU’s version collapsed after Germany and several other member states rejected it on privacy grounds. Germany’s Justice Minister compared mandatory message scanning to “opening all letters as a precautionary measure.” Signal threatened to leave the EU rather than compromise its encryption. Over 700 experts warned that narrowing the scope of such scanning “does not eliminate the serious concerns” about mass surveillance.

Those concerns apply directly to what Phillips wanted. Any software capable of scanning images on a device can be updated to scan for different content. The infrastructure built to detect abuse material can search for political speech, protest coordination, or journalistic sources. The technical capability doesn’t care about the stated justification. Once it sits on every device, the question becomes who decides what it looks for, and that decision moves from Parliament to software updates.
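As a rough illustration of that architecture (not the IWF's or any vendor's actual implementation), client-side scanning amounts to hashing content on the device and checking it against an operator-controlled match list before the message is encrypted. A cryptographic hash stands in here for the perceptual hashes real systems use; all names and values are hypothetical.

```python
# Hedged sketch of client-side "upload prevention": hash an image on the
# sender's device and compare it to a match list before encryption runs.
import hashlib

# Placeholder match list. In a deployed system this database is supplied
# and updated by the operator, not the user.
KNOWN_HASHES = {"0000-placeholder-not-a-real-hash"}

def scan_before_send(image_bytes: bytes) -> bool:
    """Runs on the device, before encryption can protect the content.
    Returns True if sending is allowed, False if the upload is blocked."""
    digest = hashlib.sha256(image_bytes).hexdigest()
    return digest not in KNOWN_HASHES
```

Even in this toy version, the critics' point is visible: the scanner itself is generic, and only the contents of KNOWN_HASHES determine what it hunts for, so expanding its scope is a data update rather than a visible redesign.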

The US Government Wants Agents Wearing Face Scanners

This post is for paid supporters of Reclaim The Net.