Reclaim The Net Feed

@reclaimthenetfeed

UK Consults on Social Media Age Verification While Directing Parents to Report “Hate Speech” to Big Tech
reclaimthenet.org

If you're tired of censorship and dystopian threats against civil liberties, subscribe to Reclaim The Net.

The British government launched a consultation this week that could require age verification for anyone using social media, gaming sites, or AI chatbots. The consultation, titled “Growing up in the online world,” opened on March 2 and closes May 26, 2026. It asks the public whether the government should ban under-16s from social media entirely, impose mandatory overnight curfews on platform access, restrict AI chatbot features for minors, and require platforms to disable “addictive design features” like infinite scrolling and autoplay.

The government says it will respond in summer 2026, and Parliament has already handed ministers new legal powers to act on the findings without waiting for fresh primary legislation. The Prime Minister announced those powers on February 16, weeks before the consultation even opened. The government can now move faster once it decides what it wants. What the public thinks determines the packaging, not the destination.

Technology Secretary Liz Kendall framed it this way: “The path to a good life is a great childhood, one full of love, learning, and play. That applies just as much to the online world as it does to the real one.” The actual policy tools being considered are a different matter.

Age verification, as a mechanism, works by proving identity. Every user proves who they are. A social media platform that must exclude under-16s must verify the age of its over-16s. That means collecting identity documents, linking browsing activity to real identities, or building infrastructure that a government can later compel to serve other purposes. The surveillance architecture required to enforce a children’s safety law is the same architecture required to surveil adults. It gets built for one reason. It gets used for others.
Then there’s the “Help your child stay safe online” campaign site, which the government launched alongside the consultation. The site includes a page directing parents to report “bullying, threats, harassment, hate speech, and content promoting self-harm or suicide” directly to platforms, with links to the reporting tools of Instagram, Snapchat, Facebook, WhatsApp, TikTok, Discord, YouTube, and Twitch.

The government, through a campaign website, is now actively encouraging parents to funnel reports of “hate speech” to the same private companies that define what hate speech is. There’s no independent standard, no legal definition that applies consistently, and no oversight of what platforms do with those reports. Just a government directing citizen complaints into Big Tech’s moderation queues and presenting that as a safety feature.

“Hate speech” is one of those categories that sounds precise until you ask who decides. Platforms decide. They always have. What the government has done here is lend its authority to that process, making Big Tech’s internal moderation systems look like public infrastructure. They are not public infrastructure. They are corporate policies, applied inconsistently, without appeal, and with no democratic accountability.

The broader consultation asks whether the “digital age of consent” should be raised, whether mobile phone guidance for schools should become statutory, and how parental controls should be simplified. Education Secretary Bridget Phillipson said: “Technology is fundamentally changing childhood. Used well, it can open up new opportunities for learning, creativity, and connection, but only if we get the balance right.”

The balance the government is currently striking tilts heavily toward control. Mandatory curfews would let the government decide when young people can be online. Age verification would require platforms to know who everyone is.
A reporting infrastructure has already been built to direct public complaints toward private censorship tools. The consultation is running in parallel with an architecture that doesn’t need it.

The chilling effect starts well before any of this becomes law. Teenagers already know these restrictions are coming. Parents are already being encouraged to report their children’s online interactions to platforms. Publishers and platforms, watching the legal powers that now allow ministers to act without fresh legislation, are starting to think about what they’ll need to do before they’re told to. That’s how it works. The threat is often enough.

Replit CEO Sues Rep. Fine Over X Block
reclaimthenet.org

Amjad Masad, the CEO of Replit, posted a sarcastic reply to a congressman’s social media account. The congressman blocked him. Now Masad is suing, arguing that a sitting member of Congress used his official government account to silence a constituent for saying something he didn’t like.

The lawsuit, filed February 25 in the Middle District of Florida, names Rep. Randy Fine as the defendant. The claim is simple: Fine operates @RepFine as an official channel for government business, opened it to public participation, and then kicked Masad out of that public space because Masad criticized him. Courts have already ruled this kind of conduct unconstitutional. We obtained a copy of the lawsuit for you here.

What did Masad actually post? Fine had published a statement from his official account: “If they force us to choose, the choice between dogs and Muslims is not a difficult one.” Masad replied: “Are you talking about what’s for lunch?” Fine blocked him shortly after. That’s the speech that got censored: a sarcastic question.

The legal question the lawsuit turns on is whether @RepFine functions as a government account or a personal one. The complaint is thorough on this point. The account links to fine.house.gov. It lists Fine’s location as Washington, D.C., not Florida, where he lives. The account name is “Congressman Randy Fine.” Fine used it on February 19 to announce proposed federal legislation he called the “Protecting Puppies from Sharia Act.” The complaint also notes that the account is open to the entire public nationwide, not just residents of Florida’s 6th Congressional District.

When a government official opens a public forum for civic participation, the First Amendment prohibits them from throwing people out of that forum for the viewpoints they express. The Supreme Court addressed this directly in Lindke v. Freed (2024), which the complaint cites.
The test is whether the account’s operation is fairly attributed to the government. @RepFine clears that bar by essentially every measure.

Masad is not the only person Fine blocked for pushing back. The complaint alleges that Fine also blocked Aaron Baker, a Republican congressional candidate, after Baker criticized Fine’s official positions. A separate lawsuit Baker filed on February 20 makes the same First Amendment claim. The pattern the complaint describes is selective: dissenting voices removed, favorable commentary left alone. That’s precisely what the First Amendment prohibits.

The lawsuit asks the court to declare Fine’s block unconstitutional, order him to unblock Masad, and award attorneys’ fees.

Samsung Bakes Perplexity AI Into Galaxy S26 OS, Raising Privacy Concerns
reclaimthenet.org

Perplexity is now baked into Samsung’s Galaxy S26 at the operating system level, and it didn’t ask for your permission first. Perplexity announced the partnership by boasting that the company has “OS-level access to 100M+ Samsung S26s.” That framing, chosen by Perplexity itself, tells you something. We’re not talking about a downloadable app you can ignore.

Perplexity’s Sonar API connects directly to Notes, Calendar, Gallery, Clock, and Reminders. A dedicated “Hey Plex” wake word summons a standalone assistant. Bixby now routes its search queries through Perplexity’s cloud. Perplexity APIs will also power Samsung’s browser. It’s Perplexity all the way down. This is the first time Samsung has handed OS-level access to a company that isn’t Samsung or Google.

John Scott-Railton, a senior researcher at Citizen Lab, spotted the problem immediately. Perplexity’s announcement contained “zero mention” of privacy, security, or encryption. The integration, he explained, “breaks” Android’s “baseline sandbox model.” Android’s security architecture keeps third-party apps isolated from each other: TikTok can’t read your private notes because sandboxing prevents it. Perplexity now sits outside that model, “making a kernel-adjacent data bridge for Perplexity into your personal stuff.”

Scott-Railton also flagged that the risk of “prompt injection & other attacks against an agentic AI that has OS-level access to personal stuff is also real.” An AI agent with deep system access isn’t just a privacy problem. It’s an attack surface.

The architecture makes the data exposure worse than it looks on paper. The Galaxy S26 routes your queries between three separate companies: Samsung, Perplexity, and Google Gemini, depending on which wake word you used, what you asked, and what context the system inferred.
Sooraj Sathyanarayanan, a security and privacy researcher, mapped out what that actually means for your data: “three separate cloud pipelines, three separate retention policies, three separate training practices.” One question to your phone, three corporate data regimes absorbing the answer.

Samsung points to its “Process Data Only on Device” toggle as a privacy safeguard. Sathyanarayanan is direct about what that toggle actually does: “The moment Bixby or Plex needs the web, your local data context goes to the cloud.” Perplexity’s entire value proposition is real-time web retrieval. The toggle evaporates the moment the AI does anything useful. “The toggle is theater,” Sathyanarayanan wrote. What Samsung has actually shipped is, in his description, “a multi-party data harvesting pipeline with system-level permissions.”

None of this is arriving without a paper trail of warnings. Singapore-based mobile security firm Appknox audited Perplexity’s Android app in April 2025 and found ten significant vulnerabilities. The list included hardcoded API keys embedded directly in the app’s code, which any attacker who decompiles the app can extract and use to access Perplexity’s backend services. The app lacked SSL certificate pinning, leaving it open to interception attacks. It had no bytecode obfuscation, making its code trivially easy to reverse-engineer. It was also vulnerable to StrandHogg, a known Android flaw that lets attackers overlay fake interfaces on top of legitimate ones to steal credentials.

Appknox CEO Subho Halder described the findings plainly: “Our testing highlights critical vulnerabilities in Perplexity AI that expose users to a variety of risks, including data theft, reverse engineering, and exploitation.” He called on Perplexity to address the issues “swiftly.” That was ten months ago. Those vulnerabilities appear to remain unaddressed. Samsung has now elevated that same app to kernel-adjacent OS access on its flagship device.
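To see why hardcoded keys are such a low bar for an attacker, it helps to remember that decompiled app code is just text. The sketch below is a deliberately minimal illustration of the kind of pattern scan auditors run over decompiled output; the key formats and the sample line are hypothetical, and real scanners use far larger rule sets:

```python
import re

# Hypothetical patterns resembling common API-key formats.
# Real secret scanners ship hundreds of such rules.
KEY_PATTERNS = [
    re.compile(r'AIza[0-9A-Za-z_\-]{35}'),  # Google-style API key shape
    re.compile(r'sk-[0-9A-Za-z]{32,}'),     # "sk-"-prefixed secret key shape
    re.compile(r'(?i)api[_-]?key\s*[:=]\s*["\']([^"\']{16,})["\']'),
]

def scan_source(text: str) -> list[str]:
    """Return every substring of `text` that matches a known key pattern."""
    hits = []
    for pattern in KEY_PATTERNS:
        hits.extend(m.group(0) for m in pattern.finditer(text))
    return hits

# Simulated line from a decompiled app: a secret shipped in the binary
# is immediately visible to anyone who runs the scan.
decompiled = 'String API_KEY = "AIza' + 'A' * 35 + '";'
print(scan_source(decompiled))
```

Without obfuscation, nothing stands between a shipped APK and this kind of one-pass extraction, which is why the Appknox finding is treated as a serious exposure rather than a cosmetic one.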
The same company framing this as a privacy story, complete with a hardware-level privacy display on the S26 Ultra to block shoulder-surfers, has handed the keys to your notes and photo library to a company whose Android app has documented, unpatched security flaws. Samsung calls the overall experience “seamless.” The technical documentation says something different.

The Perplexity-Samsung situation is the clearest expression yet of a pattern accelerating across the entire tech industry: AI capabilities are being pushed deeper into devices, operating systems, and everyday software, and the privacy architecture was never designed to handle it.

Security Researchers Warn Age Verification Laws Are Building a Global Surveillance System
reclaimthenet.org

Three hundred and seventy-one security and privacy academics from 29 countries signed an open letter this week calling on governments to halt age verification rollouts until the privacy and security implications are properly understood. The letter arrives as lawmakers across the world race to ban children from social media, pushing platforms to implement age checks before anyone has settled on what those checks should actually look like.

The signatories are unambiguous. Deploying large-scale identity verification systems without a clear grasp of what they do to user security, autonomy, and freedom is, in their words, “dangerous and socially unacceptable.” Among those signing: Ronald Rivest, Turing Award winner, and Bart Preneel, president of the International Association for Cryptologic Research. These voices represent the core of the global security research community.

What governments are building, the letter argues, is surveillance infrastructure masquerading as child protection. A real age verification system, the academics explain, would require “government-issued IDs with strong cryptographic protection for every single interaction with the service.” That means every search query, every message to a friend, every news article read online would require identity confirmation. Nothing in offline life demands that. The parallel doesn’t exist.

Companies are already moving. OpenAI, Roblox, and Discord have all begun implementing age checks in anticipation of legal mandates.

The academics aren’t dismissing the underlying concern. “We share the concerns about the negative effects that exposure to harmful content online has on children,” the letter states. What they’re rejecting is the proposed solution, which turns every adult into a suspect who must prove their identity before accessing the open web. The technical problems compound the political ones.
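The linkage problem the letter describes — that checking a credential on every interaction reveals who is asking, every time — can be made concrete with a deliberately naive sketch. Everything here is an illustrative assumption: the names, the credential format, and the use of a shared HMAC key stand in for whatever a real issuer would deploy, not for any actual scheme:

```python
import hmac
import hashlib

# Stand-in for a government signing key (illustrative only; a real
# system would use asymmetric signatures, not a shared secret).
GOV_SECRET = b"demo-only-signing-key"

def issue_credential(national_id: str) -> tuple[str, str]:
    """Hypothetical issuer: binds an 'over-16' attestation to a stable ID."""
    tag = hmac.new(GOV_SECRET, national_id.encode(), hashlib.sha256).hexdigest()
    return national_id, tag

def verify_request(credential: tuple[str, str]) -> str:
    """A service checking the credential on EVERY request sees the stable ID."""
    national_id, tag = credential
    expected = hmac.new(GOV_SECRET, national_id.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("invalid credential")
    return national_id  # the linkage: each request reveals who is asking

cred = issue_credential("ID-12345")
# Two unrelated requests by the same person are trivially linkable:
assert verify_request(cred) == verify_request(cred) == "ID-12345"
```

Privacy-preserving designs (for example, zero-knowledge age proofs) aim to break exactly this per-request linkage; the academics' point is that nothing of the sort is settled or deployed at the scale these laws assume.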
Building and maintaining identity verification at a global scale is genuinely hard. Many service providers, faced with the friction and cost, would simply refuse to comply. And the platforms that can deploy these systems at scale are a handful of large corporations, meaning age verification becomes another mechanism for centralizing internet infrastructure in the hands of the few companies already dominant enough to afford it.

There’s another risk the academics name directly: governments banning VPNs. Age checks are trivially circumvented with a VPN, and the predictable policy response is to ban them outright. VPNs are currently one of the few tools available to people living under authoritarian regimes trying to protect their communications and identities. Banning VPNs to enforce age checks on teenagers would strip that protection from dissidents, journalists, and activists worldwide. The collateral damage would be severe and global.

The academics are asking for a pause until scientific consensus forms around “the benefits and harms that age-assurance technologies can bring, and on the technical feasibility.” That is not an unreasonable ask. What’s unreasonable is building mass identity verification systems first and studying the consequences after.

Judge Blocks Virginia’s One-Hour Social Media Limit for Minors as Unconstitutional
reclaimthenet.org

A federal judge has blocked Virginia’s attempt to limit minors to one hour of social media per day, ruling the law violates the First Amendment. The decision is a significant check on a growing wave of state legislation that treats time spent reading, watching, and communicating online as something the government can ration.

Judge Patricia Tolliver Giles issued the preliminary injunction Friday, finding that Virginia “does not have the legal authority to block minors’ access to constitutionally protected speech until their parents give their consent by overriding a government-imposed default limit.” We obtained a copy of the opinion for you here. The ruling halts enforcement of Senate Bill 854, which carried fines of $7,500 per violation and required platforms to use “commercially reasonable methods” to verify user ages.

The law’s problem wasn’t just the one-hour cap. It was how the cap worked. The state set the default, and parents could ask to change it. That structure puts the government, not families, in control of baseline access to speech. Parental consent here overrides a government restriction that shouldn’t exist in the first place.

Giles found the law over-inclusive in a way that illustrates exactly how blunt these restrictions are. “A minor would be barred from watching an online church service if it exceeded an hour on YouTube,” she wrote, “yet, that same minor is allowed to watch provider-selected religious programming exceeding an hour in length on a streaming platform.” The law doesn’t regulate harm. It regulates platforms, which means it catches protected speech indiscriminately.

NetChoice, the trade association whose members include Meta, YouTube, Snap, Reddit, and TikTok, sued to stop the law.
In November, NetChoice argued that “Virginia has with one broad stroke restricted access to valuable sources for speaking and listening, learning about current events and otherwise exploring the vast realms of human thought and knowledge.” The judge agreed the group had standing to pursue a permanent block and found it was likely to succeed on the merits.

Virginia’s attorney general is defending the law alongside 29 other states from both parties. A spokesperson said: “We look forward to continuing to enforce laws that empower parents to protect their children from the proven harms that can come through social media.” The new Democratic attorney general, Jay Jones, who took office in January, had announced he intended to fully enforce the law, which was signed by Republican Governor Glenn Youngkin.

The ruling won’t settle the broader fight. A similar law in Mississippi was upheld by a different federal judge, meaning the courts are moving in different directions. The Virginia decision is important because it applies First Amendment scrutiny to the mechanism, not just the stated goal. A law that restricts access to church services, news, and online communities, whatever harm it claims to address, is a restriction on speech. Courts have historically treated the government’s ability to limit protected speech as narrow, and Giles found Virginia hadn’t come close to justifying this one.