Reclaim The Net Feed

@reclaimthenetfeed

Replit CEO Sues Rep. Fine Over X Block
reclaimthenet.org

If you're tired of censorship and dystopian threats against civil liberties, subscribe to Reclaim The Net.

Amjad Masad, the CEO of Replit, posted a sarcastic reply to a congressman’s social media account. The congressman blocked him. Now Masad is suing, arguing that a sitting member of Congress used his official government account to silence a constituent for saying something he didn’t like.

The lawsuit, filed February 25 in the Middle District of Florida, names Rep. Randy Fine as the defendant. The claim is simple: Fine operates @RepFine as an official channel for government business, opened it to public participation, and then kicked Masad out of that public space because Masad criticized him. Courts have already ruled this kind of conduct unconstitutional. We obtained a copy of the lawsuit for you here.

What did Masad actually post? Fine had published a statement from his official account: “If they force us to choose, the choice between dogs and Muslims is not a difficult one.” Masad replied: “Are you talking about what’s for lunch?” Fine blocked him shortly after. That’s the speech that got censored: a sarcastic question.

The legal question the lawsuit turns on is whether @RepFine functions as a government account or a personal one. The complaint is thorough on this point. The account links to fine.house.gov. It lists Fine’s location as Washington, D.C., not Florida, where he lives. The account name is “Congressman Randy Fine.” Fine used it on February 19 to announce proposed federal legislation he called the “Protecting Puppies from Sharia Act.” The complaint also notes that the account is open to the entire public nationwide, not just residents of Florida’s 6th Congressional District.

When a government official opens a public forum for civic participation, the First Amendment prohibits them from throwing people out of that forum for the viewpoints they express. The Supreme Court addressed this directly in Lindke v. Freed (2024), which the complaint cites. The test is whether the account’s operation is fairly attributable to the government. @RepFine clears that bar by essentially every measure.

Masad is not the only person Fine blocked for pushing back. The complaint alleges that Fine also blocked Aaron Baker, a Republican congressional candidate, after Baker criticized Fine’s official positions. A separate lawsuit Baker filed on February 20 makes the same First Amendment claim. The pattern the complaint describes is selective: dissenting voices removed, favorable commentary left alone. That’s precisely what the First Amendment prohibits.

The lawsuit asks the court to declare Fine’s block unconstitutional, order him to unblock Masad, and award attorneys’ fees.

The post Replit CEO Sues Rep. Fine Over X Block appeared first on Reclaim The Net.

Samsung Bakes Perplexity AI Into Galaxy S26 OS, Raising Privacy Concerns
reclaimthenet.org

Perplexity is now baked into Samsung’s Galaxy S26 at the operating system level, and it didn’t ask for your permission first. Perplexity announced the partnership by boasting that the company has “OS-level access to 100M+ Samsung S26s.” That framing, chosen by Perplexity itself, tells you something. We’re not talking about a downloadable app you can ignore.

Perplexity’s Sonar API connects directly to Notes, Calendar, Gallery, Clock, and Reminders. A dedicated “Hey Plex” wake word summons a standalone assistant. Bixby now routes its search queries through Perplexity’s cloud. Perplexity APIs will also power Samsung’s browser. It’s Perplexity all the way down. This is the first time Samsung has handed OS-level access to a company that isn’t Samsung or Google.

John Scott-Railton, a senior researcher at Citizen Lab, spotted the problem immediately. Perplexity’s announcement contained “zero mention” of privacy, security, or encryption. The integration, he explained, “breaks” Android’s “baseline sandbox model.” Android’s security architecture keeps third-party apps isolated from each other; TikTok can’t read your private notes because sandboxing prevents it. Perplexity now sits outside that model, “making a kernel-adjacent data bridge for Perplexity into your personal stuff.” Scott-Railton also flagged that the risk of “prompt injection & other attacks against an agentic AI that has OS-level access to personal stuff is also real.” An AI agent with deep system access isn’t just a privacy problem. It’s an attack surface.

The architecture makes the data exposure worse than it looks on paper. The Galaxy S26 routes your queries between three separate companies: Samsung, Perplexity, and Google Gemini, depending on which wake word you used, what you asked, and what context the system inferred. Sooraj Sathyanarayanan, a security and privacy researcher, mapped out what that actually means for your data: “three separate cloud pipelines, three separate retention policies, three separate training practices.” One question to your phone, three corporate data regimes absorbing the answer.

Samsung points to its “Process Data Only on Device” toggle as a privacy safeguard. Sathyanarayanan is direct about what that toggle actually does: “The moment Bixby or Plex needs the web, your local data context goes to the cloud.” Perplexity’s entire value proposition is real-time web retrieval, so the toggle evaporates the moment the AI does anything useful. “The toggle is theater,” Sathyanarayanan wrote. What Samsung has actually shipped is, in his description, “a multi-party data harvesting pipeline with system-level permissions.”

None of this is arriving without a paper trail of warnings. Singapore-based mobile security firm Appknox audited Perplexity’s Android app in April 2025 and found ten significant vulnerabilities. The list included hardcoded API keys embedded directly in the app’s code, which any attacker who decompiles the app can extract and use to access Perplexity’s backend services. The app lacked SSL certificate pinning, leaving it open to interception attacks. It had no bytecode obfuscation, making its code trivially easy to reverse-engineer. It was also vulnerable to StrandHogg, a known Android flaw that lets attackers overlay fake interfaces on top of legitimate ones to steal credentials.

Appknox CEO Subho Halder described the findings plainly: “Our testing highlights critical vulnerabilities in Perplexity AI that expose users to a variety of risks, including data theft, reverse engineering, and exploitation.” He called on Perplexity to address the issues “swiftly.” That was ten months ago. Those vulnerabilities appear to remain unaddressed. Samsung has now elevated that same app to kernel-adjacent OS access on its flagship device.

Samsung, while framing this as a privacy story, complete with a hardware-level privacy display on the S26 Ultra to block shoulder-surfers, has handed the keys to your notes and photo library to a company whose Android app has documented, unpatched security flaws. Samsung calls the overall experience “seamless.” The technical documentation says something different.

The Perplexity-Samsung situation is the clearest expression yet of a pattern accelerating across the entire tech industry: AI capabilities are being pushed deeper into devices, operating systems, and everyday software, and the privacy architecture was never designed to handle it.
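The routing and toggle behavior described above can be sketched as a toy decision function. Everything here is hypothetical: the function name, wake words, and destination labels are illustrative, not Samsung's actual dispatch logic, which is not public. The sketch only captures the structural point that any query requiring web retrieval leaves the device regardless of the on-device toggle.

```python
# Toy sketch of the multi-assistant query routing described above.
# All names here are hypothetical; this is not Samsung's implementation.

def route_query(wake_word: str, needs_web: bool, on_device_only: bool) -> str:
    """Return which pipeline (device or a company's cloud) handles a query."""
    # The "Process Data Only on Device" toggle only holds while the
    # assistant needs no web retrieval at all.
    if on_device_only and not needs_web:
        return "on-device"
    # Otherwise the query, plus any local context, goes to a cloud pipeline.
    if wake_word == "Hey Plex":
        return "Perplexity cloud"
    if wake_word == "Hi Bixby":
        # Bixby search queries are routed through Perplexity's cloud.
        return "Perplexity cloud"
    if wake_word == "Hey Google":
        return "Google Gemini cloud"
    return "Samsung cloud"

# The toggle yields as soon as the assistant needs the web:
assert route_query("Hey Plex", needs_web=False, on_device_only=True) == "on-device"
assert route_query("Hey Plex", needs_web=True, on_device_only=True) == "Perplexity cloud"
```

Under these assumptions, the toggle constrains only queries the assistant can answer without retrieval, which is precisely Sathyanarayanan's objection to calling it a safeguard.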

Security Researchers Warn Age Verification Laws Are Building a Global Surveillance System
reclaimthenet.org

Three hundred and seventy-one security and privacy academics from 29 countries signed an open letter this week calling on governments to halt age verification rollouts until the privacy and security implications are properly understood. The letter arrives as lawmakers across the world race to ban children from social media, pushing platforms to implement age checks before anyone has settled on what those checks should actually look like.

The signatories are unambiguous. Deploying large-scale identity verification systems without a clear grasp of what they do to user security, autonomy, and freedom is, in their words, “dangerous and socially unacceptable.” Among those signing: Ronald Rivest, Turing Award winner, and Bart Preneel, president of the International Association for Cryptologic Research. These voices represent the core of the global security research community.

What governments are building, the letter argues, is surveillance infrastructure masquerading as child protection. A real age verification system, the academics explain, would require “government-issued IDs with strong cryptographic protection for every single interaction with the service.” That means every search query, every message to a friend, every news article read online would require identity confirmation. Nothing in offline life demands that. The parallel doesn’t exist.

Companies are already moving. OpenAI, Roblox, and Discord have all begun implementing age checks in anticipation of legal mandates.

The academics aren’t dismissing the underlying concern. “We share the concerns about the negative effects that exposure to harmful content online has on children,” the letter states. What they’re rejecting is the proposed solution, which turns every adult into a suspect who must prove their identity before accessing the open web.

The technical problems compound the political ones. Building and maintaining identity verification at a global scale is genuinely hard. Many service providers, faced with the friction and cost, would simply refuse to comply. And the platforms that can deploy these systems at scale are a handful of large corporations, meaning age verification becomes another mechanism for centralizing internet infrastructure in the hands of the few companies already dominant enough to afford it.

There’s another risk the academics name directly: governments banning VPNs. Age checks are trivially circumvented with a VPN, and the predictable policy response is to ban them outright. VPNs are currently one of the few tools available to people living under authoritarian regimes trying to protect their communications and identities. Banning VPNs to enforce age checks on teenagers would strip that protection from dissidents, journalists, and activists worldwide. The collateral damage would be severe and global.

The academics are asking for a pause until scientific consensus forms around “the benefits and harms that age-assurance technologies can bring, and on the technical feasibility.” What’s unreasonable is building mass identity verification systems first and studying the consequences after.

Judge Blocks Virginia’s One-Hour Social Media Limit for Minors as Unconstitutional
reclaimthenet.org

A federal judge has blocked Virginia’s attempt to limit minors to one hour of social media per day, ruling the law violates the First Amendment. The decision is a significant check on a growing wave of state legislation that treats time spent reading, watching, and communicating online as something the government can ration.

Judge Patricia Tolliver Giles issued the preliminary injunction Friday, finding that Virginia “does not have the legal authority to block minors’ access to constitutionally protected speech until their parents give their consent by overriding a government-imposed default limit.” We obtained a copy of the opinion for you here. The ruling halts enforcement of Senate Bill 854, which carried fines of $7,500 per violation and required platforms to use “commercially reasonable methods” to verify user ages.

The law’s problem wasn’t just the one-hour cap. It was how the cap worked. The state set the default, and parents could ask to change it. That structure puts the government, not families, in control of baseline access to speech. Parental consent here overrides a government restriction that shouldn’t exist in the first place.

Giles found the law over-inclusive in a way that illustrates exactly how blunt these restrictions are. “A minor would be barred from watching an online church service if it exceeded an hour on YouTube,” she wrote, “yet, that same minor is allowed to watch provider-selected religious programming exceeding an hour in length on a streaming platform.” The law doesn’t regulate harm. It regulates platforms, which means it catches protected speech indiscriminately.

NetChoice, the trade association whose members include Meta, YouTube, Snap, Reddit, and TikTok, sued to stop the law. In November, NetChoice argued that “Virginia has with one broad stroke restricted access to valuable sources for speaking and listening, learning about current events and otherwise exploring the vast realms of human thought and knowledge.” The judge agreed the group had standing to pursue a permanent block and found it was likely to succeed on the merits.

Virginia’s attorney general is defending the law alongside attorneys general from 29 other states, drawn from both parties. A spokesperson said: “We look forward to continuing to enforce laws that empower parents to protect their children from the proven harms that can come through social media.” The new Democratic attorney general, Jay Jones, who took office in January, had announced he intended to fully enforce the law signed by Republican Governor Glenn Youngkin.

The ruling won’t settle the broader fight. A similar law in Mississippi was upheld by a different federal judge, meaning the courts are moving in different directions. The Virginia decision is important because it applies First Amendment scrutiny to the mechanism, not just the stated goal. A law that restricts access to church services, news, and online communities is a restriction on speech, whatever harm it claims to address. Courts have historically treated the government’s ability to limit protected speech as narrow, and Giles found Virginia hadn’t come close to justifying this one.

California Law Forces Age-Tracking Into Every Operating System by 2027
reclaimthenet.org

California wants to build a surveillance layer into every device its residents touch. Assembly Bill 1043, signed by Governor Gavin Newsom and taking effect January 1, 2027, requires every operating system provider to collect age information from users at account setup and broadcast that data to app developers through a real-time API. Windows, macOS, Android, iOS, Linux distributions, Valve’s SteamOS: if it runs an operating system, it’s covered by this overreaching law.

The proposal is particularly absurd for open-source Linux operating systems. Linux exists specifically because some people want computing that doesn’t surveil them. That’s not incidental to why the platform exists; it’s foundational. Distributions like Arch, Debian, and Gentoo have no centralized account infrastructure by design. Users download ISOs from mirrors, modify source code freely, and run systems that report to nobody. AB 1043 treats the entire architecture of open-source computing as a compliance problem. But there’s no entity to mandate, no account system to modify, no API to build. The law’s definition of an “operating system provider” is deliberately broad, covering anyone who “develops, licenses, or controls the operating system software on a computer, mobile device, or any other general purpose computing device.”

What the law actually builds is a persistent age-signaling infrastructure woven into the startup process of your devices. OS providers must maintain what the bill calls a “reasonably consistent real-time application programming interface” that sorts users into four age brackets: under 13, 13 to under 16, 16 to under 18, and 18 or older. Every app developer who requests that signal when their app launches receives it automatically. Your age category follows you from device to device, app to app, without you actively consenting to each disclosure.

The bill passed unanimously, 76-0 in the Assembly and 38-0 in the Senate. Assemblymember Buffy Wicks, who authored the bill, said it “avoids constitutional concerns by focusing strictly on age assurance, not content moderation.” That claim doesn’t hold up to much scrutiny. Age assurance is content moderation’s prerequisite. The entire point of collecting age signals and broadcasting them to every app developer is to enable those developers to restrict what different age groups can see. The infrastructure AB 1043 builds has no other purpose. Sorting users into age brackets at the OS level and piping that data to app developers in real time is the mechanism by which content gets moderated; calling it something else doesn’t change what it does.

Surprisingly, AB 1043 isn’t the worst bill of its kind. It doesn’t require government ID uploads or facial scans; users simply declare their age at setup. That distinguishes it from laws in Texas and Utah requiring “commercially reasonable” verification like government-issued ID checks. The tradeoff is that California gets weaker age verification but broader infrastructure: a persistent age-signaling layer embedded in every device, broadcasting your age bracket to every developer who asks, for the life of that device.

Developers who receive the signal are “deemed to have actual knowledge” of their users’ age range under the law. That change in legal liability is the mechanism that makes the whole system work. Penalties run up to $2,500 per affected child for negligent violations and $7,500 for intentional ones, enforced by the California Attorney General. Developers now have strong financial incentives to request every age signal available, meaning the API will see constant use across the app ecosystem.
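The four statutory brackets are simple to model. A minimal sketch follows, with the caveat that AB 1043 specifies the brackets but not an implementation; the function name and returned strings here are hypothetical, not part of the law or any vendor API.

```python
# Hypothetical sketch of the four age brackets AB 1043 requires OS
# providers to expose via a real-time API. The function name and
# return values are illustrative only.

def age_bracket(age_years: int) -> str:
    """Map a self-declared age to the bracket an app developer would receive."""
    if age_years < 13:
        return "under_13"
    if age_years < 16:
        return "13_to_under_16"
    if age_years < 18:
        return "16_to_under_18"
    return "18_or_older"

# A developer receives only the bracket, never the exact age, but is
# "deemed to have actual knowledge" of that bracket under the law.
assert age_bracket(12) == "under_13"
assert age_bracket(16) == "16_to_under_18"
assert age_bracket(30) == "18_or_older"
```

The coarseness of the signal is what makes the liability shift bite: once a developer has requested the bracket, it can no longer claim ignorance of a user's age range.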