Reclaim The Net Feed


The Age Verification Con
reclaimthenet.org


If you're tired of censorship and dystopian threats against civil liberties, subscribe to Reclaim The Net.

Politicians on both sides of the Atlantic are competing to look tough on Silicon Valley. They hold hearings, write bills, and pose for photographs with parents who say their kids’ lives were ruined by social media algorithms they somehow couldn’t pull them away from. The cause is protecting children from social media, and it supposedly polls so well that it has achieved something almost unheard of in modern politics: genuine bipartisan consensus. Republicans and Democrats in Washington. Labour and Conservatives in Westminster. The Australian parliament voted the whole thing through with barely a whisper of dissent.

There is just one problem with the narrative. The tech giants these politicians claim to be fighting are spending record sums to help them do it. And the tool they have all converged on, age verification, is not really about checking whether someone is 15 or 16. It is the architecture for a verified internet, one where anonymous access is replaced by identity checkpoints, and where using a social media account, downloading an app, or browsing a website requires you to show your papers first. The campaign is presented as protecting children. The infrastructure being built will apply to everyone.

The Political Performance

Keir Starmer set the tone in February when he announced plans to push through age restrictions for social media far faster than the eight years it took to grind the Online Safety Act through Parliament. “Technology is moving really fast, and the law has got to keep up,” the British Prime Minister said. He followed up with a direct challenge to the platforms: “And if that means a fight with the big social media companies, then bring it on.” He went further this week, casting the issue as a moral confrontation: “Some of this will require a fight. If we’re going to do more to protect children, we’re going to have to fight some of the platforms that are putting the material up there because they’re putting this addictive stuff up there for a reason. They want more children to spend more time online and we’ve got to fight them and be clear whose side we’re on here.”

The rhetoric plays well to some. Starmer is a father of two teenagers, a fact he mentions regularly, and he has positioned himself as the parent-in-chief who understands what families are dealing with. Technology Secretary Liz Kendall has talked about wanting to announce a social media ban for under-16s by summer, and she has floated the threat of fines or outright blocking for platforms that break the law in the UK. The posture is this: government versus Big Tech, parents versus algorithms, democracy versus corporate greed. Keep that framing in mind, because the money tells a different story.

Australia got there first. Its Social Media Minimum Age Act, which took effect on December 10, 2025, bans under-16s from holding accounts on platforms including Facebook, Instagram, TikTok, Snapchat, X, YouTube, and Reddit. By mid-January 2026, more than 4.7 million accounts had been deactivated, removed, or restricted. Platforms that fail to take “reasonable steps” to keep minors off face fines of up to 49.5 million Australian dollars.

The eSafety Commissioner, Julie Inman Grant, has become the international face of this movement. Recognized by Time Magazine’s Global Health 100 for 2026, she has described the age restrictions as part of a “holistic approach to protecting children online.” She has registered a series of industry codes expanding platform obligations and is overseeing the rollout of age assurance technologies across the country. She has also attracted attention from US Congressman Jim Jordan, who summoned her to testify before the House Judiciary Committee on allegations of global censorship demands.
Inman Grant, who spent 17 years at Microsoft and later worked at Twitter, has pushed back, calling it “a very unprecedented request for another legislative body to try and compel a senior bureaucrat from another government doing the job that the government set out for her to do.”

France followed quickly. In January 2026, the National Assembly voted 130-21 to ban social media for children under 15, with enforcement planned for the start of the school year in September 2026. President Emmanuel Macron fast-tracked the legislation and framed it in characteristically grand terms: “Because our children’s brains are not for sale — neither to American platforms nor to Chinese networks. Because their dreams must not be dictated by algorithms.”

France had tried this once before, in 2023, with a law establishing a “digital age of consent” at 15. That version never took effect because it clashed with EU regulations. The new text is designed to align with the Digital Services Act, and Macron has signaled he wants harmonized rules across the entire bloc.

That push is already underway. In November 2025, the European Parliament voted to recommend an EU-wide minimum age of 16 for social media access. The European Commission has built what it calls an age verification “mini wallet,” a prototype app aligned with the European Digital Identity Wallets that every EU member state is expected to roll out by the end of 2026. Denmark, France, Greece, Italy, and Spain are piloting the system. In June 2025, 21 ministers from 13 member states signed a joint declaration calling the existing framework “insufficient” and demanding mandatory age verification on all social networks.
EC President Ursula von der Leyen set the tone at a September 2025 summit, declaring that “parents, not algorithms, should be raising children.” The EU’s age verification blueprint is built on the same technical specifications as its forthcoming digital identity wallets, ensuring that what begins as a child safety tool becomes part of a permanent identity infrastructure across the continent.

From Canberra to Brussels, the pattern is identical. Politicians frame themselves as taking on powerful tech companies. They use the language of confrontation, of fighting, of whose side we’re on. What none of them mentions is that the world’s largest social media company is lobbying harder and spending more money than anyone to make sure these exact laws get passed.

The American version of this push has multiple fronts. Senator Ted Cruz, the Republican chair of the Senate Commerce Committee, teamed up with Democrat Brian Schatz to introduce the Kids Off Social Media Act, which would set a minimum age of 13 for social media accounts and ban platforms from serving algorithmically targeted content to anyone under 17. “Kids need time to be kids to experience the real world, not to get lost in the virtual one,” Cruz said at a committee markup. The bill passed the Commerce Committee with overwhelming bipartisan support.

It is far from the only proposal. Senators Marsha Blackburn and Richard Blumenthal reintroduced the Kids Online Safety Act (KOSA) with the backing of Senate Majority Leader John Thune and Minority Leader Chuck Schumer. The bill would create a “duty of care” requiring platforms to proactively prevent a list of harms, including eating disorders, depression, anxiety, and “patterns of compulsive use.” “Big Tech platforms have shown time and time again they will always prioritize their bottom line over the safety of our children,” Blackburn said.
Senator Chris Murphy added: “As a parent, I’ve seen firsthand how these platforms use intentionally addictive algorithms to spoon-feed young people horrifying content glorifying everything from suicide to eating disorders.” The cosponsor list is bipartisan: Katie Britt, John Fetterman, Peter Welch, Ted Budd, Angus King, and Mark Warner. Senator Schatz captured the mood: “When you’ve got Ted Cruz and myself in agreement on something, you’ve pretty much captured the ideological spectrum of the whole Congress.”

Everyone agrees the children must be protected. The question nobody seems to want to answer is what the protection actually looks like, who benefits from the particular form it’s taking when the state gets involved, and why the companies supposedly being punished are spending billions to make it happen.

What Age Verification Actually Means

Every one of these proposals requires the same thing: knowing how old the person behind the screen is. That sounds simple enough. But the mechanism for knowing someone’s age online is the mechanism for knowing their identity. And once you build the system that verifies identity, you have built the system that can track, restrict, and control what people access. Age verification is identity verification, repackaged with a child safety label.

The practical consequence of every proposal now moving through legislatures in Washington, Westminster, Canberra, Paris, and Brussels is the same: the end of anonymous access to the internet. You will need to prove who you are before you post, before you browse, before you download an app. The question of whether a 14-year-old can use Instagram becomes the mechanism by which every adult is required to show a government-issued ID to use their own phone.

Australia’s eSafety Commissioner has said platforms can no longer rely on users simply entering a birthdate at sign-up. They are expected to stop people from faking their age using false documents, AI tools, deepfakes, and even VPNs.
The methods under consideration include facial age estimation, where AI scans a selfie to guess how old someone looks, credit card verification, and government-issued ID checks. The legislation technically prohibits platforms from requiring government ID as the only option, but the alternatives all involve some form of biometric or financial identity data.

The UK consultation, launched in March 2026 under the title “Growing up in the online world,” is considering an under-16 ban and measures to stop children using VPNs to circumvent restrictions. For context, the countries whose governments currently restrict VPN usage include China, Russia, Iran, North Korea, and Turkey.

KOSA, the American bill, would direct federal agencies to develop age verification at the device or operating system level. That is the endpoint every version of this legislation points toward: your phone verifying your identity before you can use it.

Apple Goes Further Than the Law Requires

This week, Apple demonstrated what that future looks like. With the release of iOS 26.4 on March 24, 2026, UK iPhone users were confronted with a mandatory prompt: “Confirm You Are 18+.” The options are to scan a credit card or a government-issued ID. Debit cards are not accepted. Passports are reportedly failing for many users. Those who cannot or will not verify their age get locked into a restricted version of their own device, with content filters turned on across Safari and third-party browsers, communication safety features activated in Messages and FaceTime, and access to age-restricted apps blocked.

Here is the detail to note: the Online Safety Act does not require Apple to do this. The law applies to websites and platforms, not to operating systems or app stores. Apple chose to go beyond the legislation. Ofcom, the UK regulator, welcomed the move, calling it “a real win for children and families.” Apple has been “working closely” with the regulator, Ofcom said.

Users began reporting problems immediately.
People in their 50s, 60s, 70s, and 80s with decades-old Apple accounts found themselves locked into child-restricted modes because their credit card scan failed or their driving license would not register. A 57-year-old user on Apple’s support forums wrote that they do not have a credit card and the scanner will not read a driving license: “Guess I’ll be forever under 18!” Some users have described it as “regulatory ransomware.”

Apple says the process is handled on-device and that scanned information is not stored, but the company has not documented exactly which signals trigger the verification flow. It has built a five-tier age rating system for the App Store (4+, 9+, 13+, 16+, 18+) and created a Declared Age Range API that lets developers request a user’s age bracket without receiving a birthdate.

What Apple has built is a prototype for the verified internet. Once the device knows who you are, every app and every website you access through that device can be filtered according to that identity. The infrastructure for it is now installed on every iPhone updated to iOS 26.4 in the UK.

Proton, the encrypted email and VPN provider, published an analysis this week noting that a system designed to confirm age can be adapted to confirm any attribute tied to identity. “When identity becomes part of the access layer,” Proton wrote, “restrictions can be applied with greater consistency and less reliance on individual platforms.” The conditions travel with the system.

So Apple is volunteering to build identity infrastructure that the law does not require, and the regulator is cheering. That alone should complicate the story politicians are telling about brave governments standing up to reluctant tech companies. But it gets worse.
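The data-minimization idea behind an age-bracket API can be sketched in a few lines. This is a hypothetical illustration, not Apple’s actual Declared Age Range API: the names `AgeBracket`, `years_old`, and `declared_age_range` are invented here. The point is only the shape of the trade: the app receives a coarse bracket, while the birthdate itself stays with the operating system.

```python
from datetime import date
from enum import Enum

class AgeBracket(Enum):
    # Brackets loosely mirror the App Store's five-tier ratings (4+, 9+, 13+, 16+, 18+).
    UNDER_13 = "under_13"
    AGE_13_15 = "13_15"
    AGE_16_17 = "16_17"
    AGE_18_PLUS = "18_plus"

def years_old(birthdate: date, today: date) -> int:
    """Whole years elapsed since birthdate."""
    had_birthday = (today.month, today.day) >= (birthdate.month, birthdate.day)
    return today.year - birthdate.year - (0 if had_birthday else 1)

def declared_age_range(birthdate: date, today: date) -> AgeBracket:
    """What the OS would hand an app: a coarse bracket, never the birthdate."""
    age = years_old(birthdate, today)
    if age < 13:
        return AgeBracket.UNDER_13
    if age < 16:
        return AgeBracket.AGE_13_15
    if age < 18:
        return AgeBracket.AGE_16_17
    return AgeBracket.AGE_18_PLUS

# The app sees only the bracket; the underlying birthdate stays with the OS.
print(declared_age_range(date(2010, 6, 1), date(2026, 3, 24)).value)  # prints "13_15"
```

Even this privacy-friendlier design still depends on the OS holding a verified identity in the first place, which is the larger point of the article: the gatekeeping moves to the device, it does not disappear.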
Zuckerberg Hands Over the Blueprint

Meta CEO Mark Zuckerberg spent more than five hours on the witness stand in Los Angeles Superior Court in early March, testifying in a child safety lawsuit in which a jury eventually found Meta and YouTube negligent in the design of their platforms and awarded $3 million in damages. Under cross-examination, plaintiffs’ lawyers showed internal emails including a 2015 estimate that 4 million users under 13 were on Instagram, roughly 30 percent of all American children aged 10 to 12. An old email from former public policy head Nick Clegg (a former UK Deputy Prime Minister, keep in mind) was read into the record: “The fact that we say we don’t allow under-13s on our platform, yet have no way of enforcing it, is just indefensible.”

Zuckerberg’s response, repeated multiple times from the witness stand, was to call for age verification at the operating system level, handled by Apple and Google rather than by individual apps. He told jurors that operating system providers “were better positioned to implement age verification tools, since they control the software that runs most smartphones.” He added: “Doing it at the level of the phone is just a lot cleaner than having every single app out there have to do this separately.”

Think about what the CEO of the world’s largest social media company proposed while under oath. Not that Instagram would verify the ages of its users. That Apple and Google should verify the identity of every smartphone user, for every app, at the operating system level. Every app installed on the device, every website accessed through the phone’s browser, every message sent through any app on the phone.

The proposal solves Zuckerberg’s immediate legal problem. If Apple and Google own age enforcement, Meta is no longer responsible for enforcement failures or the costs of implementation.
It also solves something much bigger for him.

The Business Case for Killing Anonymity

To understand why Meta is not resisting age verification but actively pushing for it, you have to understand what identity verification does for a social media company’s bottom line.

Social media platforms have a bot problem, and it is getting worse. A 2024 report from data security firm Imperva found that over half of all internet traffic was non-human, with 37 percent consisting of malicious bots, a five percent increase from the previous year. Cybersecurity reports estimate that 8 to 12 percent of all social media profiles across major platforms are fake, automated, or impersonation accounts. On networks with billions of users, that translates to hundreds of millions of questionable profiles operating at any given time. AI has made it dramatically worse: bots in 2026 can hold conversations, generate realistic replies, and mimic human behavior well enough to fool most users. The FTC has reported to Congress on the use of social media bots in online advertising, highlighting how fake engagement may constitute a deceptive practice.

This is a problem for advertisers. They are paying to reach real people, and they are getting bots. Advertisers have been pressing platforms to guarantee that their ads are reaching verified human beings, not automated accounts inflating engagement numbers.

Identity verification at the device or platform level would solve this problem overnight. If every user has to prove they are a real person with a real ID, the bot problem disappears, and the advertising inventory becomes dramatically more valuable. Every impression is suddenly verifiable. Every click comes from a confirmed identity. For a company that made $201 billion in revenue in 2025, almost entirely from advertising, the commercial incentive to support mandatory identity verification is enormous.

There is another commercial benefit that nobody in these legislative hearings is talking about.
A verified, identity-linked internet is an internet where controversial speech is easier to suppress. Advertisers have spent years pressuring platforms to keep their ads away from content that might generate negative brand associations. “Brand safety” is the industry term. It means ensuring that an advertisement for a family car does not appear next to a heated political argument, a conspiracy theory, or a piece of journalism that names a powerful company. Platforms that can demonstrate a sanitized, identity-verified user base with robust content controls can charge premium rates for advertising. A less anonymous internet is a more commercially predictable internet, and that is worth a fortune.

None of this is new for Meta. Facebook first launched with a real name policy and enforced it aggressively for years. The policy required users to register their “authentic identity,” and the company suspended accounts that used pseudonyms, stage names, or anything it deemed not a real name. In a 2015 Q&A, Zuckerberg defended the policy by asserting that it “helps keep people safe” because people are “much less likely to try to act abusively towards other members of the community if they have to stand behind everything they say.”

The EFF, the ACLU, and other advocacy groups pushed back hard, documenting how the policy harmed domestic abuse survivors, political dissidents, and journalists working under pseudonyms for their own safety. The backlash eventually forced Facebook to make modest concessions. Meanwhile, Meta acquired Instagram in 2012, a platform that allowed pseudonymous handles and had no real name requirement, absorbing a user base that had grown precisely because it offered the flexibility Facebook did not. The real name policy remained a point of friction on Facebook itself, and Meta gradually softened its enforcement as the political cost of maintaining it grew.
What age verification legislation offers Meta is something the company could not achieve on its own: the real name policy it always wanted, imposed by law, applied universally, and with the compliance cost shifted to someone else. Meta does not have to be the bad guy demanding your ID. The government does it. Apple and Google do it. Meta just receives the verified signal and reaps the commercial benefits. Zuckerberg tried to build a verified-identity platform through corporate policy and faced a public revolt. Now governments are building it for him and calling it child safety.

Follow the Money

Remember the story: politicians are fighting Big Tech. Starmer says, “Bring it on.” Cruz and Schatz say they’re holding companies accountable. Macron says children’s brains are not for sale. The framing depends on the idea that these laws are being imposed on resistant corporations.

An open-source investigation published in March by the TBOTE Project traced the money behind age verification lobbying and found the opposite. Meta is not fighting these laws. Meta is the largest corporate force pushing for them.

The investigation, which used IRS filings, Senate lobbying disclosures, state lobbying registrations, and campaign finance databases, documented that Meta spent a record $26.3 million on federal lobbying in 2025, more than Lockheed Martin or Boeing. The company deployed 86 lobbyists across 45 states; 85 percent of those lobbyists had prior government service. (Source: tboteproject.com)

The centerpiece of Meta’s lobbying is the App Store Accountability Act, which would require Apple and Google to verify user ages before anyone can download any app from their stores. Meta’s own Senate filings list the bill as a lobbied priority.
The filing narrative includes “protecting children, bullying prevention and online safety; youth safety and federal parental approval; youth restrictions on social media.”

The catch: the App Store Accountability Act imposes requirements on app stores and operating systems. It imposes no new requirements on social media platforms. If it becomes law, Apple and Google absorb the compliance cost, the infrastructure burden, and the regulatory liability. Meta’s apps face zero new mandates.

The investigation also uncovered that Meta covertly funded a group called the Digital Childhood Alliance (DCA) to advocate for the legislation. Bloomberg exposed the funding relationship in July 2025. The DCA’s executive director, Casey Stefanski, admitted under oath at a Louisiana Senate committee hearing that the group receives tech company funding, but refused to name donors. The DCA is registered as a 501(c)(4) in Delaware with a minimum-disclosure IRS filing showing gross receipts under $25,000 for its first tax year, despite coordinating legislative campaigns across more than 20 states. Its domain was registered on December 18, 2024. The website was live and fully operational the next day, 77 days before Utah’s SB-142 (the first App Store Accountability Act to become law) was signed. Almost every post on the DCA website targets Apple and Google. Meta is never criticized.

Meta is not the only social media company backing this approach. Snap, X, and Pinterest have all confirmed support for App Store Accountability Act bills. Every confirmed supporter is a social media platform that benefits from moving age verification to the app store layer. Every confirmed opponent operates an app store that would bear the compliance burden.

In Louisiana, a Meta lobbyist brought the legislative language for HB-570 directly to the bill’s sponsor, who confirmed this publicly. The bill passed 99-0. In California, Meta spent more than $1 million on direct lobbying in the first three quarters of 2025 alone.
The company committed over $70 million to four state-level super PACs, including one in Texas whose stated policy priority uses language that mirrors the App Store Accountability Act exactly. The Heritage Foundation, which funds three of the six named DCA coalition organizations, staffs the pipeline from Capitol Hill to state legislatures and has merged leadership with another coalition member, Moms for Liberty, at the executive level. A former staffer for Senator Mike Lee, who introduced the federal version of the Act, moved to Heritage and then endorsed the DCA on its launch day. Meta hired a Heritage fellow in May 2024.

The TBOTE investigation found the lobbying operation extends internationally. Meta spends €10 million annually on EU lobbying, the largest single-company spend, and retains 18 or more consulting firms across jurisdictions, with at least three operating in both Brussels and Washington.

The Real Alignment

So here is the picture, once you strip away the posturing.

Keir Starmer says he will fight the big social media companies. Meta spent a record $26.3 million on lobbying in 2025 to pass the very type of legislation Starmer is championing, and it covertly funded an advocacy group to do the grassroots work.

Ted Cruz says he is holding Big Tech accountable. Meta’s lobbyists are in 45 states pushing bills that exempt social media platforms from the age verification requirements they impose on everyone else.

Macron says children’s brains are not for sale. Meta spends €10 million a year on EU lobbying, the largest single-company spend on the continent, working the same legislative channels Macron’s government is using.

The politicians get a cause that supposedly polls above 90 percent approval. The tech companies get to move the cost and liability of age verification onto their competitors while exempting their own platforms. Everyone gets to say they’re protecting children.
The only thing anyone actually has to give up is the ability to use the internet without showing ID. It’s the political equivalent of a boxing match where both fighters split the purse, only the prize is a national identity database sold as child safety.

Consider what these bills collectively create. Australia’s law is already in force, with eSafety overseeing compliance across ten platforms and pushing industry codes that extend to internet service providers, hosting services, and search engines. The UK is launching trials with 300 teenagers and running a consultation that closes in May, with legislation expected to follow quickly. Apple has pre-emptively installed device-level identity verification on every UK iPhone. Starmer wants powers to restrict VPN use by children.

California’s Digital Age Assurance Act will require users to enter their date of birth when setting up a new phone or computer, effective in 2027. Colorado is advancing a bill to require operating systems to collect and store user ages at device setup and expose that data to third-party apps via API. The Kids Online Safety Act carries broad definitions of content that is “harmful” to minors, a category the bill leaves subject to government influence, and directs agencies to develop verification at the device or operating system level. New York’s SAFE for Kids Act permits facial analysis as an alternative to government ID submission, meaning biometric data is collected just to scroll a social media feed.

These identity databases will be breached. A Discord-related breach last year exposed approximately 70,000 government-issued IDs submitted through a third-party system. Every ID check creates a future breach waiting to happen. Over 400 computer scientists signed an open letter arguing that these laws build surveillance architecture without meaningfully protecting children.
The ACLU, the Center for Democracy and Technology, Fight for the Future, and the EFF wrote jointly to Congress that the legislation “would actively undermine child safety, harm marginalized youth, erode privacy, and impose unconstitutional restrictions on young people’s ability to engage online.”

GrapheneOS, the privacy-focused Android fork, announced it will refuse to implement age data collection entirely. “GrapheneOS will remain usable by anyone around the world without requiring personal information, identification, or an account,” the project stated. “If GrapheneOS devices can’t be sold in a region due to their regulations, so be it.” That is what it costs to refuse.

Who Loses

Anonymous and pseudonymous speech online protects real people. Whistleblowers. Abuse survivors. Political dissidents. People exploring medical questions or ideas they are not ready to attach their legal names to. Journalists protecting sources. The stated goal of every age verification law is to protect 9-year-olds from Instagram. The mechanism is a national digital identity system baked into the operating systems that run the overwhelming majority of the world’s smartphones.

The chilling effect is already visible. In the UK, image-hosting site Imgur blocked access to all UK users last year after tighter age verification rules, showing blank images instead. Some websites blocked UK users entirely rather than verify their age. The choice for smaller platforms, independent developers, and open-source projects is even starker: build verification systems they cannot afford, geoblock entire countries, or shut down, handing their Big Tech rivals still more power.

In Louisiana, 12 Meta lobbyists worked a single bill that passed 99-0. In the UK, Apple built verification infrastructure that the law does not even require, and the regulator applauded.
In Los Angeles, the CEO of the company whose platform had 4 million underage users told a jury that the solution was to hand identity gatekeeping to two private companies already facing antitrust scrutiny.

The politicians say they are fighting Big Tech. The lobbying disclosures say Big Tech is paying for the fight. The bills say everyone needs to show ID. And the age verification infrastructure, once installed, does not care whether you are nine or ninety. It just needs to know who you are.

The post The Age Verification Con appeared first on Reclaim The Net.

European Parliament Rejects Mass “Chat Control” Surveillance by Single Vote
reclaimthenet.org


The European Parliament killed Chat Control on Thursday, rejecting the automated scanning of private photos and text messages by a single vote. One vote separated Europeans from continued mass surveillance of their private communications by American tech companies. After that razor-thin margin knocked out the most invasive provisions, the remaining proposal failed to reach a majority at all.

The vote came after forces in Parliament tried to rerun a decision already made on March 13, when lawmakers rejected blanket scanning. The push for a re-vote was an attempt by the EPP to rewrite the outcome after negotiations failed, a maneuver that MEP Markéta Gregorová called “spitting in the face of their colleagues and citizens.”

Starting April 4, the EU derogation that allowed Meta, Google, and Microsoft to voluntarily scan every private message sent by European citizens expires permanently. The legal basis for warrantless bulk scanning of people’s data disappears.

What the surveillance regime actually did

The expiring regulation, EU interim regulation 2021/1232, gave US corporations permission to read your messages at scale. Three types of scanning were authorized. Hash scanning matched images against databases of known illegal material and generated over 90% of reports. Automated AI assessment targeted images and videos that the algorithms hadn’t seen before. And text analysis trawled through private chat conversations looking for suspicious language. All of it happened without a warrant, without individual suspicion, and without meaningful European oversight.

The AI-based scanning of unknown images and texts was, by every technical measure, broken. A newly published study from researchers at KU Leuven and Ghent University delivered the technical confirmation.
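The fragility of hash-based matching is easy to demonstrate with a toy example. The sketch below implements a simple “average hash,” purely as an illustration of how perceptual hashing works in general; it is not PhotoDNA, whose actual algorithm is proprietary and different. Matchers compare hashes within a small Hamming distance, so an edit that flips many bits, such as adding a plain border, pushes an image outside the matching threshold even though a human sees identical content.

```python
# Toy perceptual "average hash" (aHash): downsample to 8x8, then record
# for each pixel whether it is above the image's mean brightness.
# Illustrative stand-in only -- NOT PhotoDNA.

def resize_8x8(img):
    """Nearest-neighbor downsample of a grayscale grid to 8x8."""
    h, w = len(img), len(img[0])
    return [[img[i * h // 8][j * w // 8] for j in range(8)] for i in range(8)]

def average_hash(img):
    """64-bit hash: each bit says whether a pixel is above the mean."""
    small = resize_8x8(img)
    flat = [v for row in small for v in row]
    mean = sum(flat) / 64
    return [1 if v > mean else 0 for v in flat]

def hamming(h1, h2):
    """Number of differing bits between two hashes."""
    return sum(a != b for a, b in zip(h1, h2))

# A 16x16 brightness gradient standing in for a known image in a database.
original = [[16 * i + j for j in range(16)] for i in range(16)]

# The same image with a 4-pixel black border added -- a trivial edit.
size = 16 + 8
bordered = [[0] * size for _ in range(size)]
for i in range(16):
    for j in range(16):
        bordered[i + 4][j + 4] = original[i][j]

dist = hamming(average_hash(original), average_hash(bordered))
print(f"Hamming distance after adding a border: {dist}/64")
# A matcher tolerating only a few differing bits no longer recognizes the
# image, even though the visual content is unchanged.
```

Real perceptual hashes are more robust than this toy, but the KU Leuven finding is that the same class of attack still works against PhotoDNA itself, in both directions: evading matches on illegal material and forging matches on innocent images.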
They reverse-engineered Microsoft’s PhotoDNA, the standard algorithm used by tech companies for Chat Control, and found fundamental weaknesses. Their verdict was damning. The software is “unreliable.” Criminals can make illegal images invisible to the scanner with minimal changes, like adding a simple border, while harmless images can be manipulated to falsely flag innocent users to the police. The most computationally demanding attacks take under ten minutes on a personal laptop.

The numbers that buried Chat Control

The EU Commission’s own 2025 evaluation report reads like a catalogue of failure. Germany’s Federal Criminal Police Office reported that 48% of all flagged chats were criminally irrelevant. Nearly half of everything this surveillance system surfaced was junk: private conversations between innocent people exposed to law enforcement for nothing. That flood of false reports consumed investigative resources that could have gone toward actual cases.

Around 40% of investigations triggered in Germany targeted teenagers sharing images consensually. The system built to protect children was criminalizing them.

And the whole apparatus was already collapsing under its own logic. As messaging services adopted end-to-end encryption, the number of reports dropped 50% from 2022 levels. The Commission’s report found no measurable link between mass scanning and actual convictions. Years of warrantless surveillance of hundreds of millions of people, and the EU’s own data shows it didn’t work.

In a statement to Reclaim The Net, Patrick Breyer, the former Pirate Party MEP who has fought Chat Control for years, called today’s result historic:

“This historic day brings tears of joy! The EU Parliament has buried Chat Control – a massive, hard-fought victory for the unprecedented resistance of civil society and citizens!
The fact that a single vote tipped the scales against the extremely error-prone text and image search shows: Every single vote in Parliament and every call from concerned citizens counted!

“We have stopped a broken and illegal system. Once our investigators are no longer drowning in a flood of false and long-known suspicion reports from the US, resources will finally be freed up to hunt down organized abuse rings in a targeted and covert manner. Trying to protect children with mass surveillance is like desperately trying to mop up the floor while leaving the faucet running. We must finally turn off the tap! This means genuine child protection through a paradigm shift: Providers must technically prevent cybergrooming from the outset through secure app design. Illegal material on the internet must be proactively tracked down and deleted directly at the source. That is what truly protects children.

“But beware, we can only celebrate briefly today: They will try again. The negotiations for a permanent Chat Control regulation are continuing under high pressure, and soon the planned age verification for messengers threatens to end anonymous communication on the internet. The fight for digital freedom must go on!”

The next threat is already moving

Today’s win is real but narrow. Trilogue negotiations on a permanent child protection regulation, the one digital rights groups call Chat Control 2.0, continue under severe time pressure. EU governments still want “voluntary” mass scanning, a label that functions as political cover for the same bulk surveillance the Parliament just rejected.

And the next attack on digital privacy is already on the agenda. Lawmakers will soon negotiate whether messenger services and app stores must implement mandatory age verification. That means government ID uploads or facial scans before you can send a message.
Anonymous communication, the kind that protects whistleblowers, journalists, dissidents, and anyone who simply doesn’t want to hand their identity to a tech company, would effectively cease to exist across the EU.

The Parliament won this fight by a single vote. The surveillance apparatus that governments and the Commission have spent years building doesn’t dismantle itself because of one close call. It comes back, rebranded, repackaged, pushed through quieter procedural channels.

The post European Parliament Rejects Mass “Chat Control” Surveillance by Single Vote appeared first on Reclaim The Net.

When Government Hacks Go Wandering

This post is for paid supporters of Reclaim The Net.

Supreme Court Blocks Recording Industry’s Push to Cut Millions Off the Internet Over Piracy Claims

The Supreme Court has unanimously refused to let the recording industry turn internet providers into copyright enforcers with the power to cut millions of people off from modern life. The ruling, handed down Wednesday in Cox Communications v. Sony Music Entertainment, kills a legal theory that would have given ISPs one rational choice when they received a copyright complaint: sever the connection first, figure out the truth later. We obtained a copy of the order for you here.

All nine justices thankfully agreed that Cox Communications bears no liability for the piracy of its subscribers. Justice Clarence Thomas, writing for the majority, stated: “Under our precedents, a company is not liable as a copyright infringer for merely providing a service to the general public with knowledge that it will be used by some to infringe copyrights.”

The real significance of the ruling isn’t what it means for Cox’s balance sheet. It’s what the ruling prevents from happening to everyone who depends on an internet connection to live, work, and communicate, which in 2026 is functionally all of us.

Consider what the recording industry was actually proposing. Sony Music Entertainment and more than 50 other labels, representing artists like Sabrina Carpenter, Givēon, and Doechii, wanted courts to hold ISPs financially liable for not disconnecting users accused of piracy. Not convicted. Accused.

The accusations themselves came from an automated system paid for by the Recording Industry Association of America, which hires an anti-piracy company to blast notices at internet providers whenever its software detects possible infringement. Nobody reviews these notices with any care. Nobody checks whether the flagged activity was actually illegal, whether it fell under fair use, or whether the person named on the account was even the one responsible.
Under the legal standard the labels wanted, an ISP that received enough of these automated complaints and didn’t disconnect the account could face catastrophic financial liability. A Virginia jury bought that theory in 2019 and hit Cox with a verdict of over $1 billion.

The incentive structure that kind of liability creates is terrifying if you think it through for more than a few seconds. When an ISP faces billion-dollar exposure for not cutting people off, the only financially rational response is to start cutting people off aggressively.

Cox, the largest private broadband provider in the country with more than six million homes and businesses on its network, tried to explain the human consequences. Its lawyers argued that disconnecting service “after receiving automated notices accusing an unknown user at a home or business” of infringement would force the company to kill internet access at entire locations based on a “bare accusation” against a single user. A family of five loses their connection because one teenager allegedly downloaded a song. A hospital serving hundreds of patients and their families goes dark because someone on the guest Wi-Fi triggered an automated flag. A university campus gets throttled or disconnected because students were doing what students have always done.

“That notion turns Internet providers into Internet police and jeopardizes Internet access for millions of users,” Cox told the Court. The ACLU put the stakes in plain terms: “Parents’ Internet access…may be terminated based on the conduct of their children – or even their children’s friends. A hospital that offers internet access to dozens or even hundreds of patients and their families could find critical access shut off.”

That’s what the Fourth Circuit’s legal standard actively encouraged.
The appeals court had upheld Cox’s contributory liability while tossing the $1 billion damages figure, creating a rule that treated an ISP’s failure to disconnect accused infringers as grounds for massive financial exposure. If the Supreme Court had let that stand, every broadband provider in America would have been weighing the cost of a lawsuit against the cost of disconnecting a customer. The customer would lose that calculation every time.

And this would have landed hardest on the people with the least power to fight back. A corporate law firm with a dedicated IT department and legal counsel can challenge a wrongful disconnection. A single parent relying on one broadband connection for remote work, their kids’ schooling, telehealth appointments, and every other essential function that now runs through the internet cannot. The recording industry’s preferred enforcement model would have created a system where the most vulnerable internet users bore the greatest risk of losing access based on the flimsiest evidence.

Trump Backs FISA Section 702 Extension, Drops Privacy Reform

Two years ago, President Donald Trump told Congress to “KILL FISA.” On Wednesday, he asked them to keep it alive for another 18 months, no changes needed.

The president posted on Truth Social, urging lawmakers to pass a clean extension of Section 702 of the Foreign Intelligence Surveillance Act, the provision that lets US spy agencies intercept the communications of foreigners abroad without a warrant but that has repeatedly been used, directly or indirectly, to gather data on Americans. A clean extension means no new privacy protections. No warrant requirement for searching Americans’ data. No closing of the loopholes that let intelligence agencies buy your browsing history from data brokers instead of getting a judge’s approval.

Trump framed the ask around the ongoing military operations in Iran. “With the ongoing successful Military activities against the Terrorist Iranian Regime, it is more important than ever that we remain vigilant, PROTECT our Homeland, Troops, and Diplomats stationed abroad, and maintain our ability to quickly stop bad actors seeking to cause harm to our People and our Country,” he wrote. “The fact is, whether you like FISA or not, it is extremely important to our Military,” he added. “I have spoken to many Generals about this, and they consider it vital.”

The details of what Section 702 actually does tend to get buried under urgency. The provision nominally targets foreigners overseas, but the collection process vacuums up American communications too, every time a US citizen emails, texts, or calls someone abroad. Those intercepted messages sit in classified databases. FBI agents can then search that data using Americans’ names, phone numbers, and email addresses, all without a warrant. The FBI ran those warrantless searches more than 278,000 times in a single year, according to the Foreign Intelligence Surveillance Court.
The agency’s own inspector general found searches targeting peaceful protesters, sitting lawmakers, congressional staff, thousands of campaign donors, journalists, and at least one judge.

The 2024 reauthorization bill, known as RISAA, introduced some changes to search procedures. It did not add a warrant requirement. A House amendment that would have required one failed in a 212–212 tie, the thinnest possible margin, and Speaker Mike Johnson’s own vote against the amendment helped produce that tie.

Now Johnson is back, pushing the same approach. He told reporters Wednesday that the US does not “have the abuses that we had before,” and that “FISA as currently constituted, as we amended with 56 major reforms, is working as desired, and we do not have the abuses we did before.”

The claim that 56 reforms solved the problem deserves scrutiny. Those reforms limited the number of agents who can search the database and required supervisor approval before reviewing information on Americans. They did not require judicial oversight. A supervisor’s sign-off is not a warrant. An internal checklist is not the Fourth Amendment.

Trump’s pivot on FISA has pulled some notable Republicans with him. House Judiciary Chair Jim Jordan, who voted against reauthorization in 2024 specifically because the warrant amendment failed, reversed last week. He now supports the extension, calling it “a whole different context today.” The context he means is the Iran conflict, not any change to how the surveillance system treats Americans’ data.

President Trump himself acknowledged his own history as a surveillance target, noting that his 2016 campaign was spied on under a different FISA authority.
He said his administration has “worked tirelessly to ensure these reforms are being aggressively executed at every level,” and called for the extension to preserve the “Critical and Common Sense Reforms that were made in the last Reauthorization of FISA,” writing: “When used properly, FISA is an effective tool to keep Americans safe. For these reasons, I have called for a clean 18-month extension, HOWEVER, the Critical and Common Sense Reforms that were made in the last Reauthorization of FISA must remain intact to protect the American People from abuses. Nobody understands this better than me.”

The bill still faces resistance. Section 702 expires April 20, and Johnson was forced to delay the expected vote to mid-April after hard-line Republicans refused to fall in line. Rep. Keith Self has called the warrantless surveillance of US citizens a fundamental privacy issue. Rep. Anna Paulina Luna has pledged to oppose the legislation unless Congress attaches the SAVE America Act, a voter ID bill, creating a separate legislative standoff within the surveillance fight.

Rep. Lauren Boebert put it most simply: there were 56 reforms last year, and she wants to see 57. “I want warrants,” she said Wednesday. That demand, a warrant before the government searches Americans’ private communications, came one vote from passing in 2024.

Three reform bills currently sit before Congress: the SAFE Act, PLEWSA, and GSRA. Each would add a warrant requirement for queries involving Americans’ data. The clean extension ignores all three.

The pattern repeats every time Section 702 comes up for renewal. Intelligence officials warn that any reform will create dangerous gaps. Lawmakers who promised to fight for warrants find reasons to wait. The deadline pressure makes “just extend it” sound reasonable.
And the warrant requirement, the single reform that would bring this surveillance program in line with the Fourth Amendment’s basic protections against unreasonable search, gets pushed to the next cycle.

Congress built the two-year sunset into Section 702 specifically so that lawmakers would have regular opportunities to add meaningful protections. Instead, those deadlines have become opportunities to extend mass surveillance with fewer questions asked each time.

The post Trump Backs FISA Section 702 Extension, Drops Privacy Reform appeared first on Reclaim The Net.