Reclaim The Net Feed

@reclaimthenetfeed

EU Admits X’s Open Data Skews Disinformation Findings While Fining Platform for Restricting Researchers

reclaimthenet.org

If you're tired of censorship and dystopian threats against civil liberties, subscribe to Reclaim The Net.

The EU’s own diplomatic service has published a report admitting that X makes its data more accessible to researchers than other major platforms, and then used that admission to brand X the primary channel of “foreign information manipulation and interference” against the bloc. The European External Action Service (EEAS) put this in writing. The media ran with the conclusion and buried the caveat.

The fourth annual FIMI Threats report, released this month, found that “88% of instances were concentrated on the platform X. The presence of CIB networks, the ease of creation of fabricated accounts, but also more straightforward access to data, explains this concentration. Most of the major social media platforms restrict access to data that would allow for assessing the magnitude of information manipulation activities.”

Read that again. The EEAS is telling you that X appears dominant in its findings partly because X lets researchers see what’s happening, while other platforms don’t. Facebook, TikTok, Instagram, YouTube: their data is locked down. So the manipulation happening there goes unmeasured. X gets flagged precisely because it’s more open.

That context was largely absent from the headlines that followed. Polskie Radio ran with “Social network X is the main channel of disinformation against the EU and politicians are the biggest targets.” Plataforma Media went with “X (Twitter) main disinformation channel against EU.” Neither headline mentioned that the EU’s own analysts acknowledged that a significant part of this concentration reflects X’s comparatively open data environment, not just the actual prevalence of manipulation on the platform.

The timing makes this worse. Three months before the FIMI report landed, the European Commission fined X €120 million under the Digital Services Act.
One of the three violations cited was the failure to provide access to public data for researchers: X’s terms of service prohibit eligible researchers from independently accessing its public data, including through scraping, and X’s processes for researchers’ access to public data impose unnecessary barriers, effectively undermining research into several systemic risks in the European Union.

So the EU fined X for restricting researcher access to data. Then the EEAS published a report crediting X’s comparatively open data access as a reason it dominates the FIMI numbers. Both things happened. Neither position was retracted, and the Commission’s fine remains on the books.

The contradiction gets sharper when you look at what was happening in Germany around the same time. Two NGOs, Democracy Reporting International (DRI) and the Society for Civil Rights (GFF), sued X under the DSA for refusing to hand over data ahead of Germany’s February 2025 federal election. “Other platforms have granted us access to systematically track public debates on their platforms, but X has refused to do so,” said Michael Meyer-Resende of DRI. A Berlin court sided with the NGOs and ordered X to comply.

The funding behind that lawsuit is worth noting. DRI’s largest single funder is the European Commission itself, which provided €5.7 million in 2023 alone. The same institution that fined X €120 million for DSA non-compliance is also the primary financial backer of the group that just won a court order forcing X to comply with the DSA.

GFF’s funding trail has its own texture. The Mozilla Foundation granted money to GFF specifically to support “enforcement of research data access based on the DSA,” the precise legal mechanism at the center of this lawsuit. Mozilla’s revenue comes overwhelmingly from Google, via a search engine deal. DuckDuckGo also appears on GFF’s donor list.

The same pattern repeated in February this year.
A Berlin court ordered X to hand over data on Hungarian election activity to researchers, again ruling in favor of DRI after X refused. Hungary votes in April.

X’s performance in this area was serious enough to form part of the basis of the European Commission’s €120 million fine decision, which found that X accepts only 4.7 percent of the data access requests it receives. That’s the Commission’s own figure. Most formal research requests to X get rejected. And yet, according to the EEAS, the platform still provides “more straightforward access to data” than its competitors. Which means the others are offering even less. The platforms that accept close to zero research requests are shielded from the FIMI statistics entirely. Their manipulation problems don’t show up in the numbers because researchers can’t get at the data to find them.

The FIMI report covered 540 incidents detected throughout 2025. The EEAS is careful to note that identified trends should not be interpreted as exhaustive, as the analysis remains shaped by the focus and scope of monitoring efforts. That disclaimer appears in the small print. The headline number, 88% on X, does not come with it.

What the EU has built here is a measurement system that rewards opacity. Platforms that restrict data access don’t show up in the statistics; they’re not transparent enough to be monitored. X, which at least allows more data to flow than the alternatives, becomes the visible target. More visibility equals more accountability equals more blame. Close your data off and disappear from the count.

White House AI Framework Pushes Age Verification ID Mandate

reclaimthenet.org

The White House has published a National AI Legislative Framework, a set of recommendations to Congress intended to govern artificial intelligence with a single uniform standard rather than, as the document puts it, “a patchwork of conflicting state laws.” The administration wants federal law to preempt the states. That part is straightforward.

What the framework actually proposes is less straightforward. Alongside a genuine free speech provision, the document contains age verification mandates, chat surveillance requirements, national security carve-outs that would tighten the relationship between AI companies and federal intelligence agencies, and an expansion of the TAKE IT DOWN Act, a law that we have already flagged for lacking adequate safeguards against censorship. The White House is presenting all of this as part of the same coherent package.

Start with the child protection section: Congress should establish “commercially reasonable, privacy protective, age-assurance requirements (such as parental attestation) for AI platforms and services likely to be accessed by minors.” Age verification on AI platforms.

The framework calls these requirements “privacy protective.” They are not. There is no version of meaningful age verification that doesn’t require collecting sensitive personal data, and there is no version of collecting sensitive personal data at scale that isn’t a breach waiting to happen. The only tools platforms have are identity-based checks (government IDs, biometric scans, credit card data, third-party verification services) or biometric estimation. The only way to prove that someone is old enough to use a site is to collect personal data about who they are.

In October 2025, Discord identified 70,000 users globally who potentially had their photo IDs exposed to hackers.
Discord said the data was accessed through a third-party service provider. Discord’s own support pages had said it did “not permanently store personal identity documents or your video selfies,” and that images of identity documents were “deleted directly after your age group is confirmed.” 70,000 government IDs leaked anyway. The promise of deletion and the reality of third-party data handling are different things. The Tea and Discord breaches highlight regulators’ inability to prevent data retention or enforce data deletion in practice.

And that’s one breach. In 2024, Australia greenlit an age verification pilot, and hours later, a mandated verification database for bars was breached. That same year, another ID verification service was breached, exposing private information collected on behalf of Uber, TikTok, and more. The identity verification company AU10TIX left login credentials exposed online for more than a year, allowing access to data including users’ names, dates of birth, nationality, identification numbers, and the type of document uploaded, such as a driver’s license, along with images of those documents.

This keeps happening because it has to keep happening. It’s the inevitable result of a system designed to aggregate the exact kind of data that attackers most want to steal.

The problem compounds when third parties are involved, which they always are. A platform doesn’t run its own verification infrastructure; it contracts it out. Under these laws, users would not just momentarily display their ID the way one does at a liquor store. Instead, they’d submit their ID to third-party companies, raising major concerns over who receives, stores, and controls that data. Each additional company in the chain is another breach target, another entity that may retain data beyond its stated policy, another entity potentially beyond the reach of US enforcement.
Platforms must store biometric data, ID images, and verification logs long enough to defend their decisions to regulators. Each retained record becomes a potential breach target. Scale that across millions of users, and you bake the privacy risk into how platforms work.

There’s also the chilling effect that age verification creates before anyone’s data leaks. Anonymous and pseudonymous speech has always been part of how people participate in political life online. Many of the world’s internet users live in countries where people have been arrested or imprisoned for posting content about political or social issues, and that number is increasing as European countries and the UK join those ranks. In environments like these, there is considerable risk in connecting a person’s online activities to a photo of their face or their identification card. The US isn’t typically one of those countries. But the infrastructure being built here gets exported, copied, and adapted. The choice to create centralized identity databases for platform access is a choice about what the global internet looks like, not just domestic policy.

The framework’s “privacy protective” framing doesn’t engage with any of this. It uses the phrase to describe requirements it knows will force platforms to collect government-issued identification or biometric data from every adult user, route that data through third-party vendors, and retain enough of it to prove compliance to regulators.

The same section requires AI platforms likely to reach minors to “implement features that reduce the risks of sexual exploitation and self-harm to minors.” That sounds reasonable until you ask how an AI platform is supposed to detect self-harm content in real time across millions of users. The answer is mass scanning of user conversations. The framework doesn’t say “mass surveillance.” It says “implement features.” The effect is the same.

Angela Lipps Spent 108 Days in Jail Because a Facial Recognition Algorithm Was Wrong

reclaimthenet.org

Angela Lipps spent nearly six months in jail because an algorithm looked at surveillance footage and decided she matched the suspect. She had never been to North Dakota. She had never been on a plane. A facial recognition system said otherwise, and police took that as enough.

Lipps, a 50-year-old mother and grandmother from north-central Tennessee, was arrested at her home in July while babysitting four children. US marshals arrived with guns drawn. She was booked as a fugitive from justice. “I’ve never been to North Dakota, I don’t know anyone from North Dakota,” she told WDAY News.

The case began with bank fraud in Fargo. Between April and May 2025, someone used a fake US Army military ID to withdraw tens of thousands of dollars from banks across the city. Detectives pulled surveillance footage of a woman at the counters and fed it into facial recognition software. The software returned a name: Angela Lipps. A detective wrote in court documents that Lipps appeared to match the suspect based on facial features, body type, and hairstyle. That assessment, made by software and rubber-stamped in a report, was treated as sufficient cause for arrest. Nobody from the Fargo police called Lipps before the marshals showed up at her door.

She sat in a Tennessee county jail for 108 days waiting for North Dakota to arrange her transport. No bail. Four counts of unauthorized use of personal identifying information. Four counts of theft. The algorithm had spoken.

Her attorney, Jay Greenwood, told InForum: “If the only thing you have is facial recognition, I might want to dig a little deeper.” Fargo police did not dig deeper. What eventually cleared Lipps was her bank records, which showed she had been more than 1,200 miles away in Tennessee during every transaction investigators said she committed in North Dakota.
Greenwood obtained those records and brought them to the investigators. Lipps was released on Christmas Eve.

The story didn’t end there. While locked up and unable to pay bills, Lipps lost her home, her car, and her dog. When Fargo police released her, they didn’t arrange her trip back to Tennessee. Defense attorneys helped cover a hotel room and food over Christmas. A local nonprofit, the F5 Project, got her home. As of InForum’s reporting, nobody from the Fargo police department had apologized.

This is how facial recognition operates: it generates a match, law enforcement acts on it, and the burden of disproving a computer’s guess falls entirely on the person whose life gets upended. Lipps had to produce documentary evidence of her own location to escape charges based on software that was simply wrong.

The Lipps case is not unusual. Last October, an AI system at a Baltimore school identified a bag of Doritos as a firearm and notified police. Officers arrived armed at Kenwood High School, forced student Taki Allen to his knees, handcuffed him, and searched him. They found nothing.

In the UK, Shaun Thompson, 39, had just finished a volunteer shift with Street Fathers, a group dedicated to steering young people away from knife crime, when the Metropolitan Police’s live facial recognition cameras flagged him outside London Bridge station. Officers detained him for nearly half an hour, demanded his fingerprints, and threatened arrest, even as he produced multiple forms of ID proving he wasn’t the person they were looking for. “They were telling me I was a wanted man, trying to get my fingerprints and trying to scare me with arrest, even though I knew and they knew the computer had got it wrong,” he said. Thompson is now bringing the first legal challenge of its kind against the Metropolitan Police’s use of live facial recognition. The man the algorithm flagged as a criminal was spending his evening trying to prevent crime. The technology made no distinction.
What these cases share is a common architecture: a system makes an identification, human oversight treats that identification as reliable, and the person flagged has no recourse until significant damage has already been done.

EU Proposal Links European Business Registration to Digital ID Wallets

reclaimthenet.org

The European Commission just proposed attaching a verified digital identity to every company operating across the EU. The framework, called EU Inc., pairs a new pan-European corporate legal structure with something called the EU Business Wallet: a credential holding a company’s identity, ownership structure, and legal status, shareable with public authorities across all 27 member states on request.

The pitch is speed and simplicity. Under EU Inc., businesses can register anywhere in the EU within 48 hours for a maximum fee of €100. The Commission wants the European Parliament and Council to agree on the proposal by the end of 2026, with the full single market vision operational by 2028. Commission President Ursula von der Leyen framed it as the opening move in something larger: “This crucial step is just the beginning. Our goal is clear: one Europe, one market, by 2028.”

https://video.reclaimthenet.org/articles/528651283068421146.mp4

That goal has a data architecture attached to it, and the architecture is a plan for something bigger. The EU Business Wallet isn’t a filing cabinet for company documents. It creates a standardized, machine-readable identity layer for businesses, one that links verified corporate credentials to the individuals authorized to act on the company’s behalf. The wallet builds on eIDAS 2, the regulation already requiring all 27 member states to provide digital identity wallets to citizens by the end of 2026. The corporate credential and the individual credential tie together: a company’s legal structure, its beneficial owners, and the people signing on its behalf all become traceable through a chain of verified, shareable credentials. The EU Business Wallet was first announced in November 2025 and isn’t yet fully operational.

The justification is administrative efficiency and anti-fraud. Cross-border registration harmonized across national registers.
Compliance paperwork that stops getting lost at borders. These are real problems. The question, as always with identity infrastructure at this scale, is what gets built in solving them.

The playbook is recognizable. Governments that have struggled to sell digital identity schemes directly to citizens are finding a more compliant entry point: business. Make it a compliance requirement, frame it as anti-fraud, attach it to something people already have to do, and watch enrollment climb. By the time anyone objects at scale, the infrastructure is built and the identifiers are issued.

The UK ran this experiment first, and recently, with company directors. From 18 November 2025, all directors and persons with significant control of a UK-registered company are legally required to complete a digital identity verification check with Companies House. The government estimates that 6 to 7 million individuals will need to verify their identity by mid-November 2026. The verification routes through GOV.UK One Login. Without a verified personal code, directors cannot file confirmation statements, appoint other directors, or update any company records. The system won’t let them proceed.

A flaw introduced during a Companies House system update in October 2025 left directors’ residential addresses, dates of birth, and email addresses potentially visible to other logged-in users for five months, the same five months the agency was enrolling millions of those directors into its mandatory identity verification system.

Companies House insists this doesn’t constitute a digital ID. The claim is technically careful and substantively hollow. A system in which directors must upload their passport or driving licence and receive a persistent digital code linking them to their company record is a state-controlled digital ID by any common-sense definition. The British public had already noticed: a parliamentary petition against mandatory digital ID gathered almost 3 million signatures.
That opposition formed around a separate national ID announcement, but the Companies House rollout was moving in parallel, mostly below the noise threshold. By the time the petition was making headlines, the director verification system had been running for weeks.

The justification in the UK was anti-fraud, specifically the Economic Crime and Corporate Transparency Act 2023, targeting shell companies and money laundering. Fraud reduction doesn’t require universal enrollment of everyone who runs a business, from the director of a logistics company to the trustee of a local charity. Universal enrollment does. HMRC has since joined GOV.UK One Login, bringing the number of government services accessible through the platform to over 200. The Companies House requirement didn’t create this system. It populated it. Millions of verified identities, enrolled under a compliance obligation, now form the foundation for whatever One Login becomes next.

The EU Inc. proposal follows the same structural logic at continental scale. The scheme is technically optional. Companies don’t have to register under EU Inc. or use the wallet. For now. What isn’t optional is the compliance obligation that makes verification necessary to access the benefits. Once every public authority across 27 member states is set up to accept and request wallet-based credentials, the calculus for any business wanting smooth access to EU markets shifts considerably. Optional frameworks have a way of becoming the path of least resistance, then simply the path.

What’s being built, in both cases, is persistent verified identity at scale, enrolled through commercial obligation rather than civic choice. The UK version covers millions of company directors.

Linux Distros Are Mounting a Response to the Age Verification ID Laws Coming for Your OS

reclaimthenet.org

This post is for paid supporters.