Reclaim The Net Feed

@reclaimthenetfeed

Ohio’s Second Attempt at Adult Website Age ID Verification Advances

If you're tired of censorship and dystopian threats against civil liberties, subscribe to Reclaim The Net.

Ohio is taking a second shot at forcing adult websites to verify users’ ages, and this time the legislature is trying to close the legal escape route that let adult websites and others walk away from the first attempt. The Innocence Act, House Bill 84, passed the Ohio House on March 18 and moved to the Senate the following day. We obtained a copy of the bill for you here.

The bill requires any company that “sells, delivers, furnishes, disseminates, provides, exhibits, or presents any material or performance that is obscene or harmful to juveniles on the internet” to deploy age verification. There are no carve-outs for platforms that host third-party content.

That shelter is exactly what Aylo, Pornhub’s parent company, claimed under Ohio’s original age verification law. Section 230 of the Communications Decency Act shields platforms from liability for content posted by their users, and Aylo argued that hosting user-generated content made it an “interactive computer service” under that definition, exempting it from Ohio’s age-gating requirements. The argument worked. The original law’s language mirrored the federal statute closely enough that Aylo and other adult platforms successfully sidestepped enforcement entirely.

HB 84 rewrites those definitions to cut off that route. It also replaces the criminal penalties from an earlier version of the bill, which included misdemeanor charges for minors who bypassed content blocks, with civil fines reaching $100,000 per day for noncompliance. Enforcement falls to Ohio Attorney General Dave Yost, whose office worked with Republican state Reps. Steve Demetriou and Josh Williams on the bill’s drafting. The measure passed the House Technology and Innovation Committee unanimously before advancing to a floor vote, and a path to Governor Mike DeWine’s signature looks clear.
The age verification these laws require is worth examining directly. To access legal content as an adult, users must submit identity documents, biometric data, or other credentials to platforms or third-party verification services. That data then exists somewhere, held by someone, subject to breach, subpoena, and uses that weren’t disclosed at the point of collection.

The stated goal is to protect children. The actual mechanism is building a database of adults who watch pornography, linked to a verifiable identity.

Demetriou introduced an earlier Innocence Act version that imposed criminal penalties on minors who circumvented age blocks, a provision that treated teenagers as criminals for doing what teenagers do online. That’s gone from HB 84. What remains is the identity verification infrastructure itself, framed as child protection while functioning as a surveillance requirement for adult content consumption.

Ohio isn’t alone in pursuing this, but it is among the states most determined to make it work regardless of the legal obstacles that keep appearing in the way.

The post Ohio’s Second Attempt at Adult Website Age ID Verification Advances appeared first on Reclaim The Net.

GrapheneOS Defies Age Verification Surveillance Laws, Vowing to Protect User Privacy Worldwide

GrapheneOS has a simple answer to the wave of age verification laws moving through US state legislatures and already live in Brazil: no. The privacy-focused Android fork announced last Friday that it won’t implement the age data collection these laws demand. “GrapheneOS will remain usable by anyone around the world without requiring personal information, identification, or an account,” the project stated. “If GrapheneOS devices can’t be sold in a region due to their regulations, so be it.” That’s a blunter response than most OS developers are willing to give, and it’s worth understanding what it’s actually refusing.

More: An Introduction to GrapheneOS

Brazil’s Digital ECA (Law 15.211) came into force on March 17, hitting OS providers with fines of up to R$50 million (roughly $9.5 million) per violation for failing to build age verification into device setup. California’s Digital Age Assurance Act, AB-1043, signed by Governor Newsom in October 2025 and effective January 1, 2027, goes further: it requires every OS provider to collect a user’s age or date of birth during account setup, then push that data to app stores and developers through a real-time API. Colorado’s SB26-051 cleared the state senate on March 3 with similar demands. The architecture these laws collectively envision is an age-linked identity layer baked into the operating system itself, present before you’ve opened a single app.

GrapheneOS is developed by the GrapheneOS Foundation, a registered Canadian nonprofit. California’s AB-1043 carries civil penalties of up to $2,500 per affected child for negligent violations and $7,500 for intentional ones, enforced by the state attorney general. The Canadian nonprofit status provides some distance but not a guarantee.
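To make the OS-level architecture concrete: an AB-1043-style law mandates that the operating system collect a date of birth at setup and expose an age signal to app stores and developers. The sketch below is purely illustrative; the bracket names and function signatures are invented assumptions, since the law mandates a data flow, not any specific API.

```python
from datetime import date

# Hypothetical age brackets an OS-level API might expose to apps.
# These labels and cutoffs are illustrative assumptions, not a real API.
BRACKETS = [(0, 13, "under_13"), (13, 16, "13_to_15"),
            (16, 18, "16_to_17"), (18, 200, "adult")]

def age_bracket(birth_date: date, today: date) -> str:
    """Map a date of birth collected at device setup to a coarse bracket."""
    age = today.year - birth_date.year - (
        (today.month, today.day) < (birth_date.month, birth_date.day))
    for low, high, label in BRACKETS:
        if low <= age < high:
            return label
    raise ValueError("age out of range")

def signal_for_app(birth_date: date, today: date) -> dict:
    """The payload an OS might push to app stores and developers.
    Even a coarse bracket is a persistent, queryable attribute tied to
    the device owner, created before any app is opened."""
    return {"age_bracket": age_bracket(birth_date, today)}
```

The point of the sketch is that even when the signal is reduced to a bracket rather than a raw birth date, the OS still has to collect and retain the underlying identity data to generate it.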
The stakes grew more concrete after GrapheneOS and Motorola announced a partnership at MWC on March 2, bringing the hardened OS to future Motorola hardware and ending GrapheneOS’s long exclusivity to Google Pixel devices. A GrapheneOS-powered Motorola phone is expected in 2027. Once a major hardware manufacturer ships devices with GrapheneOS pre-installed, those products need to comply with local regulations in every market where they’re sold, or Motorola will have to restrict sales geographically. The defiant stance that’s easy for a nonprofit software project becomes a commercial problem for a global device manufacturer.

GrapheneOS isn’t alone in refusing. The developers of DB48X, an open-source calculator firmware, recently issued a legal notice stating their software “does not, cannot, and will not implement age verification.” MidnightBSD went further by updating its license to block users in Brazil entirely. The refusals are coming from projects that share a core belief: building government-mandated surveillance infrastructure into software is worse than losing market access.

Over 400 computer scientists signed an open letter arguing that these laws build surveillance architecture without meaningfully protecting children. The real output is a mandatory data pipeline connecting OS providers, app stores, and developers, with real-time ID signals tied to device setup and no clear answer to what that infrastructure gets used for in the future.

FC Barcelona Fined for Privacy Violations Over Biometric Data Collection

FC Barcelona got fined €500,000 ($579,219) for scanning the faces and recording the voices of over 100,000 members without doing the legal homework first. Spain’s data protection authority, the AEPD, found the club had deployed biometric identity verification during a membership census update and processed all of it without a valid Data Protection Impact Assessment. Members renewing their details remotely were required to either submit a facial scan through their device camera or record their voice. Both systems were live, both were processing biometric data at scale, and the documentation Barcelona produced to justify any of it didn’t meet the bar GDPR sets for high-risk processing.

Article 35 of the GDPR requires organizations to conduct a DPIA before deploying any system likely to create a high risk for individuals. Biometric data used for identification qualifies automatically. Processing that touches more than 100,000 people, including minors, qualifies. Using new technologies qualifies. Barcelona’s system hit all three. The AEPD concluded the club’s documentation was missing the essential components of a genuine assessment: no real necessity and proportionality analysis, no adequate evaluation of what the processing actually risks for the people whose faces and voices it captured.

The AEPD’s decision in case PS-00450-2024 makes one point with particular clarity: consent doesn’t substitute for a DPIA. Barcelona had asked members to agree to biometric data collection, and members had agreed. That agreement is legally irrelevant to the separate procedural obligation to assess risk before the system goes live. The GDPR treats them as independent requirements. Satisfying one doesn’t discharge the other.
What a valid DPIA actually requires, according to the decision, is a clear description of the processing, a genuine necessity and proportionality assessment, a detailed risk evaluation, proposed mitigation measures, and a residual risk assessment after mitigations are applied. Organizations that generate DPIA documentation as a compliance checkbox, without substantively working through those questions, remain exposed regardless of what consent language they put in front of users.

The appetite for facial biometric data has become near-universal across industries, and the Barcelona case lands in a moment when that appetite is accelerating faster than the rules meant to govern it. Banks deploy facial recognition for customer onboarding. Retailers use it for age verification at the point of sale. Hospitals scan patients at check-in. Stadiums have replaced tickets with face scans. The framing is always convenience, security, or safety. But organizations across every sector are building permanent biometric records of the people they serve, often without seriously asking whether they need to.

Facial recognition now accounts for nearly 30% of biometric authentication usage among American users, with roughly 131 million daily interactions processed across the country. More than half the US population engages with recognition systems regularly. That infrastructure touches the daily lives of hundreds of millions of people, most of whom have little meaningful understanding of what’s being captured or where it goes.

The fundamental problem with all of this is one that organizations consistently downplay: facial biometrics cannot be changed if compromised, creating a permanent vulnerability that persists throughout an individual’s lifetime. Unlike passwords, credit cards, or even social security numbers, facial features represent permanent identifiers that cannot be reset or replaced. When a company stores your face geometry and gets breached, you don’t get to change your face.
The exposure is for life.
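The five DPIA components the AEPD decision demands amount to a completeness check, which is exactly the check Barcelona’s checkbox documentation failed. A minimal sketch; the field names below are this article’s paraphrase of the requirements, not official AEPD or GDPR terminology.

```python
# The five DPIA components the decision identifies, per the article above.
# Field names are illustrative, not official AEPD or GDPR terminology.
REQUIRED_SECTIONS = (
    "processing_description",      # clear description of the processing
    "necessity_proportionality",   # genuine necessity/proportionality analysis
    "risk_evaluation",             # detailed evaluation of risks to individuals
    "mitigation_measures",         # proposed measures to address those risks
    "residual_risk_assessment",    # risk remaining after mitigations apply
)

def missing_sections(dpia: dict) -> list[str]:
    """Return the required sections a draft DPIA leaves empty or absent.
    A document that skips any of these stays non-compliant, whatever
    consent language sits in front of users."""
    return [s for s in REQUIRED_SECTIONS if not dpia.get(s)]
```

Run against a draft that only describes the processing and lists risks, it would flag the missing necessity analysis, mitigations, and residual-risk assessment, which mirrors the gaps the AEPD found.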

Canada’s Public Safety Minister Defends Mass Surveillance Bill

Canada’s Public Safety Minister, Gary Anandasangaree, wants you to know that Bill C-22 is not a surveillance bill. He said so twice. “I want to be very clear about what C-22 is not. It is not about the surveillance of honest, hard-working Canadians going on about their daily lives,” Anandasangaree told an audience that included police chiefs and law enforcement officials. Then, a few sentences later: “We’re not looking for sneaky ways to surveil Canadians. We are doing our part to combat bad actors in both the physical and digital worlds.”

What he described is a surveillance bill. The Lawful Access Act, introduced this month, compels electronic service providers to retain Canadians’ metadata for a year and gives police and CSIS new mechanisms to access it. That includes location data, device identifiers, and daily movement patterns, all stored in advance, on every Canadian, not just suspects, held ready for law enforcement retrieval.

The minister’s framing works by narrowing the definition of surveillance to something sinister-sounding, then positioning C-22 outside it. But mandatory data retention doesn’t need to be sneaky. It just needs to be mandatory. Location data, even without message content, tells a detailed story: where someone sleeps, which doctor they visit, which protests they attend, and which religious services they go to. All of that sits in private company servers for twelve months, organized and catalogued for law enforcement use, because the government decided it might be useful someday.

The bill does pull back from its predecessor.
Bill C-2, which stalled after widespread opposition from rights groups, opposition parties, and the tech industry, would have let police ask any service provider, including those bound by professional privilege, whether someone was a client and where they connected from, all without a warrant. C-22 limits warrantless inquiries to telecommunications companies only, restricting the initial question to a simple yes-or-no on client status. Further information requires a warrant.

Anandasangaree acknowledged the retreat. “One thing I’ve learned is that at times when more work needs to be done on a particular bill, you retreat and you come back. You come back with better consensus, better consultation, and better supports from across the board,” he said.

What didn’t change is the central mechanism. Companies must warehouse sensitive data on every Canadian citizen on behalf of the state. The narrower scope is a concession. The underlying premise, that private communications infrastructure should be pre-organized for law enforcement convenience, remains intact.

The bill’s most concerning section authorizes the Minister of Public Safety to issue secret orders compelling designated “core” electronic service providers, a category the government hasn’t fully defined, to build and maintain surveillance capabilities. Companies that receive these orders cannot disclose them. The government included a restriction: these capabilities cannot introduce systemic vulnerabilities or weaken encryption. That’s a real limit. It’s also written by the same government issuing the secret orders, with no public accountability for how it’s applied.

C-22 also creates a new warrant mechanism for data held by foreign companies, mostly American tech giants. A Canadian judge issues a production warrant that doesn’t legally bind the foreign company but gives it legal cover to hand over data voluntarily. Whether companies cooperate is entirely their choice.
It’s a workaround, not enforceable access, and its usefulness depends on corporate goodwill.

The minister’s pitch leaned on a genuine problem: modern criminals do use digital tools, and laws written before smartphones are often inadequate. “Our laws are stuck in a century while technology has essentially moved forward,” he said. “Every desktop computer and every technology out there have significant abilities not just to communicate but also to deter those who use crime as a tool to be able to conceal information that will be critically important for law enforcement.”

That’s a real argument. It’s also a reason to update warrant procedures, not to pre-collect data on everyone. The government chose the second option and then insisted it wasn’t surveillance.

Canada is, Anandasangaree noted, the only Five Eyes G7 country without a lawful access regime in place. What he didn’t address is whether that’s a gap to close or a standard worth keeping.
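The “detailed story” that retained location metadata tells is easy to demonstrate. Given nothing but timestamped location pings, the kind of record C-22-style retention would have providers warehouse for a year, a few lines of analysis recover a subscriber’s likely home. The records below are invented for illustration; no real data or provider format is assumed.

```python
from collections import Counter
from datetime import datetime

# Invented retained-metadata records: (timestamp, cell/area identifier).
# Mandatory retention stores logs of exactly this shape for every subscriber.
pings = [
    (datetime(2026, 3, 1, 2, 10), "cell_home"),
    (datetime(2026, 3, 1, 3, 40), "cell_home"),
    (datetime(2026, 3, 1, 14, 5), "cell_clinic"),
    (datetime(2026, 3, 2, 1, 55), "cell_home"),
    (datetime(2026, 3, 2, 13, 20), "cell_work"),
]

def likely_home(records) -> str:
    """The cell seen most often overnight is almost always 'home'.
    No message content is needed: the pattern is in the metadata itself."""
    overnight = Counter(cell for ts, cell in records
                        if ts.hour < 6 or ts.hour >= 22)
    return overnight.most_common(1)[0][0]
```

The same grouping trick, applied to daytime hours or weekend mornings, yields workplace, clinic visits, and religious attendance, which is the point the article makes about what twelve months of "mere metadata" actually contains.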

Blackburn’s TRUMP AMERICA AI Act Repeals Section 230, Expands AI Liability, and Mandates Age Verification

Senator Marsha Blackburn has introduced a 291-page legislative discussion draft that would reshape how information is allowed to exist online. The TRUMP AMERICA AI Act, officially the “The Republic Unifying Meritocratic Performance Advancing Machine intelligence by Eliminating Regulatory Interstate Chaos Across American Industry” Act, bundles together Section 230 repeal, expanded AI liability, age verification mandates, and a stack of additional bills that have been circulating separately for years. All of it is wrapped in a national AI framework that the bill ties to President Trump’s December Executive Order.

The bill is framed as pro-innovation and pro-safety, designed to “protect children, creators, conservatives, and communities” while positioning the US to win the global AI race. What the actual 291 pages describe is a system that centralizes regulatory authority, removes the legal protections platforms currently rely on, and hands new enforcement tools to federal agencies, state attorneys general, and private litigants simultaneously. We obtained a copy of the bill for you here.

The legal foundation of the modern internet is Section 230 of the Communications Decency Act. It shields platforms from being sued for the content that users post. Without Section 230, platforms could become legally responsible for what their users post, which could mean anything controversial, contested, or legally ambiguous becomes a liability they’ll quietly remove rather than defend. Blackburn’s bill repeals it entirely, after a two-year transition period. Platforms and AI developers could face lawsuits for “defective design,” “failure to warn,” or deploying systems deemed “unreasonably dangerous.” AI platforms would be incentivized to heavily monitor users.
Enforcement doesn’t sit only with federal regulators; state attorneys general and private actors both get standing to sue. The downstream effect on publishing is direct. Once liability protections go, platforms can no longer host content neutrally. Reporting on contentious subjects doesn’t need to be factually wrong to become a liability problem. It just needs to be frameable as “harmful.” The predictable result: platforms tighten policies, reduce reach, or quietly stop hosting the material that exposes them most.

The bill requires AI developers to prevent “reasonably foreseeable harms” from their systems. “Harm,” “foreseeable,” and “contributing factor” are not defined in fixed terms. They get decided after the fact, by regulators and courts working from evolving interpretations. An AI output can be judged unlawful under standards that didn’t exist when the system produced it. For developers, the rational response is aggressive preemptive restriction: building systems that refuse more, flag more, and generate less of anything that might one day attract a lawsuit.

Blackburn frames the bill as clearing up a “patchwork of state laws” through a single national standard. The agencies empowered to define and enforce that standard: the FTC, DOJ, NIST, and Department of Energy. Rather than competing state-level experiments, this creates a centralized governance structure where a handful of federal bodies set the rules for AI development across the entire country.

Blackburn’s framework absorbs several existing proposals wholesale. Each one carries its own surveillance and censorship architecture. The Kids Online Safety Act (KOSA) brings algorithmic systems under federal oversight. Platforms would be required to modify personalized recommendation engines, disable infinite scrolling and autoplay, and restrict notification systems to prevent “compulsive usage.” This goes beyond content moderation.
It regulates how information gets ranked, delivered, and amplified at the system level.

The NO FAKES Act creates new liability for AI-generated replicas of individuals’ voices or likenesses, and extends that liability to platforms that knowingly host unauthorized material. Anyone can sue. Platforms that fail to comply with takedown requirements face substantial fines.

The GUARD Act mandates age verification for AI chatbot makers, bans minors from access, and requires additional child safety measures. Age verification at this scale means identity verification. The data collected to confirm someone isn’t a minor doesn’t disappear after the check.

The AI LEAD Act introduces federal liability standards covering defective design, failure to warn, and strict liability for AI products deemed “unreasonably dangerous,” the same framework being imported into the broader bill.

The bill explicitly declares that training AI models on copyrighted works is not fair use. That single provision opens the door to litigation against virtually every major AI developer. It also establishes liability for unauthorized use of a person’s voice or likeness in AI-generated content, covering both training and deployment.

NIST gets directed to develop national standards for content provenance and watermarking of AI-generated media, with requirements that AI providers allow content owners to attach provenance data to their work and prohibitions on its removal. The infrastructure this builds tracks the origin and authenticity of digital content across platforms at a technical level. Surveillance is the word for it, even when it’s being sold as authentication.

Removing Section 230 and introducing broad legal exposure creates a system where platforms and AI developers live under constant litigation risk tied to content, outputs, and system behavior. That converts platform self-censorship from a choice into a survival strategy. The bill doesn’t need government agents flagging articles.
It just needs to make the legal cost of hosting difficult reporting high enough that platforms do the math themselves.
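The provenance-and-watermarking mandate described above implies machine-readable metadata bound to content and protected against removal. A minimal sketch of what that binding looks like in practice, using a content hash and a seal over the manifest; NIST has not published these standards, so every field name and function here is an invented illustration, not a real format.

```python
import hashlib
import json

def attach_provenance(content: bytes, manifest: dict) -> dict:
    """Bind an (invented) provenance manifest to content by hashing both.
    Editing the content or stripping manifest fields breaks the seal,
    which is how a removal prohibition becomes technically checkable."""
    manifest = dict(manifest,
                    content_sha256=hashlib.sha256(content).hexdigest())
    manifest["seal"] = hashlib.sha256(
        json.dumps(manifest, sort_keys=True).encode()).hexdigest()
    return manifest

def verify_provenance(content: bytes, manifest: dict) -> bool:
    """Check that content and manifest still match the recorded seal."""
    unsealed = {k: v for k, v in manifest.items() if k != "seal"}
    if unsealed.get("content_sha256") != hashlib.sha256(content).hexdigest():
        return False
    recomputed = hashlib.sha256(
        json.dumps(unsealed, sort_keys=True).encode()).hexdigest()
    return recomputed == manifest.get("seal")
```

The sketch also shows why the article calls this surveillance infrastructure: the manifest travels with the content across every platform that checks it, so whatever origin data it carries is tracked wherever the content goes.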