Reclaim The Net Feed

Apple Forces UK iPhone Age Checks in iOS 26.4
reclaimthenet.org

If you're tired of censorship and dystopian threats against civil liberties, subscribe to Reclaim The Net.

With iOS 26.4, Apple has turned every iPhone in the UK into an identity checkpoint. The update, released March 24, requires all UK users to confirm they’re 18 or older before accessing certain features and services on their Apple Account. UK communications regulator Ofcom called it “a real win for children and families.” The infrastructure being built is more of a problem than that framing suggests. Apple has, without warning, placed a gatekeeper on the devices of 35 million UK users who paid for full-featured smartphones and now find themselves holding something closer to a supervised children’s tablet. It’s a corporate ultimatum: hand over sensitive personal data or lose functionality you already paid for.

The verification prompt appears immediately after the update installs. Apple checks whether your account already has a credit card linked, or whether the account has existed long enough to establish you as an adult. For many existing users, the process is essentially automatic. For everyone else, the options narrow quickly: link a credit card, scan a government-issued photo ID, or accept that your account defaults to teen restrictions, with Apple’s Web Content Filter and Communication Safety features switched on across all browsers, messaging apps, and FaceTime.

Web Content Filter blocks websites Apple classifies as explicit, operating across Safari and third-party browsers alike. Communication Safety scans incoming and outgoing images and videos for nudity. Both activate silently for anyone who hasn’t cleared the adult threshold. Users without a credit card or government ID have no other path, and reports from UK users confirm it: scan the card, upload the ID, or live with restricted access.
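The gating flow described above (existing account signals checked first, then explicit verification, otherwise default-on teen restrictions) can be sketched as a short decision function. This is a hypothetical reconstruction, not Apple's actual implementation: the account-age threshold, field names, and tier labels are all assumptions for illustration.

```python
from dataclasses import dataclass
from datetime import date

# Assumption: the exact account-age cutoff Apple uses is not public;
# 18 years is purely illustrative.
ACCOUNT_AGE_THRESHOLD_YEARS = 18

@dataclass
class AppleAccount:
    created: date            # when the Apple Account was opened
    has_credit_card: bool    # a linked card is treated as an adult signal
    id_scan_verified: bool = False  # explicit government-ID verification

def age_assurance_outcome(acct: AppleAccount, today: date) -> str:
    """Return the access tier an account would land in under the flow
    described in the article: existing signals first, then explicit
    verification, else teen restrictions by default."""
    account_age_years = (today - acct.created).days / 365.25
    if acct.has_credit_card or account_age_years >= ACCOUNT_AGE_THRESHOLD_YEARS:
        return "adult"           # cleared automatically from existing signals
    if acct.id_scan_verified:
        return "adult"           # cleared via government-ID scan
    return "teen-restricted"     # Web Content Filter + Communication Safety on
```

The point the sketch makes is structural: the default branch is restriction, and every path out of it requires handing over a financial or identity credential.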
The system doesn’t offer alternatives. Ofcom praised the rollout in a statement, saying it had coordinated extensively with Apple and others on age assurance under the Online Safety Act: “Apple’s decision that the UK will be one of the first countries in the world to receive new child safety protections on devices is a real win for children and families…We’ve worked closely with Apple and other services to ensure they can be applied in a variety of contexts in order to ensure users are protected. This will build on the strong foundations of the Online Safety Act, from widespread age checks that keep young people away from harmful content, to blocking high-risk sites and stepping up action against child sexual abuse material.”

Notably, Apple wasn’t legally required to do this. The Online Safety Act’s age verification obligations apply to platforms and adult content sites, not to device operating systems or app stores. Apple chose to go further, and Ofcom chose to celebrate it.

The reality of the UK’s wider age verification push is that it hasn’t worked. VPN usage spiked dramatically when the Online Safety Act came into force, with NordVPN reporting a 1,000% increase in UK sign-ups and Proton VPN seeing a 1,400% jump in the first days after enforcement. Some users bypass facial age-scanning on websites by holding a photo of an older person to the camera.

The UK isn’t the endpoint. Apple has been watching age verification legislation build momentum globally, with US federal pressure, state-level requirements beginning in Utah, and ongoing industry lobbying all pointing in the same direction. iOS 26.4 makes the UK a test case, and the system designed here will likely expand if other jurisdictions get their way.

The post Apple Forces UK iPhone Age Checks in iOS 26.4 appeared first on Reclaim The Net.

Missouri v. Biden Consent Decree: US Government Admits Pressuring Social Media Platforms to Censor Protected Speech

The Trump administration and the plaintiffs in Missouri v. Biden signed a consent decree on Monday, ending one of the most significant First Amendment lawsuits in recent memory with a formal, court-enforceable admission: the federal government pressured social media platforms to silence protected speech, and it cannot do so again. We obtained a copy of the consent decree for you here.

The decree lands at the end of years of litigation that began when the States of Missouri and Louisiana, joined by Gateway Pundit publisher Jim Hoft, Dr. Aaron Kheriaty, and activist Jill Hines, filed suit alleging that Biden administration officials had run what their legal filings described as “a coordinated censorship operation emanating from the highest levels of government.” The lawsuit survived a Supreme Court ruling in 2024 that blocked a preliminary injunction on standing grounds, with the majority explicitly declining to rule on the merits. The case returned to the district court in Louisiana, discovery continued, and eventually both sides concluded that settling was preferable to prolonged litigation.

The consent decree prohibits the Surgeon General, the CDC, and CISA from taking any action, formal or informal, direct or indirect, to threaten Facebook, Instagram, X, LinkedIn, or YouTube with punishment unless those platforms delete content containing protected speech. The decree also bars those agencies from unilaterally directing or vetoing the platforms’ content moderation decisions. The agreement runs for ten years and is enforceable by the named plaintiffs if violated. The decree’s preamble makes plain what the government is conceding.
Quoting President Trump’s Executive Order 14149, signed on his first day back in office, the document states that the previous administration had “[t]rampled free speech rights by censoring Americans’ speech on online platforms, often by exerting substantial coercive pressure on third parties, such as social media companies, to moderate, deplatform, or otherwise suppress speech that the Federal Government did not approve.” It continues: the Federal Government “infringed on the constitutionally protected speech rights of American citizens across the United States in a manner that advanced the Government’s preferred narrative about significant matters of public debate.” The government signed that. On the question of “misinformation,” the decree is equally direct. The parties agreed that “government, politicians, media, academics, or anyone else applying labels such as ‘misinformation,’ ‘disinformation,’ or ‘malinformation’ to speech does not render it constitutionally unprotected.” Citing the Supreme Court’s United States v. Alvarez, the agreement acknowledges that some false statements are inevitable in open public discourse, and the First Amendment protects that space. The real scope of what the Biden administration built has been documented through the litigation’s discovery process. Government agencies and the White House directed social media platforms to remove viewpoints conflicting with federal messaging on COVID-19, the 2020 election, and the Hunter Biden laptop story. The FBI ran weekly calls with major tech companies ahead of the 2020 election. At its peak, the government’s real-time content monitoring flagged 2.5 percent of all tweets on Twitter as “potential misinformation.” In a single request, the FBI demanded that Twitter delete 929,000 tweets it characterized as foreign speech. Mark Zuckerberg publicly acknowledged in 2024 that the pressure campaign existed and that he regretted Facebook’s participation in it. 
The decree is, without question, a limited instrument. It covers only the named plaintiffs’ social media accounts on those five platforms, and only those three agencies. It does not bind every federal department or protect every American’s posts. John Vecchione, senior litigation counsel at the New Civil Liberties Alliance and one of the plaintiffs’ attorneys, offered his own accounting of what four years of litigation produced: “This case began with a suspicion that blossomed into fact, that led to Congressional hearings and an Executive Order that government censorship of Americans’ social media posts should end. Freedom of speech has been powerfully preserved by our clients, past and present, who initiated this suit.” A consent decree is only as useful as the court’s willingness to enforce it and the plaintiffs’ ability to detect violations and bring them to court promptly. The decree gives the named agencies fifteen business days to remedy any identified violation before the plaintiffs can seek court relief, and limits judicial remedies to retracting the offending statements and voiding their effect. That is a constrained enforcement mechanism, though it is a real one. Vecchione has indicated he will publish an op-ed explaining the mechanics of how future enforcement would work, and NCLA notes the plaintiffs retain the right to return to court if the government violates the terms. What is genuinely significant here is the admission itself. The government of the United States has agreed, in a binding legal document, that labeling speech “misinformation” does not strip it of constitutional protection, that coercing private companies to delete protected posts violates the First Amendment, and that the previous administration did exactly that. For years, official channels denied the operation existed. Congressional hearings documented it anyway. An executive order condemned it. 
Now, a consent decree formally acknowledges it and places enforceable limits on its recurrence, at least for the plaintiffs who spent four years litigating to get this far. The consent decree awaits final approval from Judge Terry Doughty of the Western District of Louisiana.

Ohio’s Second Attempt at Adult Website Age ID Verification Advances

Ohio is taking a second shot at forcing adult websites to verify users’ ages, and this time the legislature is trying to close the legal escape route that let those sites walk away from the first attempt. The Innocence Act, House Bill 84, passed the Ohio House on March 18 and moved to the Senate the following day. We obtained a copy of the bill for you here.

The bill requires any company that “sells, delivers, furnishes, disseminates, provides, exhibits, or presents any material or performance that is obscene or harmful to juveniles on the internet” to deploy age verification, with no carve-outs for platforms that host third-party content. That carve-out is exactly the shelter Aylo, Pornhub’s parent company, claimed under Ohio’s original age verification law. Section 230 of the Communications Decency Act shields platforms from liability for content posted by their users, and Aylo argued that hosting user-generated content made it an “interactive computer service” under that definition, exempting it from Ohio’s age-gating requirements. The argument worked: the original law’s language mirrored the federal statute closely enough that Aylo and other adult platforms sidestepped enforcement entirely.

HB 84 rewrites those definitions to cut off that route. It also replaces the criminal penalties from an earlier version of the bill, which included misdemeanor charges for minors who bypassed content blocks, with civil fines reaching $100,000 per day for noncompliance. Enforcement falls to Ohio Attorney General Dave Yost, whose office worked with Republican state Reps. Steve Demetriou and Josh Williams on the bill’s drafting. The measure passed the House Technology and Innovation Committee unanimously before advancing to a floor vote, and a path to Governor Mike DeWine’s signature looks clear.
The age verification these laws require is worth examining directly. To access legal content as an adult, users must submit identity documents, biometric data, or other credentials to platforms or third-party verification services. That data then exists somewhere, held by someone, subject to breach, subpoena, and uses that weren’t disclosed at the point of collection. The stated goal is to protect children. The actual mechanism is building a database of adults who watch pornography, linked to a verifiable identity.

Demetriou introduced an earlier Innocence Act version that imposed criminal penalties on minors who circumvented age blocks, a provision that treated teenagers as criminals for doing what teenagers do online. That’s gone from HB 84. What remains is the identity verification infrastructure itself, framed as child protection while functioning as a surveillance requirement for adult content consumption. Ohio isn’t alone in pursuing this, but it is among the states most determined to make it work regardless of the legal obstacles that keep appearing in the way.

GrapheneOS Defies Age Verification Surveillance Laws, Vowing to Protect User Privacy Worldwide

GrapheneOS has a simple answer to the wave of age verification laws moving through US state legislatures and already live in Brazil: no. The privacy-focused Android fork announced last Friday that it won’t implement the age data collection these laws demand. “GrapheneOS will remain usable by anyone around the world without requiring personal information, identification, or an account,” the project stated. “If GrapheneOS devices can’t be sold in a region due to their regulations, so be it.” That’s a blunter response than most OS developers are willing to give, and it’s worth understanding what it’s actually refusing.

Brazil’s Digital ECA (Law 15.211) came into force on March 17, hitting OS providers with fines of up to R$50 million (roughly $9.5 million) per violation for failing to build age verification into device setup. California’s Digital Age Assurance Act, AB-1043, signed by Governor Newsom in October 2025 and effective January 1, 2027, goes further: it requires every OS provider to collect a user’s age or date of birth during account setup, then push that data to app stores and developers through a real-time API. Colorado’s SB26-051 cleared the state senate on March 3 with similar demands. The architecture these laws collectively envision is an age-linked identity layer baked into the operating system itself, present before you’ve opened a single app.

GrapheneOS is developed by the GrapheneOS Foundation, a registered Canadian nonprofit. California’s AB-1043 carries civil penalties of up to $2,500 per affected child for negligent violations and $7,500 for intentional ones, enforced by the state attorney general. The Canadian nonprofit status provides some distance but not a guarantee.
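The real-time API AB-1043 envisions has no published wire format, but the mandated data flow (the OS collects a birthdate at setup, then pushes a coarse age signal downstream to app stores and developers) can be sketched. Every field name and bracket boundary below is an assumption for illustration, not the statute's specification.

```python
import json

# Hypothetical sketch of an OS-to-app-store age signal. The bracket
# boundaries (under 13 / 13-15 / 16-17 / 18+) and JSON field names are
# illustrative assumptions, not AB-1043's actual format.

def age_signal(user_id: str, birth_year: int, current_year: int) -> str:
    """Bucket a date of birth into the coarse age bracket the OS would
    push downstream, rather than forwarding the raw birthdate itself."""
    age = current_year - birth_year
    if age < 13:
        bracket = "under_13"
    elif age < 16:
        bracket = "13_15"
    elif age < 18:
        bracket = "16_17"
    else:
        bracket = "18_plus"
    return json.dumps({"user": user_id, "age_bracket": bracket})
```

Even in this minimal form, the privacy concern the article raises is visible: the signal is keyed to a persistent user identifier, so every downstream recipient ends up holding an age-linked identity record it never had before.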
The stakes grew more concrete after GrapheneOS and Motorola announced a partnership at MWC on March 2, bringing the hardened OS to future Motorola hardware and ending GrapheneOS’s long exclusivity to Google Pixel devices. A GrapheneOS-powered Motorola phone is expected in 2027. Once a major hardware manufacturer ships devices with GrapheneOS pre-installed, those products need to comply with local regulations in every market where they’re sold, or Motorola will have to restrict sales geographically. The defiant stance that’s easy for a nonprofit software project becomes a commercial problem for a global device manufacturer.

GrapheneOS isn’t alone in refusing. The developers of DB48X, an open-source calculator firmware, recently issued a legal notice stating their software “does not, cannot, and will not implement age verification.” MidnightBSD went further, updating its license to block users in Brazil entirely. The refusals come from projects that share a core belief: building government-mandated surveillance infrastructure into software is worse than losing market access. Over 400 computer scientists signed an open letter arguing that these laws build surveillance architecture without meaningfully protecting children. The real output is a mandatory data pipeline connecting OS providers, app stores, and developers, with real-time ID signals tied to device setup and no clear answer to what that infrastructure gets used for in the future.

FC Barcelona Fined for Privacy Violations Over Biometric Data Collection

FC Barcelona got fined €500,000 ($579,219) for scanning the faces and recording the voices of over 100,000 members without doing the legal homework first. Spain’s data protection authority, the AEPD, found the club had deployed biometric identity verification during a membership census update and processed all of it without a valid Data Protection Impact Assessment. Members renewing their details remotely were required to either submit a facial scan through their device camera or record their voice. Both systems were live, both were processing biometric data at scale, and the documentation Barcelona produced to justify any of it didn’t meet the bar the GDPR sets for high-risk processing.

Article 35 of the GDPR requires organizations to conduct a DPIA before deploying any system likely to create a high risk for individuals. Biometric data used for identification qualifies automatically. Processing that touches more than 100,000 people, including minors, qualifies. Using new technologies qualifies. Barcelona’s system hit all three. The AEPD concluded the club’s documentation was missing the essential components of a genuine assessment: no real necessity and proportionality analysis, and no adequate evaluation of what the processing actually risks for the people whose faces and voices it captured.

The AEPD’s decision in case PS-00450-2024 makes one point with particular clarity: consent doesn’t substitute for a DPIA. Barcelona had asked members to agree to biometric data collection, and members had agreed. That agreement is legally irrelevant to the separate procedural obligation to assess risk before the system goes live. The GDPR treats them as independent requirements; satisfying one doesn’t discharge the other.
What a valid DPIA actually requires, according to the decision, is a clear description of the processing, a genuine necessity and proportionality assessment, a detailed risk evaluation, proposed mitigation measures, and a residual risk assessment after mitigations are applied. Organizations that generate DPIA documentation as a compliance checkbox, without substantively working through those questions, remain exposed regardless of what consent language they put in front of users.

The appetite for facial biometric data has become near-universal across industries, and the Barcelona case lands at a moment when that appetite is accelerating faster than the rules meant to govern it. Banks deploy facial recognition for customer onboarding. Retailers use it for age verification at the point of sale. Hospitals scan patients at check-in. Stadiums have replaced tickets with face scans. The framing is always convenience, security, or safety. But organizations across every sector are building permanent biometric records of the people they serve, often without seriously asking whether they need to.

Facial recognition now accounts for nearly 30% of biometric authentication usage among American users, with roughly 131 million daily interactions processed across the country. More than half the US population engages with recognition systems regularly. That infrastructure touches the daily lives of hundreds of millions of people, most of whom have little meaningful understanding of what’s being captured or where it goes.

The fundamental problem with all of this is one organizations consistently downplay: facial biometrics cannot be changed if compromised, creating a permanent vulnerability that persists throughout an individual’s lifetime. Unlike passwords, credit cards, or even social security numbers, facial features are permanent identifiers that cannot be reset or replaced. When a company stores your face geometry and gets breached, you don’t get to change your face.
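The five components the decision lists lend themselves to a plain checklist. A minimal sketch follows, with the caveat that GDPR compliance is a legal judgment rather than a boolean, and every field name here is invented for illustration:

```python
# Hypothetical checklist of the DPIA components the AEPD decision lists.
# This only flags obviously missing sections; it cannot judge whether a
# present section is substantive. All keys are illustrative assumptions.

REQUIRED_DPIA_SECTIONS = (
    "processing_description",     # clear description of the processing
    "necessity_proportionality",  # genuine necessity and proportionality assessment
    "risk_evaluation",            # detailed evaluation of risks to individuals
    "mitigation_measures",        # proposed measures to address those risks
    "residual_risk_assessment",   # risk remaining after mitigations are applied
)

def missing_dpia_sections(dpia: dict) -> list[str]:
    """Return the required sections that are absent or empty, i.e. the kind
    of gaps the regulator found in Barcelona's documentation."""
    return [s for s in REQUIRED_DPIA_SECTIONS if not dpia.get(s)]
```

The checkbox warning in the article applies to the sketch itself: a document that merely fills every key would pass this check and still fail the substantive assessment the AEPD demands.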
The exposure is for life.