Reclaim The Net Feed

@reclaimthenetfeed

Nine Bureaucracies Walk Into Your Browser and Ask for ID

reclaimthenet.org

If you're tired of censorship and dystopian threats against civil liberties, subscribe to Reclaim The Net.

By the time you read this, there’s a decent chance that somewhere, quietly and with a great deal of bureaucratic back-patting, someone is trying to figure out exactly how old you are. Not because they’re planning a surprise party. Not because you asked them to. But because the nine horsemen of the regulatory apocalypse have decided that the future of a “safe” internet depends on everyone flashing their ID like they’re trying to get into an especially dull nightclub.

This is the nightmare of “age assurance,” a term so bloodlessly corporate you can practically hear it sighing into its own PowerPoint: a sprawling, gelatinous lump of biometric estimation, document scans, and AI-ified guesswork, stitched together into one big global initiative under the cheery-sounding Global Online Safety Regulators Network, or GOSRN. Catchy.

Formed in 2022, presumably after someone at Ofcom had an especially boring lunch break, GOSRN now boasts nine national regulators, including the UK, France, Australia, and that well-known digital superpower, Fiji, who have come together to harmonize policies on how to tell whether someone is too young to look at TikTok for adults. The group is currently chaired by Ireland’s Coimisiún na Meán.

This month, this merry band of regulators released a “Position Statement on Age Assurance and Online Safety Regulation.” We obtained a copy of the document for you here. Inside this gem of a document is a plan to push shared age-verification principles across borders, including support for biometric analysis, official ID checks, and the general dismantling of anonymity for the greater good of child protection. It insists that all of this should be “accurate, reliable, fair, and non-intrusive.” The pitch, of course, is that it’s all for the kids. But it’s starting to look suspiciously like a surveillance infrastructure.
More: The Digital ID and Online Age Verification Agenda

Most of these tools rely on facial recognition, third-party credential brokers, and databases that not only guess your age but also remember you. The moment you hand over your ID to prove you’re 18, that information is out there, possibly shared, possibly stored, and quite possibly turned into a marketing profile.

And once this machinery exists, it won’t stop at pornography. Mission creep is the only thing in government that’s ever truly efficient. If they can check your ID to block adult content, they can check it to block content they decide is “psychologically harmful,” “emotionally damaging,” or “financially risky.” According to GOSRN’s own terms, those categories include anything that might affect your “social,” “emotional,” or even “psychological” safety. Which is basically everything.

Part of the plan is to make all these systems “interoperable,” which is just regulator-speak for “you’ll only need to have your soul scanned once, and then everyone gets to share it.” The goal is to stop companies from “forum shopping,” or in other words, choosing to operate in countries that don’t insist on scanning your face every time you log in.

Ofcom, the UK regulator, is fully on board and already flexing its new muscles. Under the Online Safety Act, it has launched 83 investigations and started handing out fines to websites that fail to deliver “highly effective age assurance.” This is part of what they call “Safety by Design,” but it is actually a regulatory philosophy that wants everything on the internet pre-chewed, sterilized, and algorithmically approved. Anonymity? That’s for criminals and weirdos, didn’t you know? Real people sign in with their real names, linked to their real faces, and behave like good little users in the polite, sterile techno-state.

GOSRN might say it’s committed to human rights, democracy, and the rule of law. But its definition of “online harm” is so elastic it could be used to classify sarcasm as a threat to national security. And once everyone agrees on the need for interoperable, identity-based age gates, we won’t just have lost our privacy. We’ll have signed it away, smiling politely, because we were told it was for the children.

The post Nine Bureaucracies Walk Into Your Browser and Ask for ID appeared first on Reclaim The Net.

New Washington Legislation Makes 3D Printers Surveillance Tools

reclaimthenet.org

Washington lawmakers are advancing two proposals that would expand the state’s control over how 3D printers and similar equipment can be used, citing the spread of untraceable firearms as justification. The measures have raised concern among those who see them as an overreach that risks curbing lawful innovation and digital design freedoms.

House Bill 2321 would require that all 3D printers sold in Washington after July 1, 2027, include built-in safeguards that detect and block attempts to produce firearms or firearm components. We obtained a copy of the bill for you here. The measure defines these safeguards as “a firearms blueprint detection algorithm,” which must be able to reject such print requests “with a high degree of reliability” and prevent users from disabling or bypassing the control system.

To meet the new rule, manufacturers could either embed the detection algorithm directly in a printer’s firmware, integrate it through preprint software, or use an authentication process that screens design files before printing. Companies that fail to comply could be charged with a class C felony, facing penalties of up to five years in prison and a $15,000 fine.

A related bill, House Bill 2320, would prohibit the use of 3D printers, CNC milling machines, or other tools to produce unregistered firearms. It would also make it illegal to distribute or possess digital files capable of creating gun parts. The bill targets both the physical manufacturing of ghost guns and the online exchange of design data used to make them.

Representative Osman Salahuddin, who introduced the legislation, said it is meant to close a dangerous gap in state law. “With a 3D printer that costs a few hundred and a digital file that can be downloaded online, someone can now manufacture an untraceable firearm at home,” he said.
“No background check, no serial number, and no accountability.”

The main problem with all of this is that the algorithm must be built so it cannot be bypassed by a technically skilled user, effectively outlawing the ability to modify a device’s firmware or gain root access to it. In short, tinkering with your own hardware could be treated as a criminal act.

Once that model is codified in law, manufacturers would gain a powerful excuse to roll out closed systems that require server authentication or proprietary software to function. The 3D printer would no longer be a tool you own; it would become a managed service, dependent on the company’s servers and subject to its terms. When the server is shut down or the software license expires, the device could simply stop working. There are no provisions in the bill to guarantee continued functionality or support when the manufacturer moves on.

This kind of policy invites the same behavior already seen in other industries: forced obsolescence disguised as security. Consumers have watched other “smart” devices turn useless when companies went bankrupt or changed their business models. Embedding mandatory authentication systems into 3D printers guarantees that the same pattern will repeat, except this time, companies can claim they are acting under government mandate.

It also poses serious legal and practical problems. Many 3D printers rely on open-source firmware governed by licenses that explicitly permit modification. Mandating that these systems include unremovable restrictions directly conflicts with those licenses and makes compliance impossible. The bill’s vague definition of “three-dimensional printer” even extends to CNC mills, lathes, and other fabrication tools, threatening far more than just hobbyist printers.

The legislation gives the state’s attorney general broad authority to expand what must be blocked in the future, without requiring further legislative approval.
That creates a moving target, where new categories of restricted designs could be added at any time, leaving both users and manufacturers scrambling to comply.

Supporters of the bill frame it as a matter of public safety, but the mechanism it creates (mandatory, remote-controlled restriction systems) would normalize the idea that ownership of physical devices is conditional. It would turn open hardware into closed platforms and give manufacturers a built-in justification to lock down, disable, or replace products at will.

The real question is not whether people should be allowed to 3D print guns; it is whether the state should empower corporations to decide what a person is allowed to do with their own machine. The structure of this law would make permanent the shift from ownership to permission, one firmware update at a time.
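To make the compliance problem concrete, here is a minimal sketch in Python of the “screens design files before printing” route the bill allows. Everything in it (the file name, the blocklist, the screening function) is hypothetical; the bill prescribes no implementation. It illustrates why a naive hash-matching filter cannot deliver the mandated “high degree of reliability”: changing a single byte of a design file defeats the check.

```python
import hashlib

# Hypothetical blocklist of known firearm-design file hashes.
# Real "blueprint detection" would need to recognize geometry, not
# file identity; this sketch shows how easily an identity check fails.
BLOCKED_HASHES = {
    hashlib.sha256(b"example_receiver.stl contents").hexdigest(),
}

def screen_design(file_bytes: bytes) -> bool:
    """Return True if the print job should be allowed to proceed."""
    return hashlib.sha256(file_bytes).hexdigest() not in BLOCKED_HASHES

original = b"example_receiver.stl contents"
tweaked = original + b"\n"  # one trivial edit changes the hash entirely

print(screen_design(original))  # False: the exact file is blocked
print(screen_design(tweaked))   # True: the same design slips through
```

Recognizing a design by its actual geometry, rather than its bytes, is an open computer-vision problem, which is one reason critics doubt any algorithm can meet the statutory reliability bar without sweeping in lawful designs.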

FBI Accessed Encrypted PCs Using Microsoft Recovery Keys

reclaimthenet.org

Federal investigators obtained access to encrypted computers for the first time through Microsoft’s own recovery keys, a move that has intensified long-standing concerns about how much control the company retains over user data.

The development emerged from United States v. Tenorio, a fraud case in Guam tied to alleged misuse of pandemic unemployment funds. Investigators believed three laptops contained evidence of the scheme. When they discovered the machines were protected with BitLocker, the encryption system built into Windows, they turned to Microsoft.

BitLocker is designed to shield all files on a drive by scrambling the data so it can’t be read without a recovery key. Since Windows 10, the system has been enabled automatically on many new PCs. When users sign in with a Microsoft account, those recovery keys are usually uploaded to Microsoft’s servers for convenience. That same design, however, quietly gives the company the technical ability to hand those keys over when faced with a lawful demand.

Microsoft confirmed that it complied with the FBI’s warrant, saying it provides recovery keys only when required by law. “While key recovery offers convenience, it also carries a risk of unwanted access, so Microsoft believes customers are in the best position to decide… how to manage their keys,” a spokesperson said. According to the spokesperson, the company receives roughly 20 such requests each year, though it cannot always fulfill them because many users never upload their keys to the cloud.

The disclosure is believed to be the first known case in which Microsoft has given an encryption key to US law enforcement. For years, company engineers had maintained that BitLocker contained no “backdoors” or secret access methods. One engineer publicly stated in 2013 that he had refused a government request to add such capabilities.
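The escrow arrangement described above can be illustrated with a toy model. This is a deliberately simplified sketch, not how BitLocker is implemented: the XOR keystream stands in for real encryption (BitLocker uses AES), and every name and value here is invented. The point is only the trust structure: the drive is sealed under a volume key, the volume key is wrapped under a recovery key, and whoever holds the escrowed recovery key can unwrap everything.

```python
import hashlib
import secrets

def xor_keystream(key: bytes, data: bytes) -> bytes:
    """Toy stream cipher: XOR data against a SHA-256-derived keystream.
    Illustration only; encryption and decryption are the same operation."""
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        out.extend(hashlib.sha256(key + counter.to_bytes(8, "big")).digest())
        counter += 1
    return bytes(d ^ k for d, k in zip(data, out))

# The drive contents are encrypted under a random volume key...
volume_key = secrets.token_bytes(32)
disk = xor_keystream(volume_key, b"contents of the seized laptop")

# ...and the volume key is wrapped (encrypted) under a recovery key,
# which is the secret that gets escrowed to the Microsoft account.
recovery_key = secrets.token_bytes(32)
wrapped = xor_keystream(recovery_key, volume_key)

# Anyone holding the escrowed recovery key, whether the owner or a
# provider answering a warrant, can unwrap the volume key and read
# every file on the drive.
recovered_volume_key = xor_keystream(recovery_key, wrapped)
print(xor_keystream(recovered_volume_key, disk))
```

The design choice that matters is not the cipher but who holds the recovery key: once a copy sits on a provider’s servers, the provider, not the user, decides when it is disclosed.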
The incident sets Microsoft apart from other major technology companies that have built systems designed to prevent themselves from accessing user encryption keys. Apple, for instance, faced down the FBI in 2016 after the agency tried to compel it to unlock the iPhone of one of the San Bernardino shooters. Apple refused, and the FBI ultimately paid a private contractor to break into the device. Google and Meta have taken similar steps to encrypt user backups with keys the companies cannot retrieve.

By contrast, Microsoft’s account-based key storage keeps the door open for government access. When encryption keys are saved to Microsoft’s cloud, they are effectively within reach of legal orders from any government with jurisdiction over the company. Users can avoid this by setting up a local Windows account and manually storing their keys offline, but that process has become less visible and less encouraged in newer versions of Windows.

Court filings confirm that the search warrant was executed and that prosecutors later disclosed data from one defendant’s computer referencing BitLocker keys Microsoft had provided. Charissa Tenorio, one of the defendants, has pleaded not guilty.

Without Microsoft’s cooperation, the FBI would have faced steep technical barriers. Internal documents from Homeland Security Investigations have previously acknowledged that agents “do not possess the forensic tools to break into devices encrypted with Microsoft BitLocker, or any other style of encryption.”

The Guam disclosure reveals the fragility of user control when cloud-based systems hold decryption power. Once Microsoft complied with the warrant, investigators gained full visibility into each device’s contents, something privacy advocates view as far beyond the intent of a targeted search.
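For readers who want to check their own setup, Windows ships a command-line tool, manage-bde, that lists a volume’s key protectors; this makes it possible to keep the recovery key offline rather than rely on cloud escrow. The commands below are a sketch (run from an elevated Command Prompt; the E: drive is a hypothetical USB stick), and any copies already escrowed to a Microsoft account must be removed separately through the account’s device recovery-key page.

```bat
rem List key protectors for the C: drive. The "Numerical Password"
rem entry is the 48-digit BitLocker recovery key.
manage-bde -protectors -get C:

rem Redirect the output to removable media to keep an offline copy
rem instead of relying on cloud escrow.
manage-bde -protectors -get C: > E:\bitlocker-recovery.txt
```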

UK House of Lords Votes to Extend Age Verification to VPNs

reclaimthenet.org

The UK House of Lords has voted to extend “age assurance” requirements, effectively age verification mandates, to virtual private networks (VPNs) and a wide range of online platforms under the Children’s Wellbeing and Schools Bill. The decision deepens the reach of the already-controversial Online Safety Act, linking child safety goals to mechanisms that could have severe effects on private communication and digital autonomy.

Under the existing Online Safety Act framework, “user-to-user services” include almost any online platform that enables individuals to post, share, or interact with content from others. This definition covers social networks, messaging apps, forums, and online gaming services. Only a few forms of communication, such as email, SMS, MMS, and one-to-one live voice calls, are explicitly excluded.

While political messaging around the vote often described the move as a “social media ban for under-16s,” the actual scope is considerably wider. In effect, most interactive online platforms would now need to collect and verify age data from users, even where those services are not primarily aimed at children. This represents a major expansion of identity checks across digital infrastructure once considered neutral or privacy-protective, and one of the most disciplinarian proposals in the West.

Two key amendments advanced during the Lords debate on January 21. Amendment 92 (“Action to Prohibit the Provision of VPN Services to Children in the United Kingdom”) requires VPNs that are “offered or marketed to persons in the United Kingdom” or “provided to a significant number of persons” to implement age assurance for UK users. The measure passed by 207 Content votes to 159 Not Content votes.
Amendment 94a (“Action to Promote the Wellbeing of Children in Relation to Social Media”) mandates that all regulated user-to-user services introduce age assurance systems to prevent under-16s from “becoming or being users.” This proposal passed with 261 Content votes to 150 Not Content votes. Both amendments will proceed to the Bill’s next stage, the third reading in the House of Lords.

Two other amendments, both more technologically intrusive, were discussed but did not advance. Amendment 93, introduced by Lord Nash, would have compelled smartphone and tablet manufacturers, distributors, and importers to install “tamper-proof system software which is highly effective at preventing the recording, transmitting (by any means, including livestreaming) and viewing of CSAM using that device.” The only plausible way to enforce such a measure would be constant, automated inspection of every photo, video, and stream on a device. This form of surveillance would have converted personal devices into continuous content monitors, raising severe privacy and accuracy concerns, including the potential for false positives.

Lord Nash stated: “On Amendment 93, I have had a constructive discussion with Ministers on this issue and more discussions are in progress, so I will not push that to a vote today.”

Amendment 108, proposed by Lord Storey, would have required user-to-user services “likely to be accessed by children” to set their own minimum age thresholds and use age assurance to enforce them. He argued that a single blanket ban under Amendment 94a was overly rigid. “Having different minimum ages for different platforms would be a better solution,” he said, maintaining that his version would be more effective in practice. Neither of these amendments passed, leaving Amendments 92 and 94a as the only ones to advance.

The discussion highlights a deepening push within UK legislation to merge digital identity checks with online participation.
While described as safeguarding children, the changes embed a new layer of identity verification across tools once used for privacy, such as VPNs. These services, designed to conceal personal browsing data and protect against profiling, would now face obligations to verify who their users are. This is a contradiction that could erode one of the few remaining shields for private internet use.

For now, the most invasive surveillance measure, client-side scanning, has been set aside. However, the fact that it was seriously considered indicates continuing interest in embedding scanning mechanisms directly into personal devices. Whether similar proposals reappear during the third reading remains to be seen.