Reclaim The Net Feed

UK House of Lords Backs Under-16 Social Media Ban, Fueling Fears of Digital ID and Lost Anonymity

So here we are again, trying to fix society with a big red button marked BAN IT, which, naturally, does absolutely nothing except make politicians feel as if they’ve done something useful. This time, the target is children using social media, because if there’s one thing that unites the British political class, it’s the belief that they, and only they, can raise your children better than you can.

The House of Lords, a place where hereditary titles and vague indignation go to drink tea, has decided to throw its collective wig behind an Australian-style ban on under-16s using social media. And before you ask: yes, Australia, the same country that once tried to censor the entire internet with a blacklist that would have made a North Korean censor blush. Actor Hugh Grant — yes, the same floppy-haired romantic from your mother’s favorite movie — has been trotted out in support, because nothing says “complex digital policy” quite like a man whose greatest brush with technology was probably a Nokia 3210.

Supporters of the ban are throwing around language like “catastrophic harm” and “overwhelming evidence” as though Instagram were made of asbestos and TikTok came with a pack of cigarettes. Lord Nash, former schools minister and current oracle of doom, says the vote “begins the process of stopping the catastrophic harm that social media is inflicting on a generation.” Apparently, everyone from “medical professionals” to “intelligence officers” is in agreement. Of course, those two groups have never been wrong before.

Parents, we’re told, are in an “impossible position,” expected to outwit attention-hacking Silicon Valley engineers using nothing more than household Wi-Fi passwords and the vague threat of “consequences.” It’s true that many parents are overwhelmed, but is the answer to hand the steering wheel to the government and give it a shortcut to the dystopian world it seems intent on delivering?

Here’s the catch. Australia’s ban is about more than kicking 14-year-olds off Snapchat. It comes with digital ID checks that would make the Stasi do a double-take. Users of all ages now have to prove who they are just to watch cat videos or argue with strangers on Reddit. What better way to teach kids about internet safety than by normalizing mass identification and the elimination of anonymity?

And this, of course, is the real story: not just a nanny-state effort to save the children, but a quietly expanding system of mandatory identity verification. A ban on 13-year-olds posting TikToks becomes a system where everyone has to show ID to tweet about potholes or join a Facebook group about recycling.

This is where it gets properly dangerous. Because the moment your online speech is tethered to a verified identity, the freedom to speak without fear starts to dissolve. And in a country like the UK, where people have been arrested (arrested!) for saying things that offend on social media, it’s not paranoia to wonder what this system might be used for next.

You don’t need to be Julian Assange to understand that linking your identity to every comment, like, or angry emoji is a terrible idea. People say foolish things online. They rant, they joke, they vent. It’s part of the human condition. But if every post is tied to a government-approved digital ID, who’s going to risk saying something controversial?

And let’s not kid ourselves. Once this ID system exists, it won’t stop at the kids. Governments, advertisers, law enforcement, and data brokers will all want a taste. If the UK implements a similar scheme, it could mean handing over your passport number just to watch someone play Minecraft.

Even the tech giants, those digital Bond villains with privacy policies longer than the Old Testament, are a bit uneasy. American platforms, governed by the First Amendment, may push back, which raises the delicious prospect of the UK trying to enforce these laws by threatening Mark Zuckerberg with a sternly worded email.

Behind all the child-safety sloganeering lies something else: a growing state appetite for control. The idea that you should need permission, proof of age, and proof of identity just to access a website is the stuff of dystopian fiction, only now it’s dressed up in concern for “wellbeing.”

OpenAI’s AI Age Prediction System Turns Age Verification into Widespread User Surveillance

Efforts to enforce age restrictions online are now reshaping how major tech platforms monitor their users. OpenAI’s latest addition to ChatGPT, a system that predicts whether someone is under 18 by studying how they use the app, shows how child-safety rules and surveillance-based data collection are becoming closely linked.

The company says its new “age prediction model” analyzes a combination of behavioral and account-level data. That includes when a person logs in, how long their account has existed, usage frequency, and their stated age. From those signals, the system estimates whether an account likely belongs to a minor. If the model flags an account as likely belonging to a minor, ChatGPT automatically applies content restrictions designed to limit exposure to material such as self-harm discussions. To regain unrestricted access, flagged users must verify their identity through Persona, an external ID verification company. (A rough sketch of what this kind of scoring flow might look like appears at the end of this article.)

More: From Roblox To The IRS: The Great Biometric Data Grab

Persona’s privacy policy allows it to collect not only information provided directly by users but also data from outside sources, including brokers, marketing partners, and “publicly available sources…such as open government databases.” The company may also gather device identifiers and geolocation details. This arrangement effectively extends surveillance from OpenAI’s internal monitoring to a larger commercial network that links people’s AI activity with personal and location data. In the process of proving age, companies are building detailed behavioral profiles that make constant observation an ordinary part of digital life.

OpenAI describes this approach as a step toward safer experiences for younger users. Yet the method of classifying individuals through behavioral analysis, then requiring identification to override errors, establishes a structure that can easily deepen ongoing monitoring. Once collected, these data points can be combined and retained in ways that go beyond the stated goal of protecting minors.

This trend is unfolding across the wider tech industry. The Federal Trade Commission is investigating how AI chatbots may affect children and teens, and OpenAI has been named in lawsuits, including one related to a teenager’s death. Lawmakers have also pressured other platforms, such as Roblox, which uses Persona, to demonstrate stronger safeguards for minors. Over the past year, OpenAI has introduced parental controls and set up a mental health advisory group to study how AI influences users’ emotions and motivation. The company says its age prediction system will expand to the European Union “to account for regional requirements” and that it plans to refine its accuracy over time.

The push for age verification is evolving into a new model of behavioral tracking, where AI companies quietly build internal profiles of how people interact online. These systems are presented as safety features, yet they depend on the same continuous observation and data aggregation that define modern digital surveillance.
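For readers who want a concrete picture of the flagging flow described above, here is a minimal illustrative sketch in Python. OpenAI has not published its age prediction model; the input signals mirror the ones named in this article (login timing, account age, usage frequency, stated age), but every field name, weight, threshold, and function below is an invented assumption, not OpenAI’s actual system.

# Illustrative sketch only: OpenAI has not published its age prediction
# model. The features mirror the signals described in the article; all
# names, weights, and thresholds are invented for demonstration.
from dataclasses import dataclass

@dataclass
class AccountSignals:
    stated_age: int            # age the user claimed at signup
    account_age_days: int      # how long the account has existed
    sessions_per_week: float   # usage frequency
    late_night_logins: float   # fraction of logins between 22:00 and 06:00

def likely_minor_score(s: AccountSignals) -> float:
    """Return a 0..1 score; higher means 'more likely under 18'."""
    score = 0.0
    if s.stated_age < 18:
        score += 0.6    # self-reported age dominates
    if s.account_age_days < 90:
        score += 0.1    # newer accounts carry less history
    if s.sessions_per_week > 20:
        score += 0.15   # heavy daily use
    if s.late_night_logins > 0.4:
        score += 0.15   # school-night login pattern
    return min(score, 1.0)

def apply_policy(s: AccountSignals, threshold: float = 0.5) -> str:
    """Map the score to the flow the article describes: restrict first,
    then require external ID verification to lift the restriction."""
    if likely_minor_score(s) >= threshold:
        return "restricted: verify identity with ID provider to regain access"
    return "unrestricted"

print(apply_policy(AccountSignals(16, 30, 25.0, 0.5)))   # restricted
print(apply_policy(AccountSignals(34, 900, 5.0, 0.1)))   # unrestricted

However toy the arithmetic, the structural point survives: a behavioral estimate, not a verified fact, decides who is pushed into identity verification to get back to normal use.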

Africa Becomes the Sandbox for Bill Gates and OpenAI’s AI Health Experiment

The Gates Foundation and OpenAI have announced a $50 million initiative to introduce artificial intelligence tools into primary healthcare networks across Rwanda and other African nations by 2028. The project, named Horizon1000, is meant to relieve overwhelmed medical workers and improve access to care, but its approach is renewing questions about how data-driven systems are being tested on vulnerable populations.

At the World Economic Forum in Davos, Bill Gates described the plan as a breakthrough for under-resourced countries. “We aim to accelerate the adoption of AI tools across primary care clinics, within communities and in people’s homes,” he said, calling the technology a possible “game-changer in expanding access to quality care.” The foundation and OpenAI say the tools will help with patient records and clinical evaluations, giving health workers more time and better guidance.

Gates emphasized that the project will “support health workers, not replace them.” He noted that sub-Saharan Africa faces an estimated shortfall of nearly six million health professionals, leaving many in what he called an “impossible situation” where they must “triage too many patients with too little administrative support, modern technology, and up-to-date clinical guidance.”

More: The UN Is Using Africa as a Testing Ground for Controversial Digital ID Systems

Hospitals around the world are already experimenting with artificial intelligence to automate medical notes, summarize consultations, and flag potentially serious symptoms. Systems like ChatGPT and Gemini are now used to generate documentation that once required hours of manual effort. Yet this growing dependence on algorithmic systems in healthcare introduces a layer of risk that goes beyond efficiency. To function, these models rely on immense datasets, often containing personal or identifiable medical information. In regions without strong privacy legislation, the line between helpful automation and invasive data collection can easily blur.

OpenAI’s chief executive, Sam Altman, highlighted the social potential of the technology, saying: “AI is going to be a scientific marvel no matter what, but for it to be a societal marvel, we’ve got to figure out ways that we use this incredible technology to improve people’s lives.” His statement reflects the optimism surrounding AI in medicine, but the implementation context matters.

Africa has become a frequent starting point for large-scale technology pilots funded by global foundations and corporations. From digital identity programs to vaccine logistics, the continent is often chosen for early trials that later influence global health strategies. Gates argues this accelerates innovation where resources are scarce. However, such experiments can also occur in environments where informed consent, data governance, and regulatory oversight are still developing or even non-existent.

More: Inside The Bill Gates and Friends’ Plot To Hardwire AI Into Public Services

The Gates Foundation has said it will monitor and audit the AI models for safety, bias, and accuracy, rolling out the technology gradually and tailoring it to local needs. Rwanda, for example, has established a national health intelligence center to use AI in analyzing data at the community level.

Language remains a persistent challenge. Many leading AI systems are trained primarily on English-language data, which limits their ability to interpret medical terms and symptoms described in local dialects. A 2023 study from the Massachusetts Institute of Technology found that medical questions containing typos or informal phrasing were between 7 and 9 percent more likely to trigger an incorrect recommendation against seeking care, even when the clinical meaning was identical. Such findings illustrate how easily a model’s training data can reproduce inequality. Patients who are not fluent in English or who communicate in non-standard ways risk being misunderstood by the very systems designed to assist them.

TSA Proposes MyTSA PreCheck Digital ID, Integrating Biometrics and Federal Databases

The Transportation Security Administration is reshaping how it verifies the identities of US air travelers, proposing a major update that merges biometric data, mobile credentials, and government authentication platforms into one expanded framework. Published in the Federal Register, the notice outlines a new form of digital identification, the MyTSA PreCheck ID, which would extend the agency’s existing PreCheck program into a mobile environment requiring more detailed data from participants.

More: TSA Fast Track Programs Are a Deal With The Devil

Under the plan, travelers who want to activate the new digital ID on their phones would have to provide additional biographic and biometric details, such as fingerprints and facial imagery, along with the information already collected for PreCheck enrollment. The proposal appears alongside TSA’s recently finalized ConfirmID program, a separate fee-based service designed for passengers who arrive at checkpoints without a REAL ID or another approved credential.

More: A $45 Fee and Three Ways to Lose Your Privacy Before You Fly

TSA is seeking approval from the Office of Management and Budget to revise its public data collection process for trusted traveler programs. The public comment window remains open until March 16. According to the agency, the updates would align PreCheck enrollment with a “modernized” identity infrastructure, consolidating personal and biometric data under a more unified system.

Travelers applying for or renewing PreCheck would continue to provide core information such as name, date of birth, and citizenship status, but the new system would further integrate fingerprints and facial data into DHS databases for continuous identity verification. TSA said these biometrics will be compared with FBI records through the Next Generation Identification system, with ongoing checks conducted under the FBI’s Rap Back service for as long as individuals remain active in the program. In addition, biometric data would feed into DHS’s Automated Biometric Identification System, a database that supports continuous vetting and identity confirmation at airport security points.

Alongside the new mobile ID, TSA is introducing a Customer Service Portal to centralize how travelers manage their program details. Users would log in through Login.gov, the government’s shared authentication service, to upload documents, change preferences, or opt in and out of certain features. The agency also detailed a cooperative arrangement with U.S. Customs and Border Protection that would allow PreCheck data, both biographic and biometric, to be reused for Global Entry processing if travelers choose to participate. TSA says this would cut down on duplication across trusted traveler programs. (A toy sketch of this data flow follows at the end of this article.)

Over the next three years, TSA projects it will process data from more than 25 million people, representing roughly 4.7 million annual administrative hours. Enrollment and renewal fees will remain unchanged: $80 for a new application, $70 for online renewals, and $75 for in-person renewals.

Meanwhile, the updated ConfirmID program is set to begin on February 1. It offers passengers a way to verify their identity for $45 if they reach a checkpoint without proper identification. The process can be initiated online before arriving at the airport. “TSA ConfirmID will be an option for travelers that do not bring a REAL ID or other acceptable form of ID to the TSA checkpoint and still want to fly,” said Adam Stahl, the senior official performing the duties of TSA deputy administrator. He added that the fee structure is meant to discourage travelers from arriving unprepared while ensuring they can still complete their journey.

While TSA presents these changes as a modernization effort, the combination of mobile credentials, biometric retention, and expanded data sharing signals a gradual move toward a more centralized identity model. Travelers are being encouraged to exchange increasing amounts of personal and biological information for convenience at the checkpoint, a tradeoff that continues to reshape what “voluntary” participation means in the context of air travel security.
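To make the fan-out of data sharing easier to see, here is a toy Python sketch of the enrollment flow the notice describes. The system names (Next Generation Identification, Rap Back, DHS’s Automated Biometric Identification System, Global Entry, Login.gov) come from the article; the record structure, field names, and functions are assumptions for illustration, not TSA’s actual design.

# Toy model of the data flow described in the TSA notice. System names
# come from the article; the structure is an assumption for illustration.
from dataclasses import dataclass, field

@dataclass
class PreCheckRecord:
    name: str
    date_of_birth: str
    citizenship: str
    fingerprints: bytes = b""   # new biometric requirement for the mobile ID
    facial_image: bytes = b""
    shared_with: list[str] = field(default_factory=list)

def enroll_mobile_id(record: PreCheckRecord) -> PreCheckRecord:
    """Model the propagation the notice describes: biometrics are checked
    against FBI records and retained for continuous vetting."""
    record.shared_with.append("FBI NGI (initial fingerprint/face comparison)")
    record.shared_with.append("FBI Rap Back (ongoing checks while enrolled)")
    record.shared_with.append("DHS ABIS (continuous vetting and checkpoint ID)")
    return record

def opt_into_global_entry(record: PreCheckRecord) -> PreCheckRecord:
    """Optional reuse of PreCheck data for CBP's Global Entry processing."""
    record.shared_with.append("CBP Global Entry (biographic + biometric reuse)")
    return record

traveler = enroll_mobile_id(PreCheckRecord("Jane Doe", "1990-01-01", "US"))
traveler = opt_into_global_entry(traveler)  # only if the traveler opts in
print("\n".join(traveler.shared_with))

The structural point is the one the article raises: a single enrollment propagates biometric data into several databases that retain and re-check it for as long as the traveler stays in the program.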

Australia Passes New Hate Speech Law, Raising Free Speech Fears

Australia’s federal Parliament has enacted a broad new legal package targeting hate, antisemitism, and extremism, passing the Combatting Antisemitism, Hate and Extremism (Criminal and Migration Laws) Bill 2026 with strong majorities in both chambers. The bill carries several implications for free speech. The House of Representatives approved it 116 Ayes to 7 Noes, and the Senate passed it 38 Ayes to 22 Noes, sending it into law after an expedited process in response to rising public concern about hate-motivated violence. We obtained a copy of the bill for you.

The government framed the legislation as part of its response to the deadly December terror attack at Bondi Beach, which left 15 people dead and focused debate on enhancing public safety and national unity. Attorney General Michelle Rowland and other ministers repeatedly described the new framework as needed to strengthen legal tools against violent hate and extremism. In earlier official statements, Rowland said of the proposal: “Once these laws are passed, they will be the toughest hate laws Australia has ever seen.”

Under the new law, a range of conduct tied to hatred or perceived threat can trigger criminal liability, including organizing, supporting, or being involved with groups that authorities designate as engaging in hate-based conduct. A new framework allows the Australian Federal Police Minister to recommend that such groups be listed as “prohibited hate groups.” Being a member of such a group, or recruiting, training, or financially supporting it, are offenses with penalties that can extend up to 15 years in prison.

The Bill grants the executive branch power to designate organizations as prohibited hate groups through regulation. The decision is made by the AFP Minister, based on reasonable satisfaction, with advice from intelligence agencies. Crucially, the legislation explicitly removes any requirement for procedural fairness in this process. An organization may be listed even if:

- No criminal conviction has occurred
- The relevant conduct occurred before the law existed
- The organization is based outside Australia
- The evidence relied upon is classified and undisclosed

Once an organization is listed, the consequences are severe. Membership, recruitment, training, funding, or providing support becomes a serious criminal offense carrying lengthy prison terms.

The criminal provisions for hate conduct are built around whether specific public behavior would cause a reasonable person in the target group “to feel intimidated, to fear harassment or violence, or to fear for their safety.” This standard can apply even where there is no evidence that anyone actually experienced fear or harm. The definition is tied to subjective perceptions of risk, rather than solely to observable incitement to violence.

The Bill expands the “reasonable person” test used in hate-related offenses. Speech may now be criminal if a so-called reasonable person in the targeted group would consider it offensive, insulting, humiliating, or intimidating. Violence or threats of violence are not required. This standard introduces subjectivity into criminal law. Political speech on immigration, religion, nationalism, or identity frequently causes offense or humiliation to some audiences. Under this framework, harsh criticism, protest slogans, or satire could attract criminal liability based on emotional impact rather than demonstrable harm. A democratic society depends on the ability to offend, challenge, and provoke. Criminalizing offense risks sanitizing public debate into only what is officially acceptable.

The legislation also expands the existing ban on “prohibited hate symbols,” creating criminal offenses for displays of banned symbols unless justified on narrow grounds such as religious, academic, journalistic, or artistic use. While proponents argue this targets conduct that fuels hatred, similar symbolic bans in other jurisdictions, such as Germany, have often ensnared educational or historical contexts.

The Bill also significantly alters existing offenses relating to prohibited symbols. Previously, exemptions for religious, academic, artistic, or journalistic purposes operated as clear carve-outs. Under the new framework, the defendant bears the evidential burden of proving that their conduct was for a protected purpose and was not contrary to the public interest. This reversal matters. The presumption shifts from lawful expression to presumed criminality unless the speaker can justify themselves after the fact. Journalists must demonstrate that they were acting in a professional capacity and that their reporting met an undefined public-interest standard. Artists, educators, and researchers face similar uncertainty. Such burden-shifting mechanisms are well known to chill speech, particularly in investigative journalism and political commentary, where legal certainty is essential.

Migration rules have also been significantly altered. The law expands the Home Affairs Minister’s powers to refuse entry or cancel visas for non-citizens judged to be associated with extremist groups or hate conduct.

Free speech defenders have warned that the combination of low subjective thresholds and expanded administrative powers creates risks that lawful expression, dissenting views, or controversial speech could be swept into criminal or immigration sanctions. They argue that this effect stems from how the law equates emotional or perceived intimidation with actionable hate, a departure from frameworks where provable harm or incitement to violence is required.

Taken together, these provisions produce a powerful chilling effect across political communication, journalism, academic inquiry, religious teaching, and civil association. The cumulative structure of the Bill incentivizes silence, conformity, and disengagement from controversial debate. In a country that relies on an implied, rather than explicit, freedom of political communication, this legislation tests the outer limits of democratic tolerance.