Reclaim The Net Feed

@reclaimthenetfeed

Australia’s High Court to Hear Challenge to Under-16 Social Media Ban and Digital ID Law
reclaimthenet.org

If you're tired of censorship and dystopian threats against civil liberties, subscribe to Reclaim The Net.

Australia’s High Court will hear a major constitutional challenge to the federal government’s new under-16 social media ban and the digital ID requirements that come with it, a case brought in defense of free expression and digital privacy. The case, The Digital Freedom Project Incorporated & Ors v Commonwealth of Australia (S163/2025), was filed by the Digital Freedom Project (DFP), a New South Wales-based organization campaigning against government expansion in the online space, alongside two 15-year-old plaintiffs, Noah Jones and Macy Neyland. We obtained a copy of the filing for you here.

The contested law, the Online Safety Amendment (Social Media Minimum Age) Act 2024, will prohibit anyone under sixteen from holding a social media account from December 10, 2025. Platforms must verify user ages and impose restrictions or face penalties, eroding the privacy of all of their users in the process. The plaintiffs say this “trespasses on the Constitutional right of freedom of political communication and is therefore unlawful.” The statement of claim filed with the High Court describes how the new regime forces all Australians, not only minors, into intrusive age verification systems.

It argues that Macy Neyland “will be required to verify her age and identity to continue using her social media accounts,” which means “she will have her privacy compromised if she is required to upload personal identification (like a passport or driver’s license)” and that “she will lose her online anonymity, making her identifiable to social media companies and potentially others.” Jones, who uses online media for civics and political engagement, claims the law “prevent[s] or substantially burden[s] his ability to access, receive, and participate in political communication online.” The DFP submission argues that logged-out browsing is “not a meaningful substitute for the interactive functions which are integral to and necessary for contemporary modes of free political communication” for young Australians.

The group’s president, NSW MP John Ruddick, framed the issue as one that affects every citizen, not just minors. “This issue should concern every Australian. This ban is disproportionate and will trespass either directly or indirectly upon the rights of every Australian,” he said. “Parental supervision of online activity is today the paramount parental responsibility. We do not want to outsource that responsibility to government and unelected bureaucrats.” Ruddick went further, calling the law “the most draconian legislation of its type in the world,” adding, “Even the Chinese Communist Party would be drooling over this.”

Both young applicants also spoke publicly about their reasons for joining the case. Jones said, “We are the true digital natives and we want to remain educated, robust, and savvy in our digital world. We’re disappointed in a lazy government that blanket bans under-16’s rather than investing in programs to help kids be safe on social media. They should protect kids with safeguards, not silence.” Neyland said, “Young people like me are the voters of tomorrow. Why on earth should we be banned from expressing our views? If you personally think that kids shouldn’t be on social media, stay off it yourself, but don’t impose it on me and my peers. Driving us to fake profiles and VPNs is bad safety policy. Bring us into safer spaces, with rules that work: age-appropriate features, privacy-first age assurance, and fast takedowns. We shouldn’t be silenced. It’s like Orwell’s book 1984, and that scares me.”

The Writ of Summons details the plaintiffs’ position that the legislation is “not reasonably appropriate and adapted” to achieve its stated purpose of protecting children from harm. It outlines less invasive alternatives, including “parental-consent requirements (particularly for 14–15-year-olds), legislating an enforceable duty-of-care/design-safety obligations on providers, limiting the definition of ‘age-restricted social media platforms’…strengthened reporting/takedown standards; and digital literacy programs in schools.” The filing also notes that the law “will have the effect of sacrificing a considerable sphere of freedom of expression and engagement for 13 to 15 year olds” and describes the blanket ban as “an oppressive, overreaching and inappropriate means to achieve the object of child protection.”

Bizarrely, in response to the lawsuit, Communications Minister Anika Wells accused the challengers of trying to intimidate the government. “Despite the fact that we are receiving threats and legal challenges from people with ulterior motives, the Albanese government remains steadfastly on the side of parents and not of platforms,” she said. “We will not be intimidated by threats. We will not be intimidated by legal challenges. We will not be intimidated by big tech. On behalf of Australian parents, we stand firm.”

Supporters of the challenge warn that the law effectively introduces a nationwide identity-check system for accessing social platforms. Such a requirement would erode anonymity and create a precedent for wider digital identification systems, reshaping online participation in ways that reach far beyond the intended age group. The High Court’s ruling will determine whether the government’s approach to online safety can survive constitutional scrutiny, and whether privacy and political communication will remain protected spaces in Australia’s digital democracy.

The post Australia’s High Court to Hear Challenge to Under-16 Social Media Ban and Digital ID Law appeared first on Reclaim The Net.

British Transport Police Launch Facial Recognition Trials in London Stations

Some people, when they want to improve public transport safety, hire more staff, fix the lighting, or maybe even try being on time. The British Transport Police, however, have gone full Black Mirror, deciding the best way to protect you from crime on your morning commute is by pointing cameras at your face and feeding your biometric soul into a machine. Yes, for many Britons, facial recognition is coming to a railway station near them. Smile. Or don’t. It makes no difference. The algorithm will be watching anyway.

In the coming weeks, British Transport Police (BTP) will be trialling Live Facial Recognition (LFR) tech in London stations. It’s being sold as a six-month pilot program, which in government-speak usually means it will last somewhere between forever and the heat death of the universe. The idea is to deploy these cameras in “key transport hubs,” which is bureaucratic code for: “places you’re likely to be standing around long enough for a camera to decide whether or not you look criminal.”

BTP assures us that the system is “intelligence-led,” which doesn’t mean they’ll be targeting shady characters with crowbars, but rather that the cameras will be checking your face against a watchlist generated from police data systems. They’re looking for criminals and missing people, they say. But here’s how it works: if your face doesn’t match anyone on the list, it gets deleted immediately. Allegedly. If it does match, an officer gets a ping, stares at a screen, and decides whether you’re a knife-wielding fugitive or just a man who looks like one. And you have to love the quaint touch of QR codes and signs stuck up around the station letting you know that, yes, your biometric identity is being scanned in real time.
Chief Superintendent Chris Casey would like you to know that “we’re absolutely committed to using LFR ethically and in line with privacy safeguards.” The deployments, we’re told, will come with “internal governance” and even “external engagement with ethics and independent advisory groups.”

As Matthew Feeney from Big Brother Watch put it, without even a hint of sarcasm, which is admirable under the circumstances, “subjecting law-abiding passengers to mass biometric surveillance is a disproportionate and disturbing response.” He’s right. Because this isn’t targeted policing. It’s dragnet surveillance. Feeney continues: “Facial recognition technology remains unregulated in the UK and police forces are writing their own facial recognition rules.” Which is a bit like letting the fox draw up security protocols for the henhouse. Except the fox has facial recognition, and the hens can’t opt out.

Let’s be honest. The police love gadgets. But there’s a difference between using technology to make policing smarter and using it to make policing easier by turning humans into data points. This is a technology that, if misused (and let’s be honest, when has that not happened?), can turn a routine station visit into a Kafkaesque nightmare.

And just when you thought it couldn’t get worse, it turns out this isn’t some quirky BTP one-off. It’s part of a national push. The government is now drawing up official guidance to help police decide when and where to aim their surveillance lasers. Policing minister Sarah Jones proudly announced it during the Labour Party conference, calling live facial recognition “a really good tool.” Like a hammer, one assumes, if the problem is everyone’s face. The Home Office has already splashed cash across seven more regions: Greater Manchester, West Yorkshire, Surrey, Sussex, Bedfordshire, Thames Valley, and Hampshire are all next in line for the big biometric bingo.

In London, the Met’s watchlists have more than doubled since 2020. Tens of thousands of people are scanned every single day. And still there is no specific law governing any of it. Police forces are writing their own rulebooks while Parliament takes a long nap in the corner.

As we’ve previously reported, the system has, of course, already gone wrong. Shaun Thompson, a volunteer working to keep kids out of gangs, was wrongly flagged and stopped outside London Bridge. Despite showing ID and explaining himself, he was threatened with arrest. Now he’s suing. Because if the machine can’t tell a youth mentor from a fugitive, maybe it’s not the public that needs to be scrutinized. Maybe it’s the tech and the people pushing it.

Meta Pushes Canada for App Store Age Verification Laws

Meta is working to convince the Canadian government to introduce new laws that would make age verification mandatory at the app store level. The company has been lobbying Ottawa for months and says it has received positive feedback from officials drafting online safety legislation.

To support its push, Meta paid Counsel Public Affairs to poll Canadians on what kinds of digital safety measures they want for teens. The poll found that 83 percent of parents favor requiring app stores to confirm users’ ages before app downloads. Meta highlighted those results, saying “the Counsel data clearly indicates that parents are seeking consistent, age-appropriate standards that better protect teens and support parents online. And the most effective way to understand this is by obtaining parental approval and verifying age on the app store.” Rachel Curran, Meta Canada’s director of public policy, described the idea as “by far the most effective, privacy-protective, efficient way to determine a user’s age.” That phrase may sound privacy-conscious, but in practice the plan would consolidate control over personal data inside a small circle of corporations such as Meta, Apple, and Google, while forcing users to identify themselves to access basic online services.

Google has criticized Meta’s proposal, calling it an attempt to avoid direct responsibility. “Time and time again, all over the world, you’ve seen them push forward proposals that would have app stores change their practices and do something new without any change by Meta,” said Kareem Ghanem, Google’s senior director of government affairs.

Behind these corporate disputes lies a much bigger question: should anyone be required to verify their identity in order to use the internet? Embedding age checks at the operating system or app store level might sound simple, but it comes with profound consequences. Once the ability to install or use software depends on a verified identity, anonymity and therefore freedom of expression start to disappear. Putting verification inside the operating system could slightly reduce redundant data collection, yet it also creates a powerful central switch that determines who can participate online. A system-level age flag becomes another tracking mechanism tied directly to a user’s device, one that companies can link to behavioral data already gathered from browsing, shopping, and messaging.

Open and independent technology would be most at risk. Community-driven projects like Linux distributions, open-source browsers, and privacy-respecting tools often avoid handling identity data precisely because it endangers users and creates liability. If age verification becomes embedded at the OS level, these developers could be pushed toward government-linked ID systems simply to stay compatible. The choice would be stark: integrate surveillance or disappear.
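To see why a system-level age flag acts as a "central switch," consider a minimal sketch of what such an API might look like. Every name here is invented for illustration; no platform currently exposes this exact interface.

```python
# Hypothetical sketch of an OS-level verified-age attribute. A single
# flag stored in the device's identity store is queryable by every
# installed app, which is precisely what makes it both a gatekeeper and
# a correlatable tracking signal. All names are illustrative.
from enum import Enum

class AgeBand(Enum):
    UNKNOWN = 0
    UNDER_13 = 1
    TEEN_13_15 = 2
    ADULT_16_PLUS = 3

class DeviceIdentity:
    """Stand-in for an OS keystore holding a verified age attribute."""
    def __init__(self, verified_band: AgeBand = AgeBand.UNKNOWN):
        self._band = verified_band

    def age_band(self) -> AgeBand:
        # Every app on the device reads the same verified value, which is
        # what lets it be linked to behavioral data those apps collect.
        return self._band

def can_install(app_min_band: AgeBand, device: DeviceIdentity) -> bool:
    band = device.age_band()
    if band is AgeBand.UNKNOWN:
        return False  # unverified devices are locked out entirely
    return band.value >= app_min_band.value

device = DeviceIdentity(AgeBand.TEEN_13_15)
print(can_install(AgeBand.TEEN_13_15, device))     # True
print(can_install(AgeBand.ADULT_16_PLUS, device))  # False
```

Note the structural consequence the article describes: the `UNKNOWN` branch means anyone who declines verification loses access altogether, and independent software that refuses to consult the flag simply stops being compatible.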

The Next Surveillance Boom Is Taking Flight

The FBI is going drone shopping again. But this time, they’re not just looking for something that can hover over a crime scene or follow a fleeing suspect. According to new federal procurement documents, the Bureau wants to bolt artificial intelligence onto its unmanned aerial systems, an innovation that sounds less like law enforcement and more like a Silicon Valley beta test for dystopia.

Become a Member and Keep Reading…

Missouri Locks the Web Behind a “Harmful” Content ID Check

Starting November 30, 2025, people in Missouri will find the digital world reshaped: anyone wishing to visit websites containing “harmful” adult material will need to prove they are at least 18 years old by showing ID. This new requirement marks Missouri’s entry into the growing group of US states adopting age verification laws for online content. Yet the move does more than restrict access; it raises serious questions about how much personal data people must surrender just to browse freely. For many, that tradeoff is likely to make privacy tools like VPNs a near necessity rather than a choice.

The law defines its targets broadly. Any site or app where over one-third of the material is classified as “harmful to minors” must block entry until users confirm their age. Those who do not comply risk penalties that can reach $10,000 a day, with violations categorized as “unfair, deceptive, fraudulent, or otherwise unlawful practices.” To meet these standards, companies are permitted to check age through digital ID systems, government-issued documents such as driver’s licenses or passports, or existing transactional data that proves a person’s age. Adding another layer of complexity, mobile operating systems with at least ten million US users must provide a built-in verification mechanism that external sites can rely on.

Missouri’s statute does attempt to address privacy directly, stating that companies must “use all reasonable methods” to safeguard personal data and avoid storing identifying information unless required by law enforcement. But few observers are convinced that this language ensures real protection against misuse or breaches. That concern is not theoretical: when a verification service used by Discord was breached, over 70,000 government ID photos were leaked.
It became an alarming reminder of how fragile “secure” verification systems can be once private data is demanded for frivolous reasons.

What sets Missouri’s version apart is its expectation that tech companies like Apple and Google will now play an active role. These firms are required to make available a digital ID tool that external websites can use to confirm a user’s age. The complication, however, is that such technology remains in its infancy, currently used mostly for digital driver’s licenses and airport identity verification.

Missouri’s age check rule reflects a broader national pattern: an increasing willingness by lawmakers to tie online access to personal identification. Each time a state builds such a system, it moves the country closer to a digital environment where proof of identity becomes a condition for participation online, a direction that prioritizes control over autonomy and leaves open vast potential for misuse.
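The statute's coverage test and penalty scale described above reduce to simple arithmetic, sketched here for illustration. Whether any given item counts as "harmful to minors" is a legal judgment the code cannot make; only the thresholds named in the article are modeled.

```python
# Illustrative arithmetic for the thresholds reported above: a site is
# covered if over one-third of its material is classified "harmful to
# minors", and noncompliance penalties can reach $10,000 per day.
# The classification itself is a legal question, not modeled here.

def is_covered(harmful_items: int, total_items: int) -> bool:
    """True if strictly more than one-third of the material is 'harmful'."""
    if total_items == 0:
        return False
    return harmful_items * 3 > total_items  # avoids float comparison

def max_penalty(days_noncompliant: int, per_day: int = 10_000) -> int:
    """Upper bound on accumulated penalties at $10,000 a day."""
    return days_noncompliant * per_day

print(is_covered(34, 100))  # True: 34% is over one-third
print(is_covered(33, 100))  # False: 33% is under one-third
print(max_penalty(30))      # 300000: up to $300,000 in a month
```

The integer comparison `harmful_items * 3 > total_items` is a deliberate choice over `harmful_items / total_items > 1/3`, which would be vulnerable to floating-point rounding right at the boundary the law cares about.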