Reclaim The Net Feed

@reclaimthenetfeed

Britain’s AI Policing Plan Turns Toward Predictive Surveillance and a Pre-Crime Future
reclaimthenet.org


If you're tired of censorship and dystopian threats against civil liberties, subscribe to Reclaim The Net.

Let me take you on a tour of Britain’s future. It’s 2030, there are more surveillance cameras than people, your toaster is reporting your breakfast habits to the Home Office, and police officers are no longer investigating crimes so much as predicting them. This is Pre-Crime UK, where the weight of the law is used against innocent people whom an algorithm suspects may be about to commit a crime.

With a proposal that would make Orwell blush, the British police are testing a hundred new AI systems to figure out which ones can best guess who’s going to commit a crime. That’s right: guess. Not catch, not prove. Guess. Based on data, assumptions, and probably your internet search history from 2011.

Behind this algorithmic escapade is Home Secretary Shabana Mahmood, who has apparently spent the last few years reading prison blueprints and dystopian fiction, not as a warning about authoritarian surveillance, but as aspiration. In a jaw-dropping interview with former Prime Minister and Digital ID peddler Tony Blair, she said, with her whole chest: “When I was in justice, my ultimate vision for that part of the criminal justice system was to achieve, by means of AI and technology, what Jeremy Bentham tried to do with his Panopticon. That is that the eyes of the state can be on you at all times.”

Now, for those not fluent in 18th-century authoritarian architecture, the Panopticon is a prison design where a single guard can watch every inmate, but the inmates never know when they’re being watched. It’s not so much “law and order” as it is “paranoia with plumbing.”

Enter Andy Marsh, the head of the College of Policing and the man now pitching Britain’s very own Minority Report. According to the Telegraph, he’s proposing a new system that uses predictive analytics to identify and target the top 1,000 most dangerous men in the country.
They’re calling it the “V1000 Plan,” which sounds less like a policing strategy and more like a discontinued vacuum cleaner. “We know the data and case histories tell us that, unfortunately, it’s far from uncommon for these individuals to move from one female victim to another,” said Sir Andy, with the tone of a man about to launch an app. “So what we want to do is use these predictive tools to take the battle to those individuals…the police are coming after them, and we’re going to lock them up.”

I mean, sure, great headline. Go after predators. But once you start using data models to tell you who might commit a crime, you’re not fighting criminals anymore. You’re fighting probability.

The government, always eager to blow millions on a glorified spreadsheet, is chucking £4 million ($5.39M) at a project to build an “interactive AI-driven map” that will pinpoint where crime might happen. Not where it has happened. Where it might. It will reportedly predict knife crimes and spot antisocial behavior before it kicks off.

But don’t worry, says the government. This isn’t about watching everyone. A “source” clarified: “This doesn’t mean watching people who are non-criminals—but she [Mahmood] feels like, if you commit a crime, you sacrifice the right to the kind of liberty the rest of us enjoy.” That’s not very comforting coming from a government that locks people up over tweets.

Meanwhile, over in Manchester, they’re trying out “AI assistants” for officers dealing with domestic violence. These robo-cop co-pilots can tell officers what to say, how to file reports, and whether or not to pursue an order. It’s less “serve and protect” and more “Ask Jeeves.”

“If you were to spend 24 hours on the shoulder of a sergeant currently, you would be disappointed at the amount of time that the sergeant spends checking and not patrolling, leading and protecting.” That’s probably true. But is the solution really to strap Siri to their epaulettes and hope for the best?
Still, Mahmood remains upbeat: “AI is an incredibly powerful tool that can and should be used by our police forces,” she told MPs, before adding that it needs to be accurate.

Tell that to Shaun Thompson, not a criminal but an anti-knife crime campaigner, who found himself on the receiving end of the Metropolitan Police’s all-seeing robo-eye. One minute, he’s walking near London Bridge, probably thinking about lunch or how to fix society, and the next minute he’s being yanked aside because the police’s shiny new facial recognition system decided he looked like a wanted man. He wasn’t. He had done nothing wrong. But the system said otherwise, so naturally, the officers followed orders from their algorithm overlord and detained him. Thompson was only released after proving who he was, presumably with some documents and a great deal of disbelief. Later, he summed it up perfectly: he was treated as “guilty until proven innocent.”

Mahmood’s upcoming white paper will apparently include guidelines for AI usage. I’m sure all those future wrongful arrests will be much more palatable when they come with a printed PDF.

Here’s the actual problem. Once you normalize the idea that police can monitor everyone, predict crimes, and act preemptively, there’s no clean way back. You’ve turned suspicion into policy. You’ve built a justice system on guesswork. And no amount of shiny dashboards or facial recognition cameras is going to fix the rot at the core.

This isn’t about catching criminals. It’s about control. About making everyone feel watched. That was the true intention of the Panopticon. And that isn’t safety; it’s turning the country into one big prison.

Congress Revives Kids Off Social Media Act, a “Child Safety” Bill Poised to Expand Online Digital ID Checks
reclaimthenet.org


Congress is once again positioning itself as the protector of children online, reviving the Kids Off Social Media Act (KOSMA) in a new round of hearings on technology and youth. We obtained a copy of the bill for you here.

Introduced by Senators Ted Cruz and Brian Schatz, the bill surfaced again during a Senate Commerce Committee session examining the effects of screen time and social media on mental health. Cruz warned that a “phone-based childhood” has left many kids “lost in the virtual world,” pointing to studies linking heavy screen use to anxiety, depression, and social isolation.

KOSMA’s key provisions would ban social media accounts for anyone under 13 and restrict recommendation algorithms for teens aged 13 to 17. Pushers of the plan say it would “empower parents” and “hold Big Tech accountable,” but in reality, it shifts control away from families and toward corporate compliance systems. The bill’s structure leaves companies legally responsible for determining users’ ages, even though it does not directly require age verification.

The legal wording is crucial. KOSMA compels platforms to delete accounts if they have “actual knowledge,” or what can be “fairly implied” as knowledge, that a user is under 13. That open-ended standard puts enormous pressure on companies to avoid errors. The most predictable outcome is a move toward mandatory age verification systems, where users must confirm their age or identity to access social platforms.

In effect, KOSMA would link access to everyday online life to a form of digital ID. That system would not only affect children. It would reach everyone. To prove compliance, companies could require users to submit documents such as driver’s licenses, facial scans, or other biometric data.
The infrastructure needed to verify ages at scale looks almost identical to the infrastructure needed for national digital identity systems. Once built, those systems rarely stay limited to a single use. A measure framed as protecting kids could easily become the foundation for a broader identity-based internet.

Cruz has said, “KOSMA meets parents where they’re at” and “holds Big Tech accountable to their terms of service.” Yet under this approach, parents would have less say over how their children use technology. A child sharing a parent’s YouTube account for educational videos could trigger account suspension if an algorithm infers the child’s age from a comment or viewing pattern. Instead of supporting family oversight, companies would be legally obligated to override it.

The bill also connects to classroom policy. It would tie federal funding to removing phones and social media access in schools. Cruz argued that distributing tablets and laptops to students has made supervision harder and increased screen dependence. But tying device rules to federal funding could expand digital monitoring in education, where children’s data is already collected at unprecedented levels.

This debate is less about children sneaking onto apps and more about how far the government should go in reshaping digital identity. The logic behind KOSMA leads directly to a verified, traceable internet where participation depends on proving who you are.

KOSMA’s intentions may be framed as safety, but its mechanics point toward surveillance. Once identity checks become a prerequisite for online access, privacy becomes the exception instead of the norm. A society that links childhood safety to digital ID risks erasing the right to anonymity for everyone.

Funding Freedom: One Censorship Blacklist at a Time
reclaimthenet.org


Become a Member and Keep Reading… Reclaim your digital freedom. Get the latest on censorship, cancel culture, and surveillance, and learn how to fight back.

Miami Beach Resident Questioned by Police After Facebook Post Criticizing Mayor Steven Meiner
reclaimthenet.org


A confrontation over a Facebook comment has drawn attention after two Miami Beach police detectives appeared at a resident’s home to question her about remarks critical of Mayor Steven Meiner.

Raquel Pacheco, who once ran for the Florida Senate as a Democrat and has been openly critical of Meiner, posted a comment on one of his social media updates alleging that the mayor “consistently calls for the death of all Palestinians, tried to shut down a theater for showing a movie that hurt his feelings, and REFUSES to stand up for the LGBTQ community in any way…” Shortly afterward, officers arrived at her residence. In a video she recorded, one detective cautioned her that such a statement “could potentially incite somebody to do something radical.” Police later clarified that the exchange was not tied to any criminal probe, but the encounter has raised concerns about policing free expression.

In a letter addressed to Police Chief Wayne Jones, the Foundation for Individual Rights and Expression (FIRE) described the officers’ actions as “an egregious abuse of power” that “chills the exercise of First Amendment rights and undermines public confidence in the department’s commitment to respecting civil liberties and the United States Constitution.” Aaron Terr, FIRE’s director of public advocacy, accused the department of using its authority to discourage lawful speech. “The purpose of their visit was not to investigate a crime. It had no purpose other than to pressure Pacheco to cease engaging in protected political expression over concern about how others might react to it,” Terr wrote.
“This blatant overreach is offensive to the First Amendment.”

FIRE’s letter urged the department to acknowledge publicly that Pacheco’s post is constitutionally protected and to ensure that “officers will never initiate contact with individuals for the purpose of discouraging lawful expression.” The organization also asked for copies of departmental rules and training materials dealing with police responses to protected expression, adding that the resident’s statement does not fit the legal definition of a “true threat.”

Chief Jones, in a written response, maintained that the detectives acted appropriately and on his directive alone. “At no time did the Mayor or any other official direct me to take action,” he said, adding that his department “is committed to safeguarding residents and visitors while also respecting constitutional rights.”

A police spokesperson confirmed that Meiner’s office had flagged the Facebook comment for review but declined to provide further details. Requests for additional records, including internal communications between the mayor’s office and the police, remain pending.

Discord Expands Age Verification ID System to More Regions
reclaimthenet.org


Discord is pressing forward with government ID checks for users in new regions, even after a major customer-support breach in October 2025 exposed sensitive identity documents belonging to tens of thousands of people. The expansion of its age-verification system reflects growing pressure under the United Kingdom’s Online Safety Act, a law that effectively compels platforms to collect and process personal identification data in order to comply with its censorship and content-control mandates.

The October 2025 incident highlighted exactly why such measures alarm privacy advocates. Around 70,000 Discord users had images of government-issued IDs leaked after attackers gained access to a third-party customer service system tied to the company. The hackers claim to have extracted as much as 1.6 terabytes of information, including 8.4 million support tickets and over 100 gigabytes of transcripts. Discord disputed the scale but admits the breach stemmed from a compromised contractor account within its outsourced Zendesk environment, not its own internal systems.

Despite the exposure, Discord continues to expand mandatory age verification. The company’s new “privacy-forward age assurance” program is now required for all UK and Australian users beginning December 9, 2025. Users must verify that they are over 18 to unblur “sensitive content,” disable message-request filters, or enter age-restricted channels. Verification occurs through the third-party vendors k-ID and, in some UK cases, Persona, which process either a government ID scan or a facial-analysis selfie to confirm age.

More: Tea App Leak Shows Why UK’s Digital ID Age Verification Laws are Dangerous

Discord says that data is deleted once the age group is confirmed and that selfies used for facial estimation never leave the device.
The company insists this complies with new national laws such as the UK’s Online Safety Act and Australia’s Social Media Minimum Age Act, both of which impose legal obligations on platforms to block access to material deemed unsuitable for minors. Yet the system effectively normalizes document-based surveillance of everyday users, often without their direct consent to vendor storage. Persona, one of Discord’s verification partners, retains submitted data for up to seven days before deletion.

The 2025 breach makes these government requirements look especially reckless. It demonstrated how fragile supposedly “privacy-protective” verification chains can be once multiple third-party vendors hold fragments of ID records. Government pressure to enforce identity verification has forced platforms like Discord to collect data that, once compromised, cannot be retrieved or anonymized.