Reclaim The Net Feed

@reclaimthenetfeed

The UK’s Plan to Put an Age Verification Chaperone in Every Pocket
reclaimthenet.org


UK officials are preparing to urge Apple and Google to redesign their operating systems so that every phone and computer sold in the country can automatically block nude imagery unless the user has proved they are an adult. The proposal, part of the Home Office’s upcoming plan framed as combating violence against women and girls, would rely on technology built directly into devices, with software capable of scanning images locally to detect explicit material.

Under the plan, as reported by the FT, such scanning would be turned on by default. Anyone wanting to take, send, or open an explicit photo would first have to verify their age using a government-issued ID or a biometric check. The goal, officials say, is to prevent children from being exposed to sexual material or drawn into exploitative exchanges online.

People briefed on the discussions said the Home Office had explored making these tools a legal requirement but decided, for now, to rely on encouragement rather than legislation. Even so, the expectation is that large manufacturers will come under intense pressure to comply.

The government’s approach reflects growing anxiety about how easily minors can access sexual content and how grooming can occur through everyday apps. Instead of copying Australia’s decision to ban social media use for under-16s, British ministers have chosen to focus on controlling imagery itself. Safeguarding minister Jess Phillips has praised technology firms that already filter content at the device level. She cited HMD Global, maker of Nokia phones, for embedding child-protection software called HarmBlock, created by UK-based SafeToNet, which automatically blocks explicit images from being viewed or shared. Apple and Google have built smaller-scale systems of their own.
Apple’s “Communication Safety” function scans photos in apps like Messages, AirDrop, and FaceTime and warns children when nudity is detected, but teens can ignore the alert. Google’s Family Link and “sensitive content warnings” work similarly on Android, though they stop short of scanning across all apps. Both companies allow parents to apply restrictions, but neither has a universal filter that covers the entire operating system. The Home Office wants to go further, calling for a system that would block any nude image unless an adult identity check has been passed.

More: UK Lawmakers Propose Mandatory On-Device Surveillance and VPN Age Verification

Officials have also indicated that desktop computers could eventually be included, noting that products such as Microsoft Teams already screen content. Privacy and data rights advocates have raised alarms about the implications of linking biometric verification to content scanning. Although the analysis could happen locally on the device, it would still involve the system continuously examining personal photos and videos. Such a setup could move ordinary private devices toward a model of constant surveillance, with algorithms monitoring every image a person creates. There are also questions about enforcement and reliability.

UK Parliament Rejects Petition to Repeal Online Censorship Law, Calls for Expanded Censorship


This week in the UK, Parliament held a debate in response to a public petition that gathered hundreds of thousands of signatures calling for the repeal of the Online Safety Act (OSA). It was a rare opportunity for elected officials to prove they still listen to their constituents. Instead, the overwhelming message from MPs was clear: thanks for your concern, but we’d actually like even more control over what you can do online. One by one, MPs stood up not to defend free expression, or to question whether one of the most radical internet control laws in modern British history might have gone too far, but to argue that it hadn’t gone far enough.

“It’s Not Censorship, It’s Responsibility” (Apparently)

Lizzi Collinge, Labour MP for Morecambe and Lunesdale, insisted the OSA “is not about controlling speech.” She claimed it was about giving the online world the same “safety features” as the offline one. This was a recurring theme throughout the debate: reassure the public that speech isn’t being restricted while calling for more mechanisms to restrict it. Ian Murray, Minister for Digital Government and Data, also insisted the OSA protects freedom of expression. According to him, there’s no contradiction in saying people can speak freely, as long as they’re age-verified, avoid VPNs, and don’t say anything that might be flagged by a government regulator. It’s a neat trick: say you support free speech, then build an entire law designed to monitor, filter, and police it.

VPNs in the Firing Line

There is a growing fixation inside government with VPNs. These are basic privacy tools used by millions of people every day, often to protect their data. But several MPs, including Jim McMahon, Julia Lopez, and Ian Murray, suggested VPNs should be subject to age verification or regulatory restrictions.
It’s unclear whether these MPs understand how VPNs work or whether they simply dislike the idea of anyone browsing the internet without supervision. Either way, the intent is clear: the government wants fewer ways for people to browse anonymously.

More: From Madison to Moscow: How VPNs Work and Why Governments (Despite Trying) Can’t Stop Them

The AI Panic Button

Several MPs were clearly rattled by the existence of AI chatbots and called for new censorship powers to rein them in. Manuela Perteghella warned that the OSA “leaves a significant gap” around generative AI, claiming children are at risk from private conversations with bots. Ann Davies said the government wasn’t moving quickly enough to regulate this emerging technology. Lola McEvoy, meanwhile, called for bots to be labelled clearly so users would know when they’re talking to a machine. She also demanded stronger age verification. The idea that every website should identify bots as if they’re wearing a hi-vis jacket is a perfect metaphor for how Parliament thinks the internet should work.

Censorship as Cure-All

Jim McMahon gave the clearest call for wider censorship. He argued the current OSA doesn’t do enough to tackle foreign influence, misinformation, racism, misogyny, and hate. He also claimed major platforms are suppressing “mainstream opinion” in favor of falsehoods. Emily Darlington, MP for Milton Keynes Central, joined the call for greater platform moderation. She said platforms should be able to remove false claims, even offering a bizarre example of someone saying she has pink eyes. “Somebody could post that I am actually purple and have pink eyes,” she said. “I would say, ‘I don’t want you to say that,’ and the platform would say, ‘But there’s nothing offensive about it.’ I would say, ‘But it’s not me.’ The thing is that this is happening in much more offensive ways.” Her larger point was that online slander should be taken down, by force if necessary.
She also supported end-to-end encryption backdoors, which would allow private messages to be scanned before being sent.

A Lone Voice in the Wilderness

Lewis Atkinson, Labour MP for Sunderland Central, did raise some concerns. He said he spoke with petition creator Alex Baynham and acknowledged the chilling effect of the OSA on small forums and community websites. He noted that 300 forums had already shut down or migrated to larger platforms like Facebook because of legal risk. He mentioned a Sunderland AFC message board admin who almost closed the site due to the overwhelming volume of guidance from Ofcom. But even with all this evidence in front of him, Atkinson couldn’t bring himself to support repeal. Instead, he hedged, suggesting reform would be more realistic than repeal. He backed several of the OSA’s key features, including stronger age verification.

What This Debate Actually Revealed

This was not a debate in any meaningful sense. It was a full-throated defense of a law that is already doing damage to online spaces, and a promise that more is coming. MPs didn’t engage seriously with the petition. They didn’t question whether the OSA is overreaching or whether it might be driving smaller forums offline. They mostly ignored the fact that the law makes the internet harder to navigate unless you’re a large company with a legal department. Instead, they said the OSA is working well and that it needs more teeth. They want to control AI, crack down on VPNs, regulate encryption, and force companies to implement more age verification. The public might be concerned about censorship, surveillance, and freedom of expression. But the mood in Westminster is very different. They don’t want to repeal the OSA. In fact, their attack on civil liberties is only just getting started.

Faith on Trial in Canada as Parliament Moves to Rewrite the Rules of Speech


A Canadian parliamentary committee has set in motion a change that could recast the balance between expression and state control over “hate speech.” Members of the House of Commons Justice and Human Rights Committee voted on December 9 to delete a longstanding clause in the Criminal Code that shields religious discussion made in good faith from prosecution. The decision forms part of the government’s Combating Hate Act (Bill C-9), legislation that introduces new offences tied to “hate” and the public display of certain symbols.

The focus is on Section 319(3)(b), which currently ensures that “no person shall be convicted of an offence under subsection (2)…if, in good faith, the person expressed or attempted to establish by an argument an opinion on a religious subject or an opinion based on a belief in a religious text.” That safeguard would vanish if the Bloc Québécois amendment approved this month survives the remaining stages of debate.

Liberal MPs backed the Bloc’s proposal, which Bloc MP Rhéal Éloi Fortin introduced after his party leader, Yves-François Blanchet, made its passage a precondition for Bloc support of the bill. Fortin argued that under the religious exemption, “someone could commit actions or say things that would otherwise be forbidden under the Criminal Code.”

The amendment was adopted during a marathon session that came only after the committee chair, Liberal MP James Maloney, abruptly ended an earlier meeting and canceled the next one to allow MPs time to “regroup.” On December 9, the committee returned for an eight-hour clause-by-clause review, with government members determined to complete key sections of the bill before the winter recess. The broader legislation targets intimidation around religious institutions and bans the display of defined “hate” and “terrorism” symbols.
Yet most debate now centers on whether the change to Section 319(3)(b) opens the door to criminal proceedings against clergy or believers discussing moral or scriptural teachings. As reported by The Catholic Register, Justice Minister and Attorney General Sean Fraser insisted that the measure poses no threat to religious freedom. “The amendment that the Bloc is proposing will … in no way, shape or form prevents a religious leader from reading their religious texts,” Fraser said. “It will not criminalize faith.”

That assurance was immediately challenged. Conservative MP Garnett Genuis warned that relying on constitutional guarantees alone offers no protection from poorly written laws. “(It’s) as if the existence of the Charter establishes some law of physics, which prevents legislation from passing that violates it,” said Genuis. “That’s not how the Charter works. The way the Charter and constitutional protections on religious freedom and other rights work is that laws can still be passed to violate those rights, and those laws are enforced until they’re struck down, until a judicial process intervenes.” Fraser later posted on X that he would meet with community groups to emphasize that “good-faith religious expression will remain fully protected.”

Religious and civil rights organizations say the removal of Section 319(3)(b) would leave clergy and lay believers vulnerable to politically motivated complaints. The Canadian Conference of Catholic Bishops and Toronto’s Cardinal Frank Leo both submitted letters to Parliament acknowledging their rejection of hatred and prejudice but warning that removing the defense introduces “uncertainty for clergy, educators and all people of faith who seek to pass on the teachings of the Church with charity and integrity.” The Catholic Civil Rights League (CCRL) also expressed unease. The League noted that Bill C-9 also abolishes a separate requirement for the Attorney General’s consent before any hate-propaganda prosecution begins.
Without that filter, it warned, the courts could become an instrument for harassment; removing the safeguard “will likely result in spurious or targeted attacks on individuals expressing Christian moral teachings.” Together with the bishops, the CCRL said these “developments may create a climate of fear for good faith expressions of religious belief and expose Church and faith leaders to criminal charges by anyone seeking to pursue a charge to advance a non-religious viewpoint.”

Australia Expands Online Censorship and Antisemitism Controls After Bondi Beach Terror Attack


Government officials in Australia have moved to tighten their grip on online discussion following the Bondi Beach terror attack, urging citizens and tech platforms to suppress footage and commentary deemed distressing. Communications Minister Anika Wells and eSafety Commissioner Julie Inman Grant both directed attention toward “violent, harmful or distressing” posts, calling for social media users to report such content and for companies to act swiftly in taking it down.

In a Facebook post, Wells announced that eSafety had “activated its on-call team to monitor what’s being shared online.” She appealed to users to “report graphic material to the platform to help get it removed quickly” and to alert eSafety if “it is not removed, or if it’s seriously harmful.” Shortly after, eSafety echoed the same message through its own post, repeating the instruction to report content to platforms and to the regulator itself.

A separate statement, shared by journalist Cameron Wilson, confirmed that eSafety had “received multiple complaints about online material showing footage of the mass shooting at Bondi” but that this content “has not met the threshold for Class 1 material under the Online Safety Act.” Under the 2021 Online Safety Act, a sweeping censorship law granting the commissioner broad takedown powers, “Class 1” status determines whether eSafety can compel platforms to remove material entirely. The regulator added that it would “continue to work with platforms and services to ensure they meet their obligations under Australian law,” leaving open the option that “further actions may be considered.”

Those powers, however, are under increasing legal pressure.
The Free Speech Union (FSU) of Australia has formally requested that eSafety provide copies of any Section 109 notices it has issued or plans to issue, documents that authorize the removal of “Class 1” material. The FSU has warned the regulator that “it can expect it to be challenged” if such orders are made.

eSafety’s authority has already been tested in the courts. In 2024, a federal judge rejected its effort to maintain a global blocking order on graphic footage from a Sydney church stabbing, ruling instead that X’s choice to limit access inside Australia through geo-blocking was a reasonable approach. Another dispute is still before the courts. In October 2025, the FSU of Australia filed a case questioning the legality of eSafety’s directives to remove or geo-block footage of the killing of Iryna Zarutska. The group argues that these orders cut Australians off from viewing material of genuine news value.

The FSU of Australia has now formally written to the eSafety Commissioner’s Office demanding transparency over past and future censorship directives issued under Section 109 of the Online Safety Act 2021. In the letter dated 15 December 2025, addressed through the Australian Government Solicitor, the FSU stated that it was acting “to ensure that the horrors of yesterday’s terrorist attack in Bondi Beach against the Jewish community are not hidden or censored.” We obtained a copy of the letter for you here.
The group referred to eSafety’s recent acknowledgment of an earlier wrongful censorship order, noting that “on Friday 12 December 2025, your client indicated they accepted the recent decision of the Classification Review Board in eSafety INV-202505242 concerning the murder of Iryna Zartuska which your client now accepts that they wrongly issued a Section 109 notice in respect of.” That decision, according to the FSU’s letter, included the Board’s finding that “the film is a factual record of a significant event that is not presented in a gratuitous, exploitative or offensive manner to the extent that it should be classified RC.” The FSU also accused eSafety of procedural misconduct, writing, “We note that your client misled ourselves and the Tribunal about the existence of those parallel proceedings.”

The Union is now seeking a formal undertaking that eSafety will “promptly provide to the Free Speech Union of Australia copies of any further Section 109 (Online Safety Act 2021(Cth)) notices she issues in the future, including a copy of the censored content.” It further insisted that “these notices should identify the decision maker in question” and that “Ms Inman Grant will take personal responsibility for any such notices issued, rather than using any delegate.”

Following the Bondi Beach terror attack, the Australian government has turned sharply toward expanding digital surveillance and online content control under the banner of combating antisemitism. Jillian Segal, Australia’s Special Envoy to Combat Antisemitism, has urged an immediate acceleration of her July recommendations to the government, which include extensive measures targeting speech, algorithms, and online anonymity.
She told Guardian Australia that “calling it out is not enough” and insisted that “we need a whole series of actions that involve the public sector and government ministers, in education in schools, universities, on social media and among community leaders… It has got to be a whole society approach.” Prime Minister Anthony Albanese publicly embraced Segal’s appeal, promising to dedicate “every single resource required” to eliminate antisemitism. His office confirmed that several of Segal’s proposals are already under implementation or review, including those directed at digital communication, social media platforms, and emerging technologies such as AI.

The proposals, set out in Segal’s Plan to Combat Antisemitism, seek to harden legal and technical mechanisms to restrict online speech. Among them are recommendations to strengthen hate crime legislation to address “antisemitic and other hateful or intimidating conduct, including with respect to serious vilification offences and the public promotion of hatred and antisemitic sentiment,” and to establish a national database of “antisemitic hate crimes and incidents.” The plan also calls for the “broad adoption of the International Holocaust Remembrance Alliance’s working definition of antisemitism… across all levels of government and public institutions.” That definition has drawn criticism internationally for its potential use in classifying political commentary about Israel as antisemitic.

A large portion of Segal’s framework deals specifically with digital regulation. It proposes “regulatory parameters for algorithms” to “prevent the amplification of online hate,” mandates that social media platforms “reduce the reach of those who peddle hate behind a veil of anonymity,” and calls for “considering new online censorship and age verification laws” modeled on the UK’s Online Safety Act and the EU’s Digital Services Act.
Segal’s plan further insists on “ensuring that AI does not amplify antisemitic content,” effectively linking artificial intelligence moderation systems to government oversight of acceptable expression online. Another element, described as a national security measure, seeks to “screen visa applicants for antisemitic views or affiliations” and deny entry to individuals for “antisemitic conduct and rhetoric.”

These measures amount to a major proposal for central oversight of both digital platforms and individual speech. While the government frames the package as a response to violent extremism, it would substantially expand censorship authority across online environments, media, and AI systems.

700Credit Data Breach Exposes 5.6 Million Americans, Showing Risks of Centralized Digital ID Systems


The breach at 700Credit has once again shown how fragile centralized identity systems can be, and why the growing push for digital ID systems is so reckless. The company, which provides credit and identity verification tools to auto dealerships across the United States, confirmed that personal data belonging to at least 5.6 million people was stolen in an October cyberattack.

More: Discord Support Data Breach Exposes User IDs, Personal Data

In a notice on its website, 700Credit described the attacker as an “unidentified bad actor.” The stolen data, gathered from dealerships between May and October 2025, included names, addresses, dates of birth, and Social Security numbers. These are the same details that financial institutions across the country rely on to verify identity.

More: The Digital ID and Online Age Verification Agenda

Michigan Attorney General Dana Nessel urged residents to take the company’s outreach seriously. “If you get a letter from 700Credit, don’t ignore it,” she said. “It is important that anyone affected by this data breach takes steps as soon as possible to protect their information. A credit freeze or monitoring services can go a long way in preventing fraud, and I encourage Michiganders to use the tools available to keep their identity safe.”

700Credit stated that it is sending written notices to affected individuals and offering credit monitoring services. But the larger issue runs deeper. Companies that handle identity verification store enormous collections of highly sensitive data, leaving individuals exposed when that information is compromised. Once stolen, identifiers like Social Security numbers cannot be replaced, and their misuse can continue for years. This breach points to a structural weakness in digital identity systems.
Centralized databases may simplify verification for businesses, yet they also create a single point of failure for millions of people.