Reclaim The Net Feed

@reclaimthenetfeed

The Retreat of the Open Internet

This post is for paid supporters of Reclaim The Net. The post The Retreat of the Open Internet appeared first on Reclaim The Net.

DOJ Blocks France’s X Probe, Citing First Amendment

If you're tired of censorship and dystopian threats against civil liberties, subscribe to Reclaim The Net. The US Justice Department has refused to help French prosecutors investigating X, sending Paris a two-page letter that amounts to a direct shot across the bow at European speech regulation. American authorities will not serve summonses, will not facilitate interviews, and will not lend their cooperation to what they describe as a foreign effort to prosecute a US company for editorial decisions protected at home. The letter, dated Friday and reviewed by the Wall Street Journal, came from the Justice Department’s Office of International Affairs. It rejected three separate French requests this year, and its language was unusually blunt. “This investigation seeks to use the criminal legal system in France to regulate a public square for the free expression of ideas and opinions in a manner contrary to the First Amendment of the United States Constitution,” the letter said. It went on to call the French requests “an effort to entangle the United States in a politically charged criminal proceeding aimed at wrongfully regulating through prosecution the business activities of a social media platform.” That is the Justice Department telling a European ally its prosecution is a speech case dressed up as a criminal case, and that the United States will not help build it. The Justice Department and French authorities did not respond to requests for comment. The French investigation began in January 2025, after a lawmaker and another official filed complaints arguing that X’s content-selection algorithm tilted toward Elon Musk’s views, and that the tilt amounted to foreign interference in France. The theory converts an editorial choice, which is what an algorithm is, into a potential crime. By July, prosecutors wanted access to the algorithm itself to examine it for bias. 
In November, the scope widened after reports of allegedly antisemitic posts, including Holocaust denial, which is illegal in France. In January of this year, prosecutors added the creation and distribution of child sexual abuse material and nonconsensual deepfakes to the list of potential charges, which was misleading at best. Investigators raided X’s Paris office in February. X called the search “an abusive act of law enforcement theater.” The platform is part of Musk’s artificial-intelligence firm xAI, which has now been purchased by his rocket company SpaceX. French officials then summoned Musk, former X chief executive Linda Yaccarino, and other employees for what they described as voluntary interviews. Musk’s summons was set for Monday. French prosecutors can issue arrest warrants for suspects who skip interviews, which makes the word “voluntary” do less work than it appears to. An xAI official welcomed the Justice Department’s intervention. “We are grateful to the Justice Department for rejecting this effort by a prosecutor in Paris to compel our CEO and several employees to sit for interviews,” the official said. “We hope the Parisian authorities will now come to their senses, recognize that there is no wrongdoing here, and terminate their baseless investigation.” The American pushback is important because the investigation is the sharp edge of a larger European project. Regulators across the continent are rolling out content-moderation rules with real teeth, and the Trump administration and other US officials have accused Europeans of trying to silence dissent not only on their continent but globally. Vice President JD Vance spent much of the year criticizing European speech restrictions in public speeches. Secretary of State Marco Rubio has flagged foreign prosecutions of Americans for online speech as a diplomatic concern. The letter from the Office of International Affairs turns that rhetoric into policy. 
What makes the French case useful to American officials is how exposed the speech-policing logic is. The investigation started with a politician unhappy about algorithmic favoritism. The serious charges, child sexual abuse material and deepfakes, were added later, on top of the original complaint. Tacking grave offenses onto a case that began as a complaint about algorithmic politics gives the prosecution cover and makes it harder to say out loud what the inquiry is actually about. The Justice Department said it anyway. The refusal also draws a line for other European governments watching. A prosecutor who wants to inspect a US platform’s recommendation algorithm for political bias, with criminal penalties attached, now knows the American government will not help deliver the paperwork. Every platform makes choices about what to amplify and what to bury. Those choices are speech. A legal theory that criminalizes the wrong choices turns algorithmic design into something prosecutors can punish after the fact, and the United States has just declined to assist in that project. The chilling effect is the reason any of this matters beyond X. A social media company that knows its algorithm can be subpoenaed, its executives summoned, and its Paris office raided will make different decisions about what to recommend and what to permit. The threat is enough. Actual convictions are not required for the behavior to change, which is why American authorities appear to have decided that refusing cooperation, publicly and in writing, is worth the diplomatic friction. The Justice Department’s refusal is narrow in a technical sense. It does not stop the French investigation. It does not prevent an arrest warrant if Musk declines to appear on Monday. What it does is put the United States on record as treating the prosecution as a speech case, refusing to let American mutual-assistance treaties be used to deliver Europeans the tools to punish American editorial decisions. 
For the transatlantic fight over who gets to set the rules of online speech, that is new territory.

Canada’s Carney Revives Online Censorship Bill

Canada’s Liberal government is preparing to revive legislation that would hand the state new powers over what Canadians can say online, with Prime Minister Mark Carney’s team signaling that a rebooted “online harms” law is coming. A report submitted to the Senate social affairs committee confirms the direction. The Department of Industry told senators that Ottawa is working toward a “future online safety regime” aimed at reducing online “harms,” a category the government itself gets to define. To shape the proposal, officials have brought back the Expert Advisory Group on Online Safety, the same body that helped design the previous censorship attempt. “To advise on this proposal, the government has recently reconvened the Expert Advisory Group on Online Safety, whose members previously contributed to the development of online harms legislation, to engage on new and emerging issues related to online harms,” the department said. “Any future legislative proposal would be subject to parliamentary scrutiny, and details will be made public at the appropriate time.” One of the members back at the table is Bernie Farber of the Canadian Anti-Hate Network. The advisory group helps shape what the government will treat as hateful, harmful, or dangerous. That definition, once written into law, determines which posts get deleted, which accounts get silenced, and which Canadians face fines or house arrest for saying the wrong thing online. Canadian Culture Minister Marc Miller telegraphed the timing this week, suggesting a new law targeting “online harms” is needed and likely coming soon. With the Liberals now holding a majority after three byelection wins and the defection of five MPs from the Conservatives and NDP, the procedural obstacles that killed previous attempts have largely disappeared. A social media ban for children is also on the table.
The last attempt, Bill C-63, known as the Online Harms Act, was introduced under the familiar justification of protecting children from online exploitation. The bill died when former Prime Minister Justin Trudeau called the 2025 federal election. Its actual reach went well beyond child safety. It targeted lawful internet content that authorities deemed “likely to foment detestation or vilification of an individual or group,” wording broad enough to sweep up political argument, satire, religious commentary, and journalism, depending on who was reading it. Breaking the rule carried fines of up to $70,000 or house arrest. Before C-63 there was Bill C-36, a 2021 effort to amend the Criminal Code along similar lines. Neither bill made it through. Both kept returning in slightly different forms. The Justice Centre for Constitutional Freedoms, Canada’s leading constitutional freedom organization, has launched a national campaign urging the Carney government to abandon the project entirely. The JCCF warned that the Online Harms Act would “dramatically expand government censorship powers, punish lawful expression online, and authorize preemptive restrictions on individual liberty.” “In doing so, it would represent a fundamental departure from Canada’s long-standing commitment to freedom of expression and due process,” the organization said. Preemptive restrictions, the legal mechanism the previous bill contained, mean punishing or silencing someone before they have said anything unlawful. Canadian courts have historically treated prior restraint as the most serious form of speech suppression. The revived framework appears to contemplate it as a feature. The chilling effect is already setting in. Writers, commentators, and small publishers in Canada began adjusting what they posted during the C-63 debate, well before any law took effect. The threat alone was enough to quiet a portion of online political speech. 
A reintroduced bill, backed by a majority government and an advisory panel stacked with people who see the internet as a venue that needs controlling, will only deepen that quiet. The Liberal government has said repeatedly that some version of Bill C-63 is coming back. What it has not said, in any substantive form, is who decides what counts as hate, what counts as harm, and what counts as the kind of speech a democracy is supposed to tolerate even when it finds it ugly. Those definitions will sit with the same government promising the law, and the same advisory group promising to help write it.

How the KYC Mandate Became a Biometric Heist

Regulators spent the last few years demanding that banks, crypto exchanges, and fintech apps collect face scans, ID photos, and biometric templates from every customer. They sold this as a defense against financial crime. What it actually built was a global inventory of the most sensitive identifiers a person has, stored across thousands of corporate databases, waiting to be breached. Now the stolen contents of those databases power a thriving economy that lets money launderers walk through the front door of major banks wearing someone else’s face. The push toward biometric access and digital ID verification did not stop the fraud. It supplied the raw material for it. A video reviewed by MIT Technology Review shows the consequence in miniature. Somewhere inside a Cambodian scam compound, a worker opens a popular Vietnamese banking app, taps through the login flow, and reaches the liveness check. He holds up a static photo of a woman who looks nothing like the 30-something Asian man whose account he’s accessing. The app asks him to adjust the face in the frame. Ninety seconds later, he’s in. That demonstration arrived from Hieu Minh Ngo, a former hacker turned cybersecurity advisor to the Vietnamese government who now investigates money laundering for an anti-scam nonprofit. The exploit he shared uses a virtual camera, a tool that replaces a phone’s live video feed with whatever the operator wants to show, whether a stolen photo, a deepfake, or a cardboard cutout of a stranger. Banks designed liveness checks to confirm a real person is sitting behind the screen. Virtual cameras make that confirmation meaningless. The facial templates feeding those cameras had to come from somewhere. They came from the KYC files that regulators required banks and exchanges to build.
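The virtual-camera bypass works because a liveness check can only inspect the frames it is handed; it has no way to prove where those frames came from. A minimal sketch of that architectural gap (hypothetical types and names, not any bank's actual code):

```python
from dataclasses import dataclass
from typing import Iterator


@dataclass
class Frame:
    face_present: bool
    blink_detected: bool


def liveness_check(frames: Iterator[Frame]) -> bool:
    # The check validates frame *content* only: a face that blinks.
    # Nothing here can verify the frames came from a physical camera.
    return any(f.face_present and f.blink_detected for f in frames)


def real_camera() -> Iterator[Frame]:
    # A genuine user in front of the lens.
    yield Frame(face_present=True, blink_detected=True)


def virtual_camera(stolen_video: list) -> Iterator[Frame]:
    # A virtual camera replays attacker-chosen frames (a stolen
    # liveness video, a deepfake) through the same interface.
    yield from stolen_video


# Both sources pass, because the check cannot tell them apart.
assert liveness_check(real_camera())
assert liveness_check(virtual_camera([Frame(True, True)]))
```

The point of the sketch is that challenge-response prompts ("adjust the face in the frame") do not help once the video source itself is attacker-controlled: the virtual camera simply supplies frames that satisfy the challenge.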
Face scans, passport photos, and liveness videos, once submitted to open an account, do not disappear. They accumulate in corporate archives governed by retention policies most users never read, and they leak through breaches, insider theft, and vendor compromises into the same Telegram marketplaces that sell the bypass kits. Your face is your face. Your passport photo is your passport photo. Once those templates circulate online, they circulate forever, feeding every future fraud attempt against every future KYC system you encounter. MIT Technology Review spent two months earlier this year cataloging that marketplace. The result was 22 public Telegram channels and groups operating in Chinese, Vietnamese, and English, hawking bypass kits and stolen biometric data to anyone willing to pay. Some had thousands of subscribers. The bio of the program used by the Cambodian launderer read, “Specializing in bank services—handling dirty money,” finished with a thumbs-up emoji. “Secure. Professional. High quality.” The channels advertised services with bullet points like “All kinds of KYC verification services” and “It’s all smooth and seamless,” paired with videos purporting to show real bypasses. Governments and financial institutions describe the underlying data hoarding as the price of stopping financial crime. The actual ledger looks different. Compliance requirements produced centralized collections of biometric data that cannot be changed after a leak. Passwords can be rotated. Credit cards can be reissued. Faces cannot. Every mandate that pushes more institutions to collect more biometric templates from more users expands the pool of permanently compromised identity data without giving anyone a way to claw it back.

Brussels’ New Age Verification App: Hacked in Two Minutes

The European Union’s age verification app arrived on Wednesday with a promise that it was “technically ready” for deployment across the bloc. Within hours, security researchers had torn it apart. Commission President Ursula von der Leyen presented the tool in Brussels as the answer to a continent-wide push to keep minors off social media and adult websites. “It is fully open source. Everyone can check the code,” von der Leyen said. Researchers took her at her word. What they found has turned the launch into exactly the kind of security embarrassment that should make anyone think twice about digital identity systems. Security consultant Paul Moore published a widely shared post on X documenting what he discovered after examining the GitHub repository. The app stores sensitive data on users’ phones and leaves it unprotected. Moore claimed he hacked it in under two minutes. Brussels is standing by its product. “Yes, it is ready. Maybe we can add, ‘and it can always be improved’,” Chief Spokesperson Paula Pinho told reporters Friday. Digital spokesperson Thomas Regnier added a revealing clarification. “Now, when we say it’s a final version, it’s … still a demo version.” He said the final product is not yet available for citizens and “the code will be constantly updated and improved … I cannot today exclude or prejudge if further updates will be required or not.” Moore led the technical takedown on X, describing the app’s architecture as broken at the foundation. The encrypted PIN the app stores locally, according to Moore, has no cryptographic link to the identity vault holding the actual verification data. That gap enables a bypass that requires no exploit code or specialized tools.
Delete a few specific values from the app’s configuration files, restart the app, set a new PIN, and the software happily hands over access to credentials that belong to the previous profile. Identity data gets reused under whatever access control the attacker defines. The weaknesses deepen from there. Rate limiting, the standard defense against someone trying PIN after PIN until one works, lives in the same editable configuration file as a plain counter. Set it to zero and the app forgets every failed attempt. The app’s failures extend past its bypass-friendly PIN system to something arguably more alarming for anyone who uploads a government ID. Identity documents processed through the app are not stored in encrypted form on the device, meaning the photos of passports, national IDs, and other verification images sit in accessible storage where any attacker with file access can pull them directly. Encryption at rest is standard practice for applications handling sensitive personal data. Banking apps do it. Password managers do it. Messaging apps do it for messages far less sensitive than a scanned passport. The EU’s age verification app, built specifically to handle government identity documents at scale, apparently does not. Biometric authentication is governed by a single true-or-false flag sitting in user-accessible storage. Switch it to false and the app skips fingerprint and face checks entirely. None of this requires breaking encryption or defeating hardware security. It requires a basic text editor. Moore did not mince words about where this leads. “Seriously, Von der Leyen – this product will be the catalyst for an enormous breach at some point. It’s just a matter of time,” he wrote. Developers responding to the teardown pointed out that modern smartphones ship with hardware specifically designed to prevent exactly this kind of tampering. 
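The missing “cryptographic link” is the crux of Moore’s finding. If the vault key were derived from the PIN itself, wiping configuration files and choosing a fresh PIN would yield a key that opens nothing. A minimal sketch of that standard binding, using a stdlib key-derivation function (illustrative only; the app’s actual crypto stack is not detailed in the teardown):

```python
import hashlib
import hmac
import os


def derive_vault_key(pin: str, salt: bytes) -> bytes:
    # The vault key is *derived from* the PIN via PBKDF2, so the key
    # cannot be reconstructed without the original PIN. Deleting
    # config values and setting a new PIN produces a different key
    # that cannot decrypt the previously sealed vault.
    return hashlib.pbkdf2_hmac("sha256", pin.encode(), salt, 200_000)


salt = os.urandom(16)
enrolled_key = derive_vault_key("4821", salt)   # set at enrolment

# The reported attack: reset state, pick any new PIN.
attacker_key = derive_vault_key("0000", salt)

# With the binding in place, the new PIN unlocks nothing old...
assert not hmac.compare_digest(enrolled_key, attacker_key)
# ...while the genuine PIN still reproduces the same key.
assert hmac.compare_digest(enrolled_key, derive_vault_key("4821", salt))
```

In the design Moore describes, the PIN merely gates the user interface while the vault remains reachable independently, which is why deleting a few values and restarting is enough to hand over someone else’s credentials.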
“Why did they not use the secure enclave?” one asked, referring to the isolated cryptographic processors Apple and Android devices use to protect sensitive authentication data from the rest of the operating system. The EU app stores its security controls in plain configuration files that any user with file access can modify. Other responses questioned the app’s basic premises. Why should age verification expire? Why cap the number of times someone can verify? “Why does proof of age have an expiration date? Once I’m over 18, I will always be over 18. I’m not turning any younger!” one developer wrote. The questions point to a system designed not just to confirm age but to track verification events over time, building a log of when and where citizens proved themselves to online services. Telegram CEO Pavel Durov jumped in with a sharper reading of the situation. Writing on his Telegram channel, Durov suggested the app’s vulnerabilities may be the feature rather than the bug. “Their age verification app was hackable by design — it trusted the device,” he wrote, calling trust in the device “instant game over” from any serious security standpoint. Durov sketched out what he sees as the likely trajectory. “Present a ‘privacy-respecting’ but hackable app… get hacked… remove privacy to ‘fix’ the app,” he wrote, describing the eventual outcome as “a surveillance tool sold as privacy-respecting.” The Telegram founder argued that this week’s breach revelations hand regulators exactly the justification they need for the next round of expansion. “Today’s ‘surprising hack’ just handed this excuse to them,” he wrote. Durov has been consistent on this point. He previously described Spain’s plans for mandatory social media age verification as a “dangerous new regulation and a doorway to public surveillance and mass data collection,” and sent a direct message to every Telegram user in Spain attacking Prime Minister Pedro Sánchez’s proposed under-16 social media ban. 
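Durov’s “it trusted the device” point can be made concrete: a control stored in user-writable configuration is read, not enforced, so flipping it requires nothing more than an editor. A toy sketch of the reported flag-and-counter design (hypothetical field names, not the app’s real schema):

```python
import json

# Reported design: security controls live in a user-writable config
# file, so "enforcement" means reading values the attacker can edit.
config = json.loads('{"biometric_required": true, "failed_attempts": 7}')


def app_allows_login(cfg: dict) -> bool:
    # Client-side gate: nothing outside this process verifies the flag.
    if cfg["biometric_required"]:
        return False  # would demand a fingerprint/face check here
    return True


# One edit with a text editor, no exploit code:
config["biometric_required"] = False   # skip biometrics entirely
config["failed_attempts"] = 0          # rate limiting forgets too

assert app_allows_login(config)
```

This is why developers in the thread pointed at the secure enclave: state held in a hardware-isolated processor, or verified server-side, cannot be rewritten by whoever controls the filesystem, whereas a plain JSON flag always can.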
The Spanish government responded by accusing Durov of spreading lies and undermining democratic institutions. Every one of these systems builds the same thing. A centralized or federated database of identity information tied to real people, accessed constantly, updated constantly, and breached eventually.