Reclaim The Net Feed

France’s ID Portal Hacked: 19 Million Records Up for Sale
reclaimthenet.org

French authorities have added another case study to the growing argument against centralizing citizen identity data. France Titres, formerly known as ANTS, operates the portal where residents apply for passports, national ID cards, residence permits, driver’s licenses, and vehicle registrations. On April 15, something broke inside that system. A week later, the Interior Ministry confirmed what anyone watching digital ID schemes has been saying about this exact architecture for years, and the scale on offer from the attacker makes the warning harder to wave away.

A threat actor using the aliases “breach3d” and “ExtaseHunters” appeared on criminal forums on April 16, claiming to have stolen between 18 and 19 million records from the agency’s internal systems. If accurate, that is roughly a third of France’s population sitting in a for-sale listing. The seller describes the haul as a fresh, structural compromise rather than a recycled dump, and is actively shopping it. Early French press reports, including Le Figaro, initially pegged the figure at around 12 million accounts before later estimates climbed. The government has not confirmed any number.

What the ministry has confirmed is a “security incident that may involve the disclosure of data from both individual and professional accounts.” Login credentials, full names, email addresses, dates of birth, unique account identifiers, postal addresses, places of birth, and phone numbers may all have been extracted. That combination is a starter kit for identity fraud, synthetic identity construction, and convincing phishing attacks against people who already expect email from French government domains.

The reassurances arrived on schedule. “The disclosure of data does not include additional data submitted during the various procedures, such as attachments,” the notice stressed. “This personal data does not allow unauthorized access to the portal account.” Both statements may be accurate. Neither softens the reality that a government agency holding some of the most sensitive identifiers a person possesses has just lost control of a meaningful portion of them.

The ministry has not said how many people are affected, who did it, or how they got in. What it has confirmed is that an investigation is running and that additional security measures have been put in place to keep the portal operating and improve data protection. Tightening the locks after the data has already left the building is a partial comfort at best.

A state that cannot keep the contents of its secure document portal secure is the same state currently pushing for backdoor access to end-to-end encrypted services and mandatory digital IDs for platform users. The pipeline from policy to breach disclosure is short.

This is the structural failure mode of national-scale digital identity. France Titres was not built as a surveillance tool. It was built to make bureaucracy function. The outcome is indifferent to intent. Consolidating the documents that define a citizen’s legal existence into one portal creates one target, and the value of that target grows with every data field the state decides to demand. A breach of France Titres is not a breach of a retail site. It is a breach of the infrastructure of French legal identity itself.

The incident fits a pattern that has become hard to overlook. Last week, France’s Education Ministry disclosed that attackers had pulled student data from the ÉduConnect platform after compromising a staff account in late 2025. In February, intruders reached into France’s National Bank Accounts File, exposing information tied to roughly 1.2 million bank accounts out of more than 300 million entries. Earlier this year, cybercriminals made off with 15.8 million medical records from a French government service for doctors. Four separate government-held databases, four separate failures, all involving records that citizens had no meaningful option to withhold.

The useful question is not whether France Titres will improve its defenses. It probably will. The question is why a government that has shown, repeatedly, that it cannot reliably protect data of this sensitivity keeps expanding the categories of data it demands from citizens, and keeps lobbying for access to data it does not yet hold.

Proponents of digital identity like to call these systems efficient and modern. The France Titres breach is a useful translation of what modern actually means here: the personal records that once sat on paper in locked filing cabinets in regional offices now live in databases reachable from anywhere on the internet by anyone resourceful enough to find a way in, and up for sale to anyone willing to pay for them.

Japan Jails a Man for Publishing Movie Spoilers
reclaimthenet.org

A Tokyo court just sent a man to prison for writing about movies in too much detail. The Tokyo District Court convicted 39-year-old Wataru Takeuchi of copyright infringement and handed down an 18-month prison sentence plus a 1 million yen (about $6,300) fine. His offense was running a website that published detailed, spoiler-heavy write-ups of popular films and series. Two pieces triggered the lawsuit, one about Godzilla Minus One and another covering the Overlord anime adaptation. Toho and Kadokawa Shoten brought the case jointly through the Content Overseas Distribution Association, known as CODA.

The Japanese law Takeuchi violated prohibits creating “a new work by making creative modifications to the original while preserving its essential characteristics.” What counts as preserving “essential characteristics” is exactly the kind of vague standard that gives prosecutors wide latitude to decide which writers get charged and which don’t. Takeuchi didn’t even write the offending posts himself. He administered the site. That was enough for prison time.

CODA’s case rests on an expansive theory: that combining transcribed dialogue, scene descriptions, and press images creates something functionally equivalent to watching the film, and that this discourages paying customers. “Numerous websites that extract text from movies and other content have been identified and are considered problematic as so-called ‘spoiler sites,’” CODA said. “While these actions tend to be perceived as less serious than piracy sites or illegal uploads that upload the content itself, they are clear copyright infringements that go beyond the scope of fair use and are serious crimes.” CODA acknowledges fair use exists, then defines any sufficiently thorough description as falling outside it. The line between legitimate commentary and criminal infringement becomes a judgment call made by rights holders and prosecutors, after publication, with prison as the penalty.

Takeuchi’s site made money, and that appears to have done a lot of the lifting in the prosecution. In 2023, ad revenue reportedly brought in 38 million yen (about $239,000). Monetization is the hook copyright enforcement loves because it strips away any pretense that the writer was engaging with the work for its own sake. But the logic cuts further than anyone involved seems willing to admit. Most professional entertainment journalism runs ads. Most reviews and recaps describe the plot. The question isn’t whether Takeuchi’s site was tasteful; it’s whether the Japanese state should be deciding how much description is too much, and then jailing people who get it wrong.

The chilling effect writes itself. Every entertainment writer in Japan now has to guess where the line sits between acceptable coverage and an 18-month sentence. The line isn’t drawn by statute. It’s drawn by CODA, by the studios, by whichever prosecutor takes the next case. Writers who can afford lawyers will play it safe. Writers who can’t will either stop writing or hope no one notices.

CODA has made clear this isn’t a one-off. The organization said it plans to “strive for the proper protection of copyrights and implement effective measures against similar websites.”

Turkey to Ban Anonymous VPNs
reclaimthenet.org

Turkey is moving to make anonymous VPN use illegal, and Proton VPN signups in the country have doubled as word spreads. The Turkish government’s plan, reported by local outlet Yeni Şafak, would outlaw unlicensed VPN services and require any approved provider to log what users do and turn those records over to Turkish authorities on request. A VPN that logs and reports isn’t really a VPN. It’s a second surveillance pipe pointed at the same people the government already watches.

Officials describe the measures as part of a package aimed at protecting children after school attacks in Şanlıurfa and Kahramanmaraş, with the attackers reportedly drawn to violent mobile games. Packaged alongside the VPN clampdown are parent-controlled “child SIM” lines and a cap on how many mobile numbers a single person can register. The child-protection wrapper is the sweetener, because the actual infrastructure being built, licensed VPN providers that log and disclose, reaches every adult in the country, not just children playing shooters on their phones.

Proton VPN General Manager David Peterson confirmed the signup spike and said the company is seeing connection blocks too, particularly on Vodafone. His guidance to Turkish users was practical rather than political: turn on Proton’s Stealth protocol, which disguises VPN traffic as ordinary internet traffic so it slips past filters, and switch on Alternative Routing, which reroutes connections when the usual paths are blocked. If the Proton VPN website itself is unreachable, the Android and iOS apps remain available through Google Play and the App Store, and the clients are also hosted on GitHub.

None of this is new territory for Turkey. The country has a history of internet shutdowns and targeted blocks, and Proton VPN is one of 27 providers whose websites are already restricted there.
In August 2024, Turkish ISPs moved against a raft of VPN providers and Proton recorded a 4,500% spike in signups. Last March, after the arrest of Istanbul Mayor Ekrem İmamoğlu and the throttling of major social platforms, signups jumped 1,100% over baseline. Vodafone Turkey, which controls roughly a third of the country’s mobile internet, has shown up repeatedly in these episodes, with Proton tracing past outages to carrier-level DNS manipulation rather than genuine technical faults.

What the licensing proposal would add is a legal ceiling on escape. Right now, Turkish users can route around blocks with an unapproved VPN and keep their browsing off the state’s books. A licensing regime closes that door by design. The only VPNs left standing would be the ones that agreed to keep records and hand them over. Anyone using something unlicensed would be breaking the law. The same population that turned to VPNs for anonymity would find that anonymity criminalized.

The privacy cost lands in two places. First, approved VPNs that log become a searchable history of what every Turkish user did online, who they talked to, what they read, and where they routed their traffic from. Second, once a licensing regime exists, the government gets to decide which providers qualify, and providers that refuse to log are simply excluded from the market. What results is a permission system with authorities holding the clipboard.

Peterson’s practical advice, install before you need it, use Stealth, route around blocks, sits in the gap this legislation is trying to close. Proton’s pitch is that a VPN that doesn’t log is the whole point of a VPN, and that circumvention tools will keep working whether or not a government licenses them. Turkey’s pitch is the opposite: approved means logged, unapproved means illegal. There is no third option being offered, which is usually the cue to ask why the option that protects users most is the one being removed.
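Carrier-level DNS manipulation of the kind Proton describes can often be detected by comparing the answers a carrier’s resolver gives for a hostname against those of an independent resolver reached over a trusted channel such as DNS-over-HTTPS. A minimal sketch of that comparison, with made-up addresses for illustration (none taken from the reporting above):

```python
# Sketch: flag likely DNS manipulation by comparing the A records from a
# local (carrier) resolver against those from an independent resolver.
# This function only does the comparison; fetching the records is left to
# whatever resolver client you use. All IPs below are illustrative.

def looks_manipulated(carrier_answers, independent_answers):
    """True when the carrier's answers share no address with the
    independent resolver's answers -- the usual blocking signatures are
    an empty answer (NXDOMAIN) or a sinkhole/block-page IP."""
    carrier = set(carrier_answers)
    independent = set(independent_answers)
    if not carrier:
        # Carrier returned nothing while the independent resolver
        # may still resolve the name.
        return bool(independent)
    return carrier.isdisjoint(independent)

# Carrier returns a sinkhole address; the independent resolver returns
# a different, real one (both invented):
print(looks_manipulated(["195.175.254.2"], ["185.70.42.45"]))  # True
# Answers overlap, so no manipulation signal:
print(looks_manipulated(["185.70.42.45"], ["185.70.42.45"]))   # False
```

One caveat on this heuristic: CDN-hosted domains legitimately return different addresses to different resolvers, so a disjoint answer set is a signal worth investigating, not proof of tampering on its own.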

The Opt-Out Button Is Decorative: A Guide to Hardening Your Browser
reclaimthenet.org

This post is for paid supporters of Reclaim The Net.

California Lawmakers Advance Bills to Impose AI Chatbot Censorship and Age Verification
reclaimthenet.org

California Assembly Bill 2023 and Senate Bill 1119 would hand the state two new levers over AI chatbot platforms: mandatory age verification for every user, and a set of state-defined content rules that operators must program their products to follow. Lawmakers advanced both bills a few weeks after amending them on March 26. They are effectively the same bill filed in each chamber, and together they build on the age verification system California erected with its operating system age assurance law. If passed, the requirements take effect on July 1, 2027.

Every operator of a “companion chatbot” would have to check ages through the Digital Age Assurance Act, the statute that routes age data through operating systems and real-time APIs. Once the platform knows you’re a minor, a separate set of rules kicks in. Conversation history must be deleted within 48 hours. Push notifications are banned between midnight and 6 a.m. and during school hours. Sessions are capped at one hour each, with a two-hour daily total. And the chatbot has to be engineered to avoid “excessively sycophantic” responses.

The state has now written itself a statutory definition of flattery. Under both bills, “excessively sycophantic” means sycophantic to an extent that is likely to have the substantial effect of subverting or impairing the user’s autonomy, decision-making, or choice. “Sycophantic” gets its own definition further down. California is reaching into the tone and personality of a conversational product and telling developers which registers of agreeableness are legal when a minor is on the other end.

The age verification piece is what makes everything else possible. You cannot apply minor-specific speech rules unless you know who is a minor, and you cannot know who is a minor without identifying everyone. That is how age gates work.
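Reduced to logic, the minor-specific limits the bills describe are a handful of checks. The sketch below is purely illustrative; the constants and function names are assumptions, not language from AB 2023 or SB 1119, and the school-hours notification ban is omitted because it depends on local calendars:

```python
from datetime import datetime, time, timedelta

# Illustrative encoding of the minor-specific limits described above.
# Names and structure are assumptions, not bill text.
BLACKOUT_START = time(0, 0)      # midnight
BLACKOUT_END = time(6, 0)        # 6 a.m.
SESSION_CAP = timedelta(hours=1)
DAILY_CAP = timedelta(hours=2)

def push_allowed(now: datetime) -> bool:
    """No push notifications during the midnight-6 a.m. blackout.
    (The bills also bar pushes during school hours, omitted here.)"""
    return not (BLACKOUT_START <= now.time() < BLACKOUT_END)

def session_may_continue(session_elapsed: timedelta,
                         daily_total: timedelta) -> bool:
    """One-hour session cap and two-hour daily total, both enforced."""
    return session_elapsed < SESSION_CAP and daily_total < DAILY_CAP

print(push_allowed(datetime(2027, 7, 1, 3, 30)))     # False: inside blackout
print(session_may_continue(timedelta(minutes=50),
                           timedelta(minutes=110)))  # True: under both caps
```

The point of spelling it out is how little of the regime is about code: the checks are trivial; the identification infrastructure required to know when to apply them is not.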
The practical effect of AB 2023 and SB 1119, if enacted, is that every Californian who wants to talk to a companion chatbot has to be age-assured first. The state’s existing OS-level age law does the identification. The chatbot bills connect the pipe.

Lawmakers are framing this as child protection. “AI chatbots can be powerful tools for learning, but right now, millions of children are using them with no guardrails and no guarantee of safety,” said Assemblymember Rebecca Bauer-Kahan, one of the authors, in the March press release announcing the amended bills. Senator Steve Padilla, who carries SB 1119, said the legislation is about balancing safety and innovation while keeping California at the front of the regulatory pack.

The child-protection frame is the one that consistently accompanies speech legislation, and it tends to do a lot of political lifting. Here, it’s being used to justify a structure that runs well beyond blocking sexual content or self-harm encouragement. The bills list specific categories of speech the chatbot must be designed to avoid producing for minors, including giving health advice, discouraging users from seeking outside help, and producing excessively sycophantic responses. Those are editorial decisions about the content and style of a product’s output, handed down by statute.

There is also the question of what happens to adults. Age verification does not sort users into “minors, regulated” and “adults, left alone.” It sorts them into “verified” and “verified.” Once a platform has built the infrastructure to check every user’s age by default, that infrastructure exists for every user. Anonymous and pseudonymous use of AI tools becomes harder to maintain when the operating system is the one handing over age bracket data at the point of access.

Session caps and notification blackouts are the quieter provisions, but they push in the same direction. They turn state regulators into product managers. Under the bills, it would be California law that a chatbot conversation is one hour long, that total daily use is two hours, and that the app can’t ping a teenager at 11:45 p.m. These are defensible parenting choices. They are unusual things to find in a statute book.

Enforcement runs through a private right of action inherited from SB 243, the companion chatbot law Governor Newsom signed in October 2025. That earlier law already requires operators to disclose when a user is interacting with AI, to implement suicide and self-harm protocols, and to provide additional protections for known minors. SB 243 took effect on January 1, 2026. AB 2023 and SB 1119 layer on top of it.

The bills are scheduled to move through committee over the coming months. The age verification and child safety requirements, if they make it to Newsom’s desk and win his signature, would take effect July 1, 2027.