Reclaim The Net Feed

Meta Will Fight Ofcom Over the Math, Just Not the Censorship

Meta filed a judicial review against Ofcom in London’s High Court on Thursday over how the regulator calculates fees and fines under the Online Safety Act, the UK’s online censorship law. The company isn’t challenging the law’s censorship powers, its ability to compel scanning of encrypted messages, or its elastic definition of online “harm.” It is challenging the size of the fine.

The dispute centers on whether Ofcom should base penalties on Meta’s global revenue or only what it earns in the UK, and the gap between those two figures is enormous. Meta reported roughly $201 billion in worldwide revenue last year, and the Online Safety Act lets Ofcom fine companies up to 10% of “qualifying worldwide revenue,” which puts Meta’s theoretical penalty ceiling near $20 billion. Calculated on UK-only revenue, that number collapses.

“We and others in the tech industry believe [Ofcom’s] decisions on the methodology to calculate fees and potential fines are disproportionate,” a Meta spokesperson said. “We believe fees and penalties should be based on the services being regulated in the countries they’re being regulated in. This would still allow Ofcom to impose the largest fines in UK corporate history.”

Ofcom pushed back in a statement: “Disappointingly, Meta are objecting to the payment of fees, and any penalties that could be levied on companies in future, that are calculated on this basis.”

Trade body CCIA and Epic Games may also seek to intervene. Matthew Sinclair, CCIA UK Senior Director, said in a statement: “CCIA supports Meta’s challenge and intends to apply to intervene in order to assist the court in understanding the wider potential impact on the sector.” Ofcom plans to send the first invoices in September, and a full hearing is expected in October.

What makes Meta’s challenge so revealing is what it concedes. As far as we can tell, the company accepts the Online Safety Act and its power to curb American speech, despite the First Amendment. It accepts Ofcom’s authority to regulate speech, to investigate platforms, and to punish them. It is asking for a smaller bill, not a different regime.

Compare that to what the smallest platforms are doing. 4chan and Kiwi Farms, two US-based forums with no offices, employees, or servers in the UK, have refused to comply with the Online Safety Act entirely. When Ofcom fined 4chan £520,000 for failing to implement age checks and conduct risk assessments, the platform’s lawyer, Preston Byrne, responded by posting an AI-generated image of a hamster. “In the only country in which 4chan operates, the United States, it is breaking no law and indeed its conduct is expressly protected by the First Amendment,” Byrne wrote.

4chan has not paid a penny of its accumulated fines, and both it and Kiwi Farms filed a lawsuit against Ofcom in US federal court in August 2025, arguing that the regulator’s enforcement demands amount to unconstitutional foreign censorship on American soil. Their lawyers called Ofcom’s actions “egregious violations of Americans’ civil rights” and pointed out that every enforcement demand was sent by email, bypassing the UK-US Mutual Legal Assistance Treaty entirely. An FOI request revealed that Ofcom had issued 197 Section 100 notices to US-based companies as of February 2026, all without using MLAT. Only four American companies publicly refused to comply: 4chan, Kiwi Farms, Gab, and a mental health forum called SaSu.
Byrne called the 197 notices “breathtaking” and an “attack on the First Amendment,” and noted that Ofcom appeared to be enjoying a 98% compliance rate with demands that he argues have no legal force in the United States. The vast majority of American companies that received Ofcom’s demands appear to have quietly done what they were told.

The contrast between these approaches could not be sharper. Smaller platforms with limited resources are challenging the Online Safety Act’s legitimacy in federal court, fighting over whether a British regulator has any right to dictate what speech is allowed on American servers. Meta, a company that earned $201 billion last year and has spent years battling EU regulators through every available legal channel, does not challenge the censorship framework at all; it is only negotiating over the accounting method used to calculate its contribution.

The UK government has built a system where companies pay for the privilege of being censored, with the size of the payment scaled to how much money they make from publishing the speech that Ofcom regulates. Meta is the company that could most easily challenge the Online Safety Act’s surveillance and censorship powers, including the ability to compel scanning of encrypted messages that Signal has said would drive it out of the UK entirely. Yet it is asking the High Court to please use a different revenue column when working out the fee. The censorship apparatus the Online Safety Act built, and the speech tax that funds it, remain unchallenged by the very companies with the deepest pockets and the most to lose.
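To put the two revenue bases side by side, here is a minimal sketch of the 10% penalty ceiling under each methodology. The worldwide figure is Meta’s reported roughly $201 billion; the UK-only figure is a placeholder, since neither Meta nor Ofcom has published one.

```kotlin
// Minimal sketch of the Online Safety Act's penalty ceiling under the two
// methodologies in dispute. The UK-only revenue figure is a placeholder,
// not a number disclosed by Meta or Ofcom.
fun penaltyCeilingUsd(annualRevenueUsd: Double, capRate: Double = 0.10): Double =
    annualRevenueUsd * capRate

fun main() {
    val worldwideRevenue = 201_000_000_000.0 // Meta's reported worldwide revenue, USD
    val ukOnlyRevenue = 5_000_000_000.0      // hypothetical UK-only revenue, USD

    // Ofcom's basis: up to 10% of "qualifying worldwide revenue"
    println("Worldwide basis: %.1f billion USD".format(penaltyCeilingUsd(worldwideRevenue) / 1e9))
    // Meta's proposed basis: revenue from the regulated services in the UK only
    println("UK-only basis:   %.1f billion USD".format(penaltyCeilingUsd(ukOnlyRevenue) / 1e9))
}
```

Even with a generous placeholder for UK revenue, the ceiling drops by well over an order of magnitude, which is the whole dispute.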

European Commission Official Touts 17 Investigations as Proof the Digital Services Act “Delivers”

The European Union’s Digital Services Act is a censorship and surveillance law dressed in the language of safety. It gives unelected officials in Brussels the power to decide what hundreds of millions of people are allowed to say online, and it is building the infrastructure to verify their identities before they’re permitted to say it. But at POLITICO’s AI & Tech Week summit in Brussels this month, Renate Nikolay, the European Commission’s Deputy Director-General at DG CONNECT, celebrated the law’s growing enforcement record. Seventeen ongoing investigations and one non-compliance decision, she told the audience, prove the DSA “delivers.”

What the DSA delivers is pressure. Pressure on platforms to censor more speech, faster, with fewer questions asked. Pressure to open their algorithms and internal systems to government inspection without a court order. And, increasingly, pressure on individual users to prove who they are before they’re allowed to participate in public discourse online. Nikolay presented these enforcement numbers as proof of success. They are proof of something, but not what she thinks.

The DSA’s censorship powers are extensive and largely unchecked. The law requires platforms with more than 45 million monthly EU users to assess and mitigate “systemic risks,” a category that includes risks to “civic discourse,” “electoral processes,” and “public security.” The Commission decides what counts as a systemic risk. The Commission decides whether a platform’s response is sufficient. And when the Commission decides it isn’t, the Commission opens an investigation, gathers evidence, issues preliminary findings, and imposes fines of up to 6% of global annual revenue. There is no independent prosecutor, and no separation between the body that writes the rules and the body that punishes violations. The Commission is regulator, investigator, and judge.

The law also empowers the Commission to order “interim measures” while investigations are still underway, forcing platforms to change how they operate before anyone has established that they did anything wrong. It can demand access to platform algorithms, require changes to recommender systems, and order increased monitoring of specific keywords or hashtags.

At the same summit, the Commission’s Martin Harris-Hess declared that “2026 is the year of enforcement.” He explained that “when the DSA came to enforce, we had to build capacity, we had to build experience, we had to build understanding of how the platforms work.” The Commission now has 127 staff working on DSA enforcement, is hiring 60 more, and has launched investigations covering X, TikTok, Meta’s Facebook and Instagram, AliExpress, Temu, Snapchat, and several pornographic platforms. The building phase is over.

The single completed enforcement action, a €120 million fine against X in December 2025, targeted the platform’s blue checkmark system, its advertising repository, and researcher access to data. X has appealed the fine to the General Court of the European Union, arguing prosecutorial bias and due process violations. The politically dangerous investigation, the one that goes to the heart of what the DSA actually is, remains open. That probe, launched against X in December 2023, examines the platform’s handling of “illegal content” and “information manipulation.” Neither term has a fixed legal definition under the DSA. The Commission gets to interpret both.
“Information manipulation” could mean a coordinated bot campaign. It could also mean a viral post that the Commission finds politically inconvenient. The law does not distinguish between the two because the people who wrote it did not want it to.

“Systemic risk” is the DSA’s most powerful and most dangerous concept. Platforms must assess risks to “civic discourse” and then mitigate them. The effect is now global, not just European. The US House Judiciary Committee published reports documenting how the Commission used the DSA and earlier informal pressure campaigns to force platforms into changing their worldwide content moderation rules. Subpoenaed documents showed TikTok rewriting its global community guidelines specifically to “achieve compliance with the Digital Services Act.” The new rules censor “marginalizing speech,” “coded statements” that “normalize inequitable treatment,” and “misinformation that undermines public trust.” These categories are so vague that almost any political statement could trigger them. And because TikTok applies its guidelines globally, a censorship regime designed in Brussels now determines what users in São Paulo, Lagos, and Los Angeles can post.

The privacy dimension of the DSA is at least as alarming as the censorship dimension, and the two are converging. The Commission is now pushing age verification requirements under the DSA that would require platforms to collect identity data from users before granting access to certain content. Nikolay herself and enforcement chief Prabhat Agarwal recently held a press conference explaining plans to use verification systems linked to the EU Digital Identity Wallet, a digital ID that EU countries are expected to implement by the end of 2026. The wallet would let users manage their identity, educational qualifications, driver’s licenses, and other personal attributes from a single app. Five member states are already testing the system.

At the summit, Harris-Hess previewed this trajectory when discussing potential social media bans for minors. He said a ban is “legally” possible but cautioned that “ban is not the right word” because the term is “emotionally laden.” He preferred “age-related restrictions to accessing certain services.” The Commission has developed an entire vocabulary for softening what its powers actually do. Censorship becomes “content moderation.” Surveillance becomes “verification.” A ban becomes “age-related restrictions.” Government control of speech becomes “platform accountability.” The phrases obscure the same underlying reality: the state is deciding who gets to speak, what they’re allowed to say, and whether they must identify themselves before saying it.

Flemish Minister for Brussels and Media Cieltje Van Achter offered the summit a rare moment of candor. The EU is taking steps to enforce the DSA, she said, but “we’re not seeing the result yet on the social media platform.” What she wants, she said, is for social media to be safe “in real life, in practice.” She also noted that if existing age thresholds can’t be enforced, raising them accomplishes nothing.

The observation cuts deeper than she may have intended. The Commission is building a massive regulatory apparatus, hiring hundreds of enforcement staff, launching investigation after investigation, and the politicians who wanted the law admit they can’t see the difference it’s making. The enforcement machine is growing, but the problem it claims to solve remains unsolved. That is the pattern with censorship regimes. The apparatus always expands.

Google Broke reCAPTCHA for De-Googled Android Users

Google has tied its next-generation reCAPTCHA system to Google Play Services on Android, meaning anyone running a de-Googled phone will automatically fail verification when the system decides to challenge them. The requirement forces Android users to run Google’s proprietary app framework, version 25.41.30 or higher, just to prove they’re human.

When reCAPTCHA flags what it considers suspicious activity, it abandons the old image puzzles and demands you scan a QR code. That scan requires Play Services running in the background, communicating with Google’s servers. If you’re using GrapheneOS or any other custom ROM that strips out Google’s software, the verification fails.

Google announced the broader system, Google Cloud Fraud Defense, at Cloud Next on April 23, pitching it as a trust platform designed to handle autonomous AI agents and traditional bots alike. What Google didn’t emphasize was the part where proving you’re human now requires submitting to its proprietary surveillance. This wasn’t sudden, either. An Internet Archive snapshot from October 2025 shows the same support page already listing a Play Services requirement at version 25.39.30. Google built this dependency quietly for at least seven months before a Reddit user on the degoogle subreddit flagged it, with reporting from PiunikaWeb and Android Authority bringing wider attention.

The iOS comparison is revealing: Apple devices running iOS 16.4 or later complete the same verification without installing any additional apps. Google didn’t demand iPhone users install Google software to pass the test. Only Android users who refuse Play Services get locked out. The asymmetry reveals what this is really about: not security, but ecosystem control.

reCAPTCHA sits in front of millions of websites. When Google ties verification to Play Services, it establishes a precedent where accessing basic web content requires running Google’s software and transmitting data to Google’s servers. People running de-Googled phones chose those setups because they read the data practices, understood what Play Services phones home about, and decided they didn’t consent. Google’s new system punishes that decision by treating the absence of its proprietary software as suspicious by default.

Web developers adopting this reCAPTCHA should understand what they’re choosing. Every site that implements it tells de-Googled Android users they’re not welcome. That’s a small audience today. It’s also the audience most likely to care about how a website treats their data, and the least likely to capitulate.
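For readers wondering what the failure actually looks like on-device, here is an illustrative sketch, not Google’s actual reCAPTCHA code (which is not public), of the standard Play Services availability gate an Android app can apply. On a stock phone the check returns SUCCESS; on GrapheneOS or any ROM that strips Play Services, it returns SERVICE_MISSING, or SERVICE_VERSION_UPDATE_REQUIRED when the installed version is below the app’s floor, and the flow dies before any QR challenge can complete.

```kotlin
import android.content.Context
import com.google.android.gms.common.ConnectionResult
import com.google.android.gms.common.GoogleApiAvailability

// Illustrative only: this is the stock Google Play Services availability
// check, not reCAPTCHA's internal implementation. A de-Googled ROM has no
// com.google.android.gms package, so the status is SERVICE_MISSING; a device
// with Play Services older than the app's required floor (the article cites
// 25.41.30 for reCAPTCHA) gets SERVICE_VERSION_UPDATE_REQUIRED instead.
fun playServicesGateOpen(context: Context): Boolean {
    val status = GoogleApiAvailability.getInstance()
        .isGooglePlayServicesAvailable(context)
    return status == ConnectionResult.SUCCESS
}
```

Any website that fronts its content with this reCAPTCHA inherits that gate, whether its developers realize it or not.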

Paris Prosecutors Move to Criminally Charge Musk and xAI

Paris prosecutors announced Thursday that their investigation into Elon Musk’s social platform X has been upgraded to a full criminal probe. The Paris prosecutor’s office is now asking investigating magistrates to formally charge Musk, former X CEO Linda Yaccarino, and three companies linked to the platform, including xAI and X.AI Holdings Corp. If they refuse to appear to face those charges, prosecutors say judges can issue warrants that carry the same legal weight.

The charges cover a long and growing list of alleged offenses:

- Complicity in possessing and distributing sexual images.
- Nonconsensual sexually explicit deepfakes.
- Denial of crimes against humanity.
- Fraudulent extraction of user data.
- Violation of the secrecy of electronic correspondence.
- Manipulation of an automated data processing system as part of an organized group.
- Illegal collection of personal data without adequate security.

The announcement came just three weeks after the US Department of Justice refused to cooperate with the French investigation, calling it an attempt to regulate American speech through foreign criminal law. France pushed ahead anyway.

A speech case wearing a criminal costume

The investigation did not begin with deepfakes or child safety. It began with politics. French Member of Parliament Éric Bothorel, a member of President Macron’s centrist Renaissance party, filed a complaint in 2025 alleging that X’s algorithm had been manipulated for the purpose of “foreign interference” in French politics. Bothorel accused the platform of narrowing “diversity of voices and options” after Musk’s takeover and cited Musk’s “personal interventions” in moderation decisions. A second complaint, from a senior official in French public administration, alleged the same thing, claiming to observe a surge of “hateful, racist, anti-LGBTQ” content aimed at skewing democratic debate.

The theory of the case converts an editorial choice into a crime. Every platform’s algorithm is an editorial product. It decides what content gets amplified and what gets buried. When a government prosecutes a platform owner because it doesn’t like how that algorithm ranks political speech, it is asserting the power to dictate how information reaches the public. That is censorship by prosecution.

By July 2025, prosecutors wanted access to the algorithm itself to examine it for “bias.” X refused. The company called the probe “politically motivated” and said it would not comply with demands to hand over its recommendation system for state inspection. Then the investigation expanded and the charges got heavier.

How serious charges get stacked onto a political case

In November 2025, Grok, the AI chatbot built by xAI and integrated into X, generated French-language posts questioning the use of gas chambers at Auschwitz-Birkenau. The Auschwitz Memorial condemned the output. X deleted the post. Grok attributed the error to a programming mistake. Holocaust denial is a criminal offense in France.

In late December 2025 and early January 2026, Grok’s image generation capabilities were widely abused by users to create nonconsensual images of women in bikinis. xAI restricted image generation to paid subscribers on January 9 and said it had blocked nudification capabilities by January 14. Prosecutors added these allegations to the existing investigation.

This is how speech prosecutions work in modern Europe.
You start with an accusation about algorithms and political content. You add serious criminal charges later. The original political motive gets buried under the weight of the new allegations, and anyone who questions the prosecution can be accused of defending child exploitation. The charges provide cover. The algorithm complaint provides the engine. The prosecutors’ own statement from February described the investigation as having “the objective of ultimately ensuring the compliance of the X platform with French law.” That’s compliance with the state’s vision of how a platform should operate.

The raid, the no-show, and the DOJ

On February 3, 2026, the Paris prosecutor’s cybercrime division raided X’s offices in Paris alongside French national police and Europol. X called the raid “an abusive act of law enforcement theater designed to achieve illegitimate political objectives.” Musk called it “a political attack.”

Both Musk and Yaccarino were summoned for “voluntary interviews” on April 20. Neither appeared. Under French law, prosecutors can issue arrest warrants for suspects who skip voluntary interviews, which makes the word “voluntary” carry less meaning than advertised.

Two days before those interviews were scheduled, the US Department of Justice sent French law enforcement a two-page letter refusing to help. “This investigation seeks to use the criminal legal system in France to regulate a public square for the free expression of ideas and opinions in a manner contrary to the First Amendment of the United States Constitution,” the letter stated. The DOJ added that France’s three requests for assistance in 2026 “constitute an effort to entangle the United States in a politically charged criminal proceeding aimed at wrongfully regulating through prosecution the business activities of a social media platform.”

An xAI official responded publicly. “We are grateful to the Justice Department for rejecting this effort by a prosecutor in Paris to compel our CEO and several employees to sit for interviews,” the official told the Wall Street Journal. “We hope the Parisian authorities will now come to their senses, recognize that there is no wrongdoing here, and terminate their baseless investigation.”

The DOJ’s letter puts American mutual-assistance treaties off the table for European speech prosecutions. France was the first to find out.

The FCC Wants Your ID Before You Get a Phone Number

The era of the anonymous phone number could be ending. On April 30, the Federal Communications Commission unanimously approved a proposal requiring telecom providers to verify customers’ identities before activating service. Government-issued ID, physical address, legal name, and existing phone numbers would all be part of the verification record. The stated goal is stopping robocalls. The result would be an identity-verification regime covering one of the last semi-anonymous communication tools available to ordinary Americans.

The proposal applies to nearly every voice provider in the country, from traditional carriers and mobile operators to VoIP services. The FCC is seeking public comment on specifics, but the direction is clear.

FCC Chairman Brendan Carr framed it around negligent carriers. “As we have continued to investigate the problem of illegal robocalls over the last year, it has become clear that some originating providers are not doing enough to vet their customers, allowing bad actors to infiltrate our U.S. phone networks,” he said. Some providers, he added, “do the bare minimum (or worse) and have become complicit in illegal robocalling schemes.” That language targets telecom companies, but the surveillance targets everyone else.

The framework borrows from banking’s anti-money-laundering rules. The FCC is also asking whether carriers should retain identity documentation for at least four years after a customer leaves and whether they should check customers against law enforcement watchlists. Penalties would shift to a per-call basis, meaning fines of $1,000 to $15,000 for every illegal call a poorly verified customer places.

The real privacy stakes sit in the proposal’s section on prepaid service. Right now, you can pay cash for a prepaid phone and SIM card without showing identification. Journalists use prepaid phones to protect sources, domestic violence survivors use them to avoid being traced, and whistleblowers, activists, and anyone else with a reason to separate phone activity from legal identity rely on the same anonymity.
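To get a feel for what the shift to per-call penalties would mean, a quick worked example; the campaign size below is hypothetical, while the $1,000 to $15,000 per-call range comes from the proposal.

```kotlin
fun main() {
    val illegalCalls = 100_000L  // hypothetical robocall campaign size
    val minPerCall = 1_000L      // proposed per-call fine, lower bound (USD)
    val maxPerCall = 15_000L     // proposed per-call fine, upper bound (USD)

    // One mid-sized campaign routed through a poorly verified customer:
    // 100,000 calls expose the provider to $100M to $1.5B in liability.
    val low = illegalCalls * minPerCall
    val high = illegalCalls * maxPerCall
    println("Exposure: ${low / 1_000_000} million to ${high / 1_000_000} million USD")
}
```

That scale of exposure is why the identity-verification mandate would reach every customer, not just suspected robocallers: the carrier’s cheapest defense is to demand ID from everyone.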