Reclaim The Net Feed

@reclaimthenetfeed

X Challenges EU’s $140 Million Digital Services Act Fine in Court
reclaimthenet.org

If you're tired of censorship and dystopian threats against civil liberties, subscribe to Reclaim The Net.

X has filed a legal challenge against a $140 million fine the European Commission handed down in December, making it the first large company to contest the EU’s Digital Services Act (DSA) in court. The appeal, lodged at the EU’s General Court, argues X was denied due process and subjected to a biased enforcement process. This is a direct challenge to the Commission’s authority to define and punish disfavored speech at scale.

The DSA is a controversial mechanism. Under it, Brussels can fine tech companies up to 6% of their global annual revenue for failing to remove content the Commission decides is “disinformation,” “illegal,” or otherwise problematic. Who decides what those categories mean? The Commission does. The same body that writes the definitions also runs the investigations and levies the fines. There is no external review and no independent adjudication before penalties land.

X’s Global Government Affairs team didn’t soften its language: “This EU Decision resulted from an incomplete and superficial investigation, grave procedural errors, a tortured interpretation of the obligations under the DSA, and systematic breaches of rights of defense and basic due process requirements suggesting prosecutorial bias. X remains committed to user safety and transparency while defending our users’ access to the only global town square.”

The law leans heavily on non-governmental organizations to advise regulators on what content may cross the line under EU standards, then places extensive reporting and compliance obligations on platforms to act on that advice. Third parties help set the bar, the Commission enforces it, and companies that push back risk fines large enough to alter business decisions.

Alliance Defending Freedom International, which does a lot of good work, is supporting the legal challenge.
Its senior European counsel, Adina Portaru, didn’t hedge: “X is where millions of people go to freely express their views. This is a crackdown on X by authorities who view a free speech platform as a serious threat to their total control of online narratives. By targeting X, they are targeting the free speech of individuals across the world who simply want to share ideas online free from censorship.”

Portaru’s broader concern is the precedent. “If the Commission’s concentration of power goes unchallenged, it will further cement a highly problematic standard for speech control across the EU and beyond,” she said.

The Trump administration and congressional Republicans have both pushed back against the DSA. Speaking in December, while the fine was fresh, President Trump said, “Look, Europe has to be very careful. They’re doing a lot of things…Europe is going in some bad directions. It’s very bad for the people.”

Last week, House Judiciary Chair Jim Jordan said his committee is looking at legislation that would protect American companies from penalties under foreign speech laws. The committee has already released documents showing that the EU pressured tech companies to develop guidelines governing legal speech.

FBI Wins Court Ruling to Keep Twitter Payments Secret

A federal judge has handed the FBI a win in its effort to keep its payments to Twitter secret. On February 4, Chief Judge James Boasberg ruled that the bureau can keep secret the precise amounts it paid Twitter between 2016 and 2023 for complying with legal process requests. Judicial Watch, which had sued under the Freedom of Information Act, walked away empty-handed. We obtained a copy of the opinion for you here.

You may remember our earlier reporting on how the FBI was paying Twitter. The payments totaled at least $3.4 million between October 2019 and February 2021 alone. That figure emerged from the Twitter Files released in December 2022. The FBI has never confirmed it. Neither has Twitter. And now, thanks to Boasberg’s ruling, the quarterly breakdown that would show exactly when the money flowed, and how much, stays buried.

What were the payments for? Officially, reimbursements. Federal law requires agencies to compensate companies for the cost of responding to subpoenas, search warrants, and national security legal demands. The FBI was sending those requests to Twitter in volume. During the period leading up to the 2020 election, the FBI’s Elvis Chan and colleagues were holding weekly meetings with Twitter staff about “misinformation.” They were flagging accounts. They were flagging content. And they were being reimbursed for the legal paperwork that accompanied all of it.

The Trump DOJ, through US Attorney Jeanine Pirro’s office, filed for summary judgment in December 2025, arguing the payment amounts are shielded by FOIA’s Exemption 7(E). That exemption covers law enforcement techniques and procedures whose disclosure could help criminals evade detection. Boasberg agreed, accepting the government’s argument that quarterly payment figures, combined with Twitter’s own transparency reports, could let bad actors reverse-engineer where the FBI is looking and where it isn’t.
The logic is that if you know the FBI paid Twitter significantly more in Q4 2021 than in Q3, you might infer the bureau ramped up surveillance following a specific event. A foreign intelligence service could check whether its operation triggered a spike. Criminals could compare the FBI’s Twitter payments with what it pays other platforms and migrate accordingly. The government’s declarations assert this. Boasberg deferred to them, as courts in national security cases routinely do. The mosaic theory the FBI invoked is real, and courts have repeatedly credited it.

The problem isn’t the legal framework. The problem is what it conceals. The FBI was not simply investigating criminals during this period. It was meeting weekly with a private company’s content moderation team, flagging the accounts of vaccine skeptics, lab-leak researchers, people who questioned the 2020 election, and journalists covering Hunter Biden. The $3.4 million in payments flowed through that same relationship. The legal-process reimbursements and the content-flagging meetings ran in parallel, conducted by the same FBI personnel, aimed at the same platform, during the same politically charged window before a presidential election.

The quarterly payment breakdown Judicial Watch requested would have shown, at a minimum, whether FBI engagement with Twitter spiked during electorally sensitive periods. It would have let the public cross-reference the payment timeline with known events: the weekly misinformation meetings, the account flagging, the suppression of the Hunter Biden laptop story. That is exactly the kind of accountability information FOIA exists to surface. Instead, Boasberg ruled that the numbers stay hidden because releasing them might tell a foreign spy whether the FBI noticed something.
The court gave meaningful weight to the government’s national security declarations, as FOIA doctrine requires, and no weight at all to the public’s interest in understanding how the FBI was spending money on the platform it was simultaneously using as a content moderation partner. Judicial Watch had pointed out that Twitter’s semi-annual transparency reports already publish aggregate data on law enforcement requests. Boasberg acknowledged this but found that quarterly FBI-specific figures would add enough granularity to create a meaningful risk.

The detail that remains secret is not how the FBI monitors threats online. Everyone knows that. What remains secret is the scale and timing of the FBI’s financial relationship with a platform it was also directing to censor Americans.

The ruling does not find that the FBI acted properly during this period. It does not address the weekly misinformation meetings or the account flagging. It simply holds that the payment figures are protected law enforcement information, and that Judicial Watch gets nothing. Boasberg wrote that disclosure could “risk circumvention of the law.” The circumvention that went unexamined in his opinion is the one that may have already occurred.

Zuckerberg’s “Fix” for Child Safety Could End Anonymous Internet Access for Everyone

Mark Zuckerberg spent more than five hours on the stand in Los Angeles Superior Court on Wednesday, testifying before a jury for the first time about claims that Meta deliberately designed Instagram to addict children. The headline from most coverage was the spectacle: an annotated paper trail of internal emails, a 35-foot collage of the plaintiff’s Instagram posts unspooled across the courtroom, a CEO growing visibly agitated under cross-examination. The more important story is what Wednesday’s proceedings are being used to build.

The trial is framed as a child safety case. What it is actually doing, especially through Zuckerberg’s own testimony, is laying the political and legal groundwork for mandatory identity verification across the internet. And Zuckerberg, rather than pushing back on that outcome, offered the court his preferred implementation plan.

The “Addiction” Framing and What It Enables

The lawsuit was filed by a plaintiff identified as KGM, now 20 years old, who claims she began using Instagram at age 9 and that the platform’s design addicted her to it, worsening her mental health and contributing to anxiety, body dysmorphia, and suicidal thoughts. TikTok and Snapchat settled before trial. Meta and Google’s YouTube remain defendants. Over 1,600 related cases are pending nationally. This is big business, and a verdict here could set the template for all of them.

The case rests on a contested scientific premise: that social media is clinically addictive and that this addiction causes measurable harm. That premise drives the legal strategy, the media coverage, and the resulting policy agenda. It deserves scrutiny that most coverage is not giving it. The science is genuinely disputed, and we went into detail in a recent feature if you’re serious about understanding how these claims are created and weaponized.
None of this means the harms alleged are fabricated. It means the word “addiction” is doing heavy rhetorical and legal work, and the policy consequences flowing from that word go far beyond anything a jury in Los Angeles will decide. “Addiction” is how you get a public health emergency. A public health emergency is how you get emergency powers and make it easier for people to overlook constitutional protections. Emergency powers applied to the internet mean mandatory access controls. And mandatory access controls on the internet mean the end of anonymous and pseudonymous speech.

More: The Gospel of the Anxious Generation

When social media is classified as a drug, access to it becomes a medical and regulatory matter. Who uses it, how, and under what conditions becomes a question for authorities rather than individuals. Regulating an addictive product and regulating speech look different on paper. The mechanisms required to enforce either look identical in practice: identity verification, access controls, and a surveillance architecture that follows users across every platform and device.

The Section 230 Workaround

The trial’s structure is worth examining separately. Section 230 of the 1996 Communications Decency Act has long shielded platforms from liability for what users post. Plaintiff’s lawyers here found a route around it: they argue that the platform itself is a defective product. The claim is not about user content but about design choices: infinite scroll, auto-play, algorithmically amplified notifications, beauty filters linked to body dysmorphia. The lawsuit treats them like a car without brakes.

A verdict for KGM would hand plaintiffs in 1,600 other cases a tested legal theory for stripping Section 230 protection from platform design decisions. That is a significant restructuring of internet liability law, driven by trial lawyers, using a mental health crisis whose causes are still actively debated in peer-reviewed journals.
Zuckerberg was pressed with internal documents, including a 2015 estimate that 4 million users under 13 were on Instagram, roughly 30 percent of all American children aged 10 to 12. An old email from former public policy head Nick Clegg was read into the record: “The fact that we say we don’t allow under-13s on our platform, yet have no way of enforcing it, is just indefensible.” Zuckerberg acknowledged the slow progress: “I always wish that we could have gotten there sooner.” When pressed on the company’s age verification policies, he also told the jury: “I don’t see why this is so complicated.” His proposed answer to that question is the core problem.

Zuckerberg’s Blueprint: Let Apple and Google Check Everyone’s ID

Multiple times during his testimony, Zuckerberg argued that age verification should be handled not by individual apps but at the operating system level, by Apple and Google. He told jurors that operating system providers “were better positioned to implement age verification tools, since they control the software that runs most smartphones.” “Doing it at the level of the phone is just a lot cleaner than having every single app out there have to do this separately,” he said. He added that it “would be pretty easy for them” to implement.

Note that. Zuckerberg is not proposing that Instagram verify the ages of Instagram users. He is proposing that Apple and Google verify the identity of every smartphone user, for every app, at the OS level. Once that infrastructure exists, it does not stay limited to social media. It applies to every app on the phone, every website accessed through that phone’s browser, every communication sent through any app on the device. This is more than age verification. It is a national digital ID layer baked into the two operating systems that run the overwhelming majority of the world’s smartphones.

The proposal also solves Zuckerberg’s immediate legal problem.
If Apple and Google own age enforcement, platforms like Meta are no longer responsible for it. The liability shifts. The company on trial in Los Angeles deflects the core allegation by pointing at Cupertino and Mountain View.

Who decides which apps require ID verification once this infrastructure exists? Apple and Google do. They would be deputized as identity gatekeepers for the internet: two private companies, already under serious antitrust scrutiny for their control of app distribution, handed new authority over who accesses what online and under what identity.

The Regulatory Architecture Already Under Construction

Zuckerberg’s OS-level verification proposal fits neatly into a legislative agenda that was already moving before he took the stand Wednesday.

California’s SB 976, the Protecting Our Kids from Social Media Addiction Act, mandates age verification systems for social media platforms in the state. The California Attorney General must finalize implementation rules by January 2027. The Ninth Circuit has declined to rule on whether those requirements violate the First Amendment, saying it cannot assess the constitutional question until the regulations are finalized. Age verification for lawful online speech in California is advancing without a constitutional answer.

The Kids Online Safety Act (KOSA), pending at the federal level, would direct agencies to develop age verification at the device or operating system level, the same framework Zuckerberg promoted from the stand. KOSA also carries broad definitions of “harmful” content that leave moderation decisions subject to government influence, with no independent review. Age verification and content restriction in a single bill, with the government writing the definition of harm.

New York’s SAFE For Kids Act restricts algorithmic feeds for users who don’t complete age verification. Acceptable alternatives to submitting a government ID include facial analysis that estimates age.
Biometric data, collected to scroll a social media feed.

The infrastructure these laws require creates data that can be stolen, subpoenaed, and cross-referenced. A Discord breach last year exposed government-issued IDs submitted through the company’s age verification system, around 70,000 of them, with attackers claiming the number was higher. Every ID check database is a future breach waiting to happen.

Anonymous and pseudonymous speech online has real value. Whistleblowers. Abuse survivors. Political dissidents in hostile environments. People exploring medical questions or identities they are not yet ready to attach their legal names to. Journalists protecting sources. Anyone whose safety depends on a separation between their online presence and their government identity. Mandatory identity verification at the OS level ends all of that for everyone.

The stated goal is protecting 9-year-olds from Instagram. The mechanism ends anonymous internet access for every adult who owns a phone. Zuckerberg, under oath and under pressure, handed that mechanism a high-profile public endorsement. His lawyers will use it to deflect liability. Legislators will cite it in committee hearings. The Los Angeles trial will appear in bill summaries as evidence of urgent need.

The word “addiction” started this chain. Public health emergency, emergency powers, age verification, OS-level ID checks. Each step follows from the last. Each step is presented as protecting children.

The trial continues. KGM is expected to testify later in the proceedings.

EU Defends Censorship Law While Commission Staff Shift to Auto-Deleting Signal Messages

A senior European Union official responsible for enforcing online speech rules is objecting to what he describes as intimidation by Washington, even as his own agency advances policies that expand state involvement in digital expression and private communications.

Speaking Monday at the University of Amsterdam, Prabhat Agarwal, who leads enforcement of the Digital Services Act at the European Commission, urged regulators and civil society groups not to retreat under pressure from the United States. His remarks followed the February 3 release of a report by the US House Judiciary Committee that included the names and email addresses of staff involved in enforcing and promoting Europe’s censorship laws.

“Don’t let yourself be scared. We at the Commission stand by the European civil society organizations that have been threatened, and we stand by our teams as well,” Agarwal said, as reported by Politico.

The report’s publication came shortly after Washington barred a former senior EU official and two civil society representatives from entering the United States. European officials interpreted those moves as an effort to deter implementation of the DSA, the bloc’s flagship content regulation framework governing large online platforms.

The DSA establishes compliance obligations for major technology companies. Enforcement decisions, including the recent massive fine against X, depend on investigations by Commission staff and documentation submitted by outside organizations. By its own logic, Brussels maintains that this regulatory structure ultimately protects freedom of expression by reducing manipulation and abuse. The White House and members of Congress take a different view, arguing that the DSA creates formal channels for governments to pressure platforms to remove lawful speech.
Public figures such as Elon Musk have characterized the regime as institutionalized censorship.

Agarwal described his team’s work as facing growing resistance. Their work is “more difficult, more adversarial” than anticipated, he said. The broader dispute with Washington, he added, is “much bigger than the DSA itself,” explaining that “it has to do with the intellectual space that we [as Europeans] occupy.” Europe, he continued, must “defend a space in which we can actually debate things that are important for our society.”

Colleagues, he said, have shifted internal communications to Signal, using encrypted messages set to disappear automatically, with the “auto-delete timings getting shorter.” That is particularly interesting, because the same Commission that directs platforms to police speech and comply with extensive transparency reporting obligations is now relying more heavily on ephemeral messaging tools for its own internal discussions. Public officials operating under European transparency and access-to-documents rules are generally expected to conduct official business in ways that preserve records for potential freedom of information requests. Whether auto-deleting messages satisfy those obligations remains an open legal question.

The European Commission’s leadership circulated an internal email, later seen by reporters, assuring staff whose names appeared in the congressional report that the institution would protect them from threats. Yet Agarwal did not address the Judiciary Committee’s central allegation: that EU authorities have pressed US companies to moderate speech originating in the United States.

The controversy unfolds alongside other EU initiatives with significant privacy implications. The DSA includes age verification and risk mitigation requirements that can require platforms to collect additional user data.
Separately, Brussels is pursuing an expansion of so-called Chat Control measures, building on a 2021 temporary derogation from the ePrivacy Directive that permitted providers to voluntarily scan communications. That earlier measure did not mandate breaking end-to-end encryption, but proposals to broaden monitoring authority have generated concern among digital rights advocates, who view them as steps toward routine scanning of private communications. It is striking that the Commission’s leadership is shielding its own communications with the same technology it is otherwise seeking to destroy.

Transparency debates are not new within the EU institutions. Commission President Ursula von der Leyen has previously faced allegations related to deleted messages in the context of high-level negotiations, reinforcing longstanding disputes about record-keeping standards at the top of the EU executive.

Friedrich Merz’s Push to End Online Anonymity Has a Troubling Subtext

German Chancellor Friedrich Merz wants to end online anonymity. Speaking Wednesday evening at an event held by his conservative Christian Democrats in Trier, he called for mandatory real names across social media and floated a potential ban on platforms for users under 16. “I want to see real names on the internet. I want to know who is speaking,” Merz said. The framing is the usual one: protect democracy, protect children. What Merz left out is worth examining closely.

Germany’s criminal code is already a problem. Sections 185 through 187 criminalize insults, malicious gossip, and defamation against ordinary citizens. Those carry fines or prison sentences capped at two years for insults and malicious gossip, and five years for defamation. Section 188 covers the same offenses when directed at politicians, and the penalties are steeper across the board: three years maximum for insults, mandatory prison time with a five-year ceiling for malicious gossip (minimum three months), and mandatory prison time with a six-month floor and five-year ceiling for defamation. No fine option.

Politicians use these laws. Merz uses these laws. He has filed hundreds of complaints himself. CDU politicians and others flag thousands of posts to prosecutors annually, and German police conduct hundreds of raids each year over insults and alleged “hate speech.” The infrastructure for going after ordinary citizens who criticize their representatives already exists and is already in active use.

What a real-name mandate does is remove the last barrier between a critical post and a knock on the door. Right now, authorities have to work to identify anonymous speakers. With real names required by platform policy, that step disappears.

Merz framed his position as symmetry. “In politics, we engage in debates in our society using our real names and without visors.
I expect the same from everyone else who critically examines our country and our society.” But politicians operate with institutional resources, legal teams, and parliamentary protections. A citizen posting a pointed criticism of a public official from a personal account has none of that. They do have something, for now: the option to do it without their name attached. Merz wants to take that away.

He also criticized those who defend anonymity, saying they are “often people who, from the shadows of anonymity, demand the greatest possible transparency from others.” The characterization treats pseudonymous speech as inherently suspicious, which is one way to frame it. Another is that people have historically needed cover to say true things about powerful people without facing retaliation.

Merz warned that “enemies of our freedom, enemies of our democracy, enemies of an open and liberal society” were using algorithms and AI to run targeted influence campaigns, and that he had underestimated how effectively these tools could manipulate public opinion. He asked: “Do we want to allow our society to be undermined in this way from within, and our youth and children to be endangered in this way?”

It’s a pointed question. A more uncomfortable one: do we want to hand politicians whose parties already file mass complaints under insult laws a system that automatically links every critical post to a verified identity?