Reclaim The Net Feed

@reclaimthenetfeed

Federal Judge Blocks Arkansas Social Media Law on First Amendment Grounds
reclaimthenet.org

If you're tired of censorship and dystopian threats against civil liberties, subscribe to Reclaim The Net.

A federal judge blocked Arkansas Act 900 today, one day before the law was set to take effect, handing the state its second courtroom defeat in the same fight over who gets to decide what people can see and say online. We obtained a copy of the order for you here. US District Judge Timothy L. Brooks granted NetChoice’s motion for a preliminary injunction, freezing enforcement of a statute that would have imposed strict liability on social media platforms for a growing list of “addictive practices,” forced default settings on anyone in Arkansas the platform couldn’t verify as an adult, and required platforms to build parental dashboards tracking minors who don’t even have accounts. The ruling came in the Western District of Arkansas, Fayetteville Division.

The First Amendment problem is obvious. The government wrote a law that restricts what platforms can say, who they can say it to, and when. It restricts what minors can see and post. Then it backed those restrictions with $10,000-per-day fines and rules so vague that platforms cannot tell in advance what will trigger liability. Each of those features is a constitutional problem on its own. Act 900 combined all of them.

Act 900 was Arkansas’s second try. The first, Act 689 of 2023, was permanently enjoined by the same court last year on First Amendment and vagueness grounds. That appeal is still pending before the Eighth Circuit. Rather than wait for the appellate ruling, the Arkansas General Assembly passed Act 900 to patch the definitional problems and layer on new obligations. Judge Brooks found the new version suffers from the same constitutional defects, and in some places, worse ones.

“Addictive features” is the new framing for the old project

The language has changed in the last two years. The first wave of state social media laws talked openly about content.
Legislatures tried to regulate “harmful” posts, “misinformation,” and categories of speech they wanted gone. Courts kept striking those laws down. The speech was protected, the definitions were vague, and the state’s role in deciding what counted as harmful was obviously the problem.

The new framing is “addictive design.” The theory goes like this: The government is not regulating speech. It is regulating the features that deliver speech. That includes notifications, recommendations, infinite scroll, algorithmic feeds, and the little hit of validation when a post gets likes. The argument is that these are engineering choices, not editorial ones, so the First Amendment is not really in play.

This is a convenient reframing. It lets legislators tell the public they are addressing child safety while avoiding the cases that blocked the old laws. It lets them avoid the word “censorship” while building tools that do the same work. Notifications carry speech. Recommendations are editorial decisions about what content a user sees. Algorithmic feeds are the platform’s curation of protected expression. Regulating those features is regulating how platforms speak and how users receive that speech, and the Supreme Court said so directly in Moody v. NetChoice.

Judge Brooks saw the move for what it is. The opinion identifies Act 900’s addictive practices rule as a content-neutral pretext that collapses on contact with the actual text. The statute does not define “addictive.” It does not define “compulsive behaviors.” It lists notifications, recommended content, and “artificial sense of accomplishment” as examples, which covers virtually every design choice any modern website makes. A platform is liable if a single minor develops compulsive behavior in response to anything the platform does, on or off the platform, whether the company could have foreseen it or not.
It is a license for the state to decide which design choices, and therefore which speech-delivery mechanisms, it will permit. A regulator who doesn’t like recommendation algorithms can call them addictive. A regulator who doesn’t like notifications about political content can call them addictive. The word does whatever the enforcer needs it to do. Arkansas’s own example provisions make the point. “Artificial sense of accomplishment” is not a legal term. It’s a mood.

The court’s language on this is sharp. The provision “fails to specify a standard of conduct to which platforms must conform[,] and its violation entirely depends upon the sensitivities of some unspecified user.” That is the design. The enforcer decides which user, which sensitivity, and which feature triggered the unacceptable response. The platform learns the rule by being punished for breaking it. In the meantime, every platform has to guess which features to strip out, which speech to throttle, and which audiences to wall off. The safest move is to do less, show less, recommend less, reach fewer people. That is a speech outcome, produced by a statute the state insists is not about speech.

The “addiction” frame also picks winners. Platforms that already serve mostly adults, or already have the infrastructure to age-verify and surveil, can absorb the compliance cost. Smaller platforms cannot. Nextdoor told the court it would block every Arkansan under 16 rather than try to comply, as it already has in Texas, Mississippi, and Tennessee. The result is a narrower internet, with fewer voices, and a regulatory structure that favors the largest incumbents, achieved through a law that claims to be protecting children from those same incumbents.

The “default” provisions fail for a different constitutional reason. One required platforms to silence non-safety notifications for Arkansas minors between 10 p.m. and 6 a.m.
The other mandated the most restrictive privacy and safety settings available as the default for minor accounts. Both provisions burden speech. Platforms communicate through notifications. Privacy settings control who can see whose posts and whose posts a user can see, which the Supreme Court has long recognized as speech within the First Amendment’s protection. The government cannot impose those kinds of restrictions unless the law is narrowly tailored to a significant government interest. Act 900 flunks the tailoring analysis on both provisions.

The nighttime notification rule would effectively silence platform notifications for every Arkansas user the platform cannot confirm is an adult account holder, for a third of the day. Parents are free to override the setting anyway. If parents wanted their children to sleep, Judge Brooks noted, they could take the phones away. The state offered no evidence that parents lack the ability to do so. The law silences speech without advancing the interest the state invoked to justify it.

The privacy default is worse. Anyone can change it, including the minor the law is supposedly protecting. The court called it “wildly underinclusive,” because the statute “in effect, allows children to decide whether they need protection from sexual exploitation online because they are free to depart from the protective default.” The provision burdens platforms’ speech across the board while accomplishing nothing for the children it claims to shield.

The court’s conclusion on the defaults is the line NetChoice’s Paul Taske quoted in his statement: “Imposing small burdens on vast quantities of speech for no appreciable benefit is not consistent with the First Amendment. Arkansas cannot sentence speech on the internet to death by a thousand cuts.”

The compelled speech problem: forced surveillance dressed as disclosure

The dashboard provision produced the strangest result in the opinion.
Because Act 900 defines a “user” as someone who views content but isn’t an account holder, the requirement that platforms build a parental monitoring dashboard for “minor users” would force platforms to identify every minor who visits the site, collect identifying data, locate their parents, and track usage across devices. Compelled speech ordinarily triggers strict scrutiny. Arkansas asked the court to apply the more lenient Zauderer standard for mandated commercial disclosures. The court didn’t need to resolve the dispute because, even under the easier test, the provision fails. Forcing platforms to compile surveillance infrastructure on every minor visitor, identify each one’s parents, and enforce parental restrictions across devices is unduly burdensome by any measure. The court found it “likely to chill platforms’ dissemination of speech to or from anyone who is not an account holder.”

Judge Brooks quoted Packingham on the central constitutional point: “the government ‘may not suppress lawful speech as the means to suppress unlawful speech.’” The Supreme Court struck down a North Carolina law in that case for barring registered sex offenders from accessing commonplace social media sites. Act 900 isn’t as broad, but it operates on the same logic. Protect children from online predators by restricting the speech of everyone the state classifies as a child, and by extension, the platforms’ ability to speak to them.

NetChoice and the state’s response

NetChoice’s lead counsel on the case framed the ruling bluntly. “Once again, the District Court hit the nail on the head. Left to its own devices, Arkansas would ‘sentence speech on the internet to death by a thousand cuts,’” NetChoice Litigation Center Co-Director Paul Taske said in a statement. He added that “Act 900 is deeply flawed. It burdens speech without providing any upside.”

The legal stakes extend beyond Arkansas. The court’s vagueness analysis tracks a recent Ninth Circuit decision, NetChoice v.
Bonta, which reached similar conclusions about a California law with comparable language. Write the rule vaguely enough that platforms cannot know what compliance looks like, impose strict liability with daily penalties, invoke child safety to justify the structure, and let the enforcement discretion do the rest. The result is that platforms either surveil their users to comply, restrict access to avoid the risk, or self-censor to stay inside whatever the enforcing authority decides the vague terms mean this month. Each option damages speech. The First Amendment treats laws built this way as constitutionally unserious, regardless of how the sponsoring legislature frames them.

On irreparable harm, the court applied the Eighth Circuit standard and found it met. “The loss of First Amendment freedoms, for even minimal periods of time, unquestionably constitutes irreparable injury.” The state’s interest in enforcing its statute does not outweigh the public’s interest in not having speech silenced under an unconstitutional law.

Arkansas has already filed interlocutory appeals on both the Act 689 permanent injunction and the Act 901 preliminary injunction. A third appeal is likely. For now, Act 900 does not go into effect tomorrow.

The post Federal Judge Blocks Arkansas Social Media Law on First Amendment Grounds appeared first on Reclaim The Net.

Sam Altman’s World ID Expands Biometric Identity Checks
reclaimthenet.org

A biometric identity system built on iris scans is expanding into mainstream online services while its backers outline new ways to tie verified identity to revenue generation. The initiative, led by OpenAI CEO Sam Altman, introduced its standalone World ID app in public beta on April 17. The app separates identity management from the existing World App crypto wallet and is described as a tool to “verify with platforms and services, manage your authenticators, store credentials and control how your World ID is used.”

The rollout comes as the organization reports more than 18 million people across 160 countries have already been verified using its Orb devices, which scan a person’s iris to create a unique identifier. Deployment of Orb devices is increasing, with additional coverage planned across New York, Los Angeles, and San Francisco. An “Orb-on-demand” service is also being introduced, allowing individuals to schedule iris scans at locations of their choosing. This approach extends biometric collection into more varied settings. Greater accessibility may encourage uptake, though it also increases the number of environments where highly sensitive biological data is captured.

At a recent event, the organization described its broader ambition as embedding its verification tech across the internet, stating the goal is to get its “proof-of-human” system into “every website and app” on the open internet.

A wider push toward digital ID checks

The expansion aligns with a broader movement across the tech sector toward routine identity verification. Platforms are introducing checks framed around safety, fraud prevention, and authenticity, gradually normalizing the idea that access to services may require proof of identity rather than anonymous or pseudonymous participation.
More: The Age Verification Con

World’s model places biometric verification at the forefront of this trend. By tying a persistent identifier to a person’s physical characteristics, the system enables repeated checks across different services without requiring separate verification processes each time. This creates a form of continuity across platforms. While presented as a way to reduce bots and misuse, it also consolidates identity into a reusable credential that can follow individuals across contexts, limiting the ability to compartmentalize online activity.

Revenue model tied to identity verification

The company’s financial framing links this identity layer directly to monetization. According to its own materials, World ID could increase average revenue per user by improving trust signals and conversion rates. A central proposal involves offering a “verified human” tier to advertisers, with higher pricing based on confirmed human impressions. The company states that “advertisers whose conversions come from verified humans can better measure their marketing ROI, which justifies sustained or increased spend,” and that “an ad network that can prove its impressions reach real people will command the budgets.”

Connecting biometric verification to advertising performance introduces an incentive structure where platforms may favor or prioritize verified users. Over time, this can influence how content is distributed and how users are treated within digital ecosystems.

Integrations across major platforms

The system is being embedded into a range of widely used services:

Zoom is adding a feature called Deep Face, which compares a live video feed to a cryptographically signed image captured during Orb verification. Hosts can require participants to pass a “Deep Face Waiting Room,” and users can request checks during calls, adding a “Verified Human” badge.

DocuSign plans to integrate World ID into its document signing process, linking identity verification with legally binding agreements.

Match Group’s Tinder now offers global integration, allowing users to display a verified badge and receive temporary in-app perks.

Okta is developing a “Human Principal” system, with World ID used to confirm that automated actions are tied to a real person.

Vercel has integrated verification steps into developer workflows, allowing identity checks to be logged and audited. This integration appeared shortly before reports of a security breach affecting the platform, drawing attention to the sensitivity of systems that centralize identity data.

Browserbase and Exa are incorporating World ID to distinguish verified agents, offering reduced friction and additional access tied to confirmed human identities.

These integrations position identity verification as a condition for participation across services rather than a background process.

Ticketing and offline use cases

The system is also extending into physical-world scenarios. A “Concert Kit” tool enables platforms such as Ticketmaster and AXS to reserve tickets for individuals who have verified their identity. Linking biometric verification to ticket access connects identity status with participation in high-demand events, shaping how access is allocated.

The organization has outlined 13 industries where it believes its system should be deployed, including social media, eCommerce, banking, government services, and travel. Across these sectors, identity verification is presented as a response to bots, fraud, and misuse. At the same time, it introduces a persistent identifier that can operate across multiple domains. Examples include:

Advertising, addressing “fake impressions and clicks.”

Dating, targeting fraudulent profiles and scams.

Government services, framed as a tool against benefits fraud.
Each use case depends on linking activity to a verified individual, reducing separation between different areas of a person’s digital life.

A system built on biometric permanence

The process is based on the Orb scan. Unlike passwords or usernames, biometric identifiers cannot be changed if exposed. Even where systems state that only derived or encrypted data is stored, the initial capture remains a critical point of sensitivity. The expansion strategy outlines a future where access to services, platform visibility, and pricing structures may depend on whether a person submits biometric data.

Judge Blocks Government Pressure on Apple, Meta Over ICE Tracking
reclaimthenet.org

A federal judge has ruled that the federal government likely violated the First Amendment when it strong-armed Apple and Facebook into deleting tools that let the public track ICE activity. The preliminary injunction, issued by Judge Jorge L. Alonso of the Northern District of Illinois, halts the government’s coercion of the platforms and lets the creators of the “ICE Sightings – Chicagoland” Facebook group and the Eyes Up app move forward with their case. We obtained a copy of the order for you here.

The plaintiffs are Kassandra Rosado, who ran the Facebook group from her Chicago small business community, and Kreisau Group, which built the Eyes Up app to archive video evidence of government activity. Both projects collected publicly available information about ICE operations. Both were deleted within hours of senior federal officials publicly demanding their removal.

Alonso’s opinion treats what happened here as the textbook case of indirect censorship the Supreme Court warned about in last year’s NRA v. Vullo decision. Officials with no direct regulatory authority over a speaker can still silence that speaker by leaning on the intermediaries who carry the speech. That’s what the judge found here. Former Attorney General Pam Bondi and former DHS Secretary Kristi Noem didn’t pass a law or issue a subpoena. They made demands, took credit for the deletions, and dropped reminders that prosecution was on the table.

Apple independently reviewed Eyes Up in August 2025, knew what the app did, and approved it. On October 2, Bondi told Fox News that “We reached out to Apple today demanding they remove the ICEBlock app from their App Store – and Apple did so.” Around the same date, Apple removed Eyes Up along with ICEBlock and Red Dot, now citing a rule against “mean-spirited content” that had somehow not applied to the same app a month earlier.
The Facebook deletion followed the same pattern. Rosado’s group had nearly 100,000 members by October 2025, most of them small business owners and neighbors sharing information as ICE ran an enforcement surge called “Operation Midway Blitz” through Chicago. Of thousands of posts and tens of thousands of comments, Facebook’s own moderators had flagged five items across the group’s entire existence. Facebook told Rosado at the time that these were member violations that “don’t hurt your group” and that groups don’t get disabled unless moderators themselves produce or approve prohibited content.

Alonso wasn’t subtle about what the officials’ public statements signaled to the platforms. He quoted Noem’s warning that “We will prosecute those who dox our agents to the fullest extent of the law” alongside her demand that Facebook be “PROACTIVE” in policing such content. He quoted Bondi’s line that “We’re not going to stop at just arresting the violent criminals we can see in the streets.” The judge called these “thinly veiled threats” and reached back to the Supreme Court’s 1963 Bantam Books decision for the point that “People do not lightly disregard public officers’ thinly veiled threats to institute criminal proceedings against them if they do not come around.”

The government didn’t need to write a censorship law because it had access to a phone. Apple and Facebook both run highly regulated businesses with enormous legal exposure. When the Attorney General says she wants content gone, the cost of saying no runs through antitrust review, DOJ investigations, and future cooperation with federal agencies. The platforms did what rational risk-averse companies do under that kind of pressure. They deleted the speech and invoked vague content rules to dress it up as their own decision.

The Apple “mean-spirited content” rationale is the clearest example of how those vague rules function in this environment. Guideline 1.1.1 covers whatever Apple decides it covers on a given day.
In August, Eyes Up met the guideline. In October, after Bondi called, it didn’t. Nothing about the app had changed. What changed was who was watching and what they could do to Apple if the app stayed up.

Courts have held for decades that filming and documenting police activity in public is protected First Amendment speech. Alonso cited rulings from the First, Third, Fifth, Seventh, Ninth, Tenth, and Eleventh Circuits all reaching the same conclusion. Warning neighbors about law enforcement operations, recording agents in public, and collecting that footage into a shared archive are the kind of activities the First Amendment was designed to shelter.

The government’s workaround here was to avoid the First Amendment by outsourcing the deletion to a private company that the government could hurt. Alonso rejected the workaround. He found that the plaintiffs’ injury was traceable to government coercion rather than platform discretion, pointing to three facts in sequence. Facebook and Apple had already reviewed and approved the content. They reversed course immediately after officials contacted them. The officials then publicly claimed credit for the reversals. That’s enough of a pattern, the judge ruled, to support a finding that the platforms were responding to federal pressure rather than exercising independent judgment.

FIRE, which represents Rosado and Kreisau Group, is litigating the case. The organization said it was “extremely encouraged by this ruling” and that “Even though it’s not the end of the case, it bodes well for the future of our legal fight to ensure that the First Amendment protects the right to discuss, record, and criticize what law enforcement does in public.”

The case is now styled Rosado v. Blanche after Todd Blanche replaced Bondi as Acting Attorney General and Markwayne Mullin replaced Noem at DHS. The preliminary injunction stops federal officials from continuing to pressure platforms to delete the plaintiffs’ content.
It does not, by itself, force Apple or Facebook to restore the app or the group. That decision sits with the companies, which now have legal cover to say yes where they previously had political cover to say no.

The chilling effect question hangs over what comes next. The Eyes Up app has been unavailable on the App Store for more than six months. The Chicagoland Facebook group’s nearly 100,000 members have been scattered across other platforms or dropped out of the conversation entirely. Whatever happens at trial, the government’s demand achieved its immediate purpose for half a year. The speech stopped. The information didn’t circulate. The people who relied on it had to find other ways to share what they saw, or stop looking.

The ruling is important as precedent because the jawboning tactic it addresses has become routine across administrations. Both parties have developed the habit of treating Silicon Valley moderation teams as an extension of federal policy, using public pressure and backchannel calls to accomplish what direct regulation couldn’t.

The asymmetry with Murthy v. Missouri deserves careful attention, because the procedural parallels cut against any easy story about the two cases. Rosado is a preliminary injunction at the district court. Alonso ruled that the plaintiffs are likely to succeed on the merits, which is a lower bar than actually winning. The case still has to survive motions to dismiss, discovery, summary judgment, and likely an appeal before anything here is settled as law. That procedural posture is exactly where Missouri v. Biden was in July 2023, when Judge Terry Doughty of the Western District of Louisiana issued his own preliminary injunction against the Biden administration.
That case was brought by the attorneys general of Missouri and Louisiana, along with several individual plaintiffs, including epidemiologists and journalists, challenging the White House, Surgeon General, CDC, FBI, and other federal agencies for pressuring social media platforms to suppress speech about COVID-19 origins, vaccine side effects, the Hunter Biden laptop story, and election integrity claims.

Doughty’s ruling was broader than Alonso’s and more sweeping in its language, calling the Biden-era pressure campaign “arguably the most massive attack against free speech in United States’ history.” It barred a long list of federal officials and agencies from communicating with platforms about content moderation. It looked, at the time, like a decisive win for the plaintiffs challenging government jawboning.

That preliminary injunction didn’t survive. The Fifth Circuit narrowed it significantly, keeping the core finding of coercion but trimming the list of officials covered and the scope of prohibited conduct. The Supreme Court then vacated what remained on standing grounds in Murthy, without ever reaching the merits of whether the Biden administration’s conduct violated the First Amendment. The entire structure of relief that the district court had put in place collapsed through the appellate process.

That’s the procedural shape Rosado is likely to face if the government appeals. A district court has issued a preliminary injunction on a record that looks strong to the judge who heard it. The Seventh Circuit will get its turn, and the Supreme Court could get its turn after that. Murthy’s standing doctrine, which required plaintiffs to prove their specific injuries were traceable to specific government pressure rather than independent platform judgment, applies to Rosado just as much as it applied to its predecessor. Whether Alonso’s three-part traceability analysis holds up at higher courts is genuinely uncertain.
The records in the two cases have real differences, but they point in the same direction. Missouri v. Biden had extensive discovery showing a sustained institutional pressure campaign against online speech Biden said was getting people killed. White House digital director Rob Flaherty wrote to Facebook that the platform was a “top driver of vaccine hesitancy,” demanded the removal of parody accounts and vaccine humor posts, and asked the company to throttle specific content from Tucker Carlson and the New York Post. Press Secretary Jen Psaki announced from the podium that the administration was “flagging problematic posts for Facebook” and in a separate briefing linked the administration’s antitrust agenda to platforms’ handling of misinformation. The FBI ran regular meetings with platform trust-and-safety teams and sent encrypted lists of accounts and posts for removal one to five times a month. That was the record Doughty found persuasive enough for a preliminary injunction, and that the Supreme Court ultimately sidestepped.

Rosado’s record is narrower in scope but tighter on the specific causal chain. Bondi and Noem publicly demanded removals and took credit in real time. Apple and Facebook reversed previous approvals within hours. The timing and the public statements make the inference of coercion harder to escape for two specific plaintiffs and two specific pieces of content. Whether that tightness is enough to clear Murthy’s standing bar at the Supreme Court level, or whether a higher court will find the same counterfactual problem that disposed of the Biden-era challenge, is the question that will actually determine what this case produces as precedent.

Both cases involve government pressure on platforms to delete disfavored speech. Both produced preliminary injunctions at the district court level. One has already gone through the full appellate process and ended with the Supreme Court ducking the merits. The other is at the beginning of that journey.
Treating the Rosado injunction as a vindication of First Amendment doctrine, or as a sign that the system is now working correctly against jawboning, gets ahead of what the procedural posture actually supports. It’s where Missouri v. Biden was three years ago. The plaintiffs challenging the Biden administration had their win taken away by the Supreme Court’s standing doctrine.

The honest read of the two cases is that the First Amendment doctrine governing government coercion of intermediaries is unsettled. NRA v. Vullo established the substantive principle that jawboning violates the First Amendment. Murthy made the standing requirements to challenge it demanding enough that sustained institutional pressure campaigns through private channels may be effectively unreviewable. Rosado is testing whether unusually public coercion clears that bar. If it does, the doctrine will protect speakers against incautious officials. If it doesn’t, the doctrine will protect almost no one in practice, regardless of which party’s officials are doing the coercing.

A free-speech position that takes itself seriously has to hold that both the Biden administration’s campaign and the Trump administration’s campaign are constitutional problems, and that the appropriate remedy in both cases is a merits ruling rather than a standing-based dismissal or a loud district court injunction that gets pared back on appeal. Whether Rosado produces that kind of ruling is still an open question. The answer will tell us more about the state of First Amendment doctrine than either administration’s initial conduct did.

The Retreat of the Open Internet
reclaimthenet.org

This post is for paid supporters of Reclaim The Net.

DOJ Blocks France’s X Probe, Citing First Amendment

The US Justice Department has refused to help French prosecutors investigating X, sending Paris a two-page letter that amounts to a direct shot across the bow at European speech regulation. American authorities will not serve summonses, will not facilitate interviews, and will not lend their cooperation to what they describe as a foreign effort to prosecute a US company for editorial decisions protected at home.

The letter, dated Friday and reviewed by the Wall Street Journal, came from the Justice Department’s Office of International Affairs. It rejected three separate French requests this year, and its language was unusually blunt. “This investigation seeks to use the criminal legal system in France to regulate a public square for the free expression of ideas and opinions in a manner contrary to the First Amendment of the United States Constitution,” the letter said. It went on to call the French requests “an effort to entangle the United States in a politically charged criminal proceeding aimed at wrongfully regulating through prosecution the business activities of a social media platform.”

That is the Justice Department telling a European ally its prosecution is a speech case dressed up as a criminal case, and that the United States will not help build it. The Justice Department and French authorities did not respond to requests for comment.

The French investigation began in January 2025, after a lawmaker and another official filed complaints arguing that X’s content-selection algorithm tilted toward Elon Musk’s views, and that the tilt amounted to foreign interference in France. The theory converts an editorial choice, which is what an algorithm is, into a potential crime. By July, prosecutors wanted access to the algorithm itself to examine it for bias.
In November, the scope widened after reports of allegedly antisemitic posts, including Holocaust denial, which is illegal in France. In January of this year, prosecutors added the creation and distribution of child sexual abuse material and nonconsensual deepfakes to the list of potential charges, which was misleading at best.

Investigators raided X’s Paris office in February. X called the search “an abusive act of law enforcement theater.” The platform is part of Musk’s artificial-intelligence firm xAI, which has now been purchased by his rocket company SpaceX.

French officials then summoned Musk, former X chief executive Linda Yaccarino, and other employees for what they described as voluntary interviews. Musk’s summons was set for Monday. French prosecutors can issue arrest warrants for suspects who skip interviews, which makes the word “voluntary” do less work than it appears to.

An xAI official welcomed the Justice Department’s intervention. “We are grateful to the Justice Department for rejecting this effort by a prosecutor in Paris to compel our CEO and several employees to sit for interviews,” the official said. “We hope the Parisian authorities will now come to their senses, recognize that there is no wrongdoing here, and terminate their baseless investigation.”

The American pushback is important because the investigation is the sharp edge of a larger European project. Regulators across the continent are rolling out content-moderation rules with real teeth, and the Trump administration and other US officials have accused Europeans of trying to silence dissent not only on their continent but globally. Vice President JD Vance spent much of the year criticizing European speech restrictions in public speeches. Secretary of State Marco Rubio has flagged foreign prosecutions of Americans for online speech as a diplomatic concern. The letter from the Office of International Affairs turns that rhetoric into policy.
What makes the French case useful to American officials is how exposed the speech-policing logic is. The investigation started with a politician unhappy about algorithmic favoritism. The serious charges, child sexual abuse material and deepfakes, were added later, on top of the original complaint. Tacking grave offenses onto a case that began as a complaint about algorithmic politics gives the prosecution cover and makes it harder to say out loud what the inquiry is actually about. The Justice Department said it anyway.

The refusal also draws a line for other European governments watching. A prosecutor who wants to inspect a US platform’s recommendation algorithm for political bias, with criminal penalties attached, now knows the American government will not help deliver the paperwork. Every platform makes choices about what to amplify and what to bury. Those choices are speech. A legal theory that criminalizes the wrong choices turns algorithmic design into something prosecutors can punish after the fact, and the United States has just declined to assist in that project.

The chilling effect is the reason any of this matters beyond X. A social media company that knows its algorithm can be subpoenaed, its executives summoned, and its Paris office raided will make different decisions about what to recommend and what to permit. The threat is enough. Actual convictions are not required for the behavior to change, which is why American authorities appear to have decided that refusing cooperation, publicly and in writing, is worth the diplomatic friction.

The Justice Department’s refusal is narrow in a technical sense. It does not stop the French investigation. It does not prevent an arrest warrant if Musk declines to appear on Monday. What it does is put the United States on record as treating the prosecution as a speech case, refusing to let American mutual-assistance treaties be used to deliver Europeans the tools to punish American editorial decisions.
For the transatlantic fight over who gets to set the rules of online speech, that is new territory.