Reclaim The Net Feed

Sam Altman’s World ID Expands Biometric Identity Checks
reclaimthenet.org


If you're tired of censorship and dystopian threats against civil liberties, subscribe to Reclaim The Net.

A biometric identity system built on iris scans is expanding into mainstream online services while its backers outline new ways to tie verified identity to revenue generation. The initiative, led by OpenAI CEO Sam Altman, introduced its standalone World ID app in public beta on April 17. The app separates identity management from the existing World App crypto wallet and is described as a tool to “verify with platforms and services, manage your authenticators, store credentials and control how your World ID is used.”

The rollout comes as the organization reports more than 18 million people across 160 countries have already been verified using its Orb devices, which scan a person’s iris to create a unique identifier. Deployment of Orb devices is increasing, with additional coverage planned across New York, Los Angeles, and San Francisco. An “Orb-on-demand” service is also being introduced, allowing individuals to schedule iris scans at locations of their choosing. This approach extends biometric collection into more varied settings. Greater accessibility may encourage uptake, though it also increases the number of environments where highly sensitive biological data is captured.

At a recent event, the organization described its broader ambition as embedding its verification tech across the internet, stating the goal is to get its “proof-of-human” system into “every website and app” on the open internet.

A wider push toward digital ID checks

The expansion aligns with a broader movement across the tech sector toward routine identity verification. Platforms are introducing checks framed around safety, fraud prevention, and authenticity, gradually normalizing the idea that access to services may require proof of identity rather than anonymous or pseudonymous participation.
More: The Age Verification Con

World’s model places biometric verification at the forefront of this trend. By tying a persistent identifier to a person’s physical characteristics, the system enables repeated checks across different services without requiring separate verification processes each time. This creates a form of continuity across platforms. While presented as a way to reduce bots and misuse, it also consolidates identity into a reusable credential that can follow individuals across contexts, limiting the ability to compartmentalize online activity.

Revenue model tied to identity verification

The company’s financial framing links this identity layer directly to monetization. According to its own materials, World ID could increase average revenue per user by improving trust signals and conversion rates. A central proposal involves offering a “verified human” tier to advertisers, with higher pricing based on confirmed human impressions. The company states that “advertisers whose conversions come from verified humans can better measure their marketing ROI, which justifies sustained or increased spend,” and that “an ad network that can prove its impressions reach real people will command the budgets.”

Connecting biometric verification to advertising performance introduces an incentive structure where platforms may favor or prioritize verified users. Over time, this can influence how content is distributed and how users are treated within digital ecosystems.

Integrations across major platforms

The system is being embedded into a range of widely used services:

Zoom is adding a feature called Deep Face, which compares a live video feed to a cryptographically signed image captured during Orb verification. Hosts can require participants to pass a “Deep Face Waiting Room,” and users can request checks during calls, adding a “Verified Human” badge.

DocuSign plans to integrate World ID into its document signing process, linking identity verification with legally binding agreements.

Match Group’s Tinder now offers global integration, allowing users to display a verified badge and receive temporary in-app perks.

Okta is developing a “Human Principal” system, with World ID used to confirm that automated actions are tied to a real person.

Vercel has integrated verification steps into developer workflows, allowing identity checks to be logged and audited. This integration appeared shortly before reports of a security breach affecting the platform, drawing attention to the sensitivity of systems that centralize identity data.

Browserbase and Exa are incorporating World ID to distinguish verified agents, offering reduced friction and additional access tied to confirmed human identities.

These integrations position identity verification as a condition for participation across services rather than a background process.

Ticketing and offline use cases

The system is also extending into physical-world scenarios. A “Concert Kit” tool enables platforms such as Ticketmaster and AXS to reserve tickets for individuals who have verified their identity. Linking biometric verification to ticket access connects identity status with participation in high-demand events, shaping how access is allocated.

The organization has outlined 13 industries where it believes its system should be deployed, including social media, eCommerce, banking, government services, and travel. Across these sectors, identity verification is presented as a response to bots, fraud, and misuse. At the same time, it introduces a persistent identifier that can operate across multiple domains. Examples include:

Advertising, addressing “fake impressions and clicks.”

Dating, targeting fraudulent profiles and scams.

Government services, framed as a tool against benefits fraud.
Each use case depends on linking activity to a verified individual, reducing separation between different areas of a person’s digital life.

A system built on biometric permanence

The process is based on the Orb scan. Unlike passwords or usernames, biometric identifiers cannot be changed if exposed. Even where systems state that only derived or encrypted data is stored, the initial capture remains a critical point of sensitivity. The expansion strategy outlines a future where access to services, platform visibility, and pricing structures may depend on whether a person submits biometric data.

The post Sam Altman’s World ID Expands Biometric Identity Checks appeared first on Reclaim The Net.

Judge Blocks Government Pressure on Apple, Meta Over ICE Tracking

A federal judge has ruled that the federal government likely violated the First Amendment when it strong-armed Apple and Facebook into deleting tools that let the public track ICE activity. The preliminary injunction, issued by Judge Jorge L. Alonso of the Northern District of Illinois, halts the government’s coercion of the platforms and lets the creators of the “ICE Sightings – Chicagoland” Facebook group and the Eyes Up app move forward with their case. We obtained a copy of the order for you here.

The plaintiffs are Kassandra Rosado, who ran the Facebook group from her Chicago small business community, and Kreisau Group, which built the Eyes Up app to archive video evidence of government activity. Both projects collected publicly available information about ICE operations. Both were deleted within hours of senior federal officials publicly demanding their removal.

Alonso’s opinion treats what happened here as the textbook case of indirect censorship the Supreme Court warned about in last year’s NRA v. Vullo decision. Officials with no direct regulatory authority over a speaker can still silence that speaker by leaning on the intermediaries who carry the speech. That’s what the judge found here. Former Attorney General Pam Bondi and former DHS Secretary Kristi Noem didn’t pass a law or issue a subpoena. They made demands, took credit for the deletions, and dropped reminders that prosecution was on the table.

Apple independently reviewed Eyes Up in August 2025, knew what the app did, and approved it. On October 2, Bondi told Fox News that “We reached out to Apple today demanding they remove the ICEBlock app from their App Store – and Apple did so.” Around the same date, Apple removed Eyes Up along with ICEBlock and Red Dot, now citing a rule against “mean-spirited content” that had somehow not applied to the same app a month earlier.
The Facebook deletion followed the same pattern. Rosado’s group had nearly 100,000 members by October 2025, most of them small business owners and neighbors sharing information as ICE ran an enforcement surge called “Operation Midway Blitz” through Chicago. Of thousands of posts and tens of thousands of comments, Facebook’s own moderators had flagged five items across the group’s entire existence. Facebook told Rosado at the time that these were member violations that “don’t hurt your group” and that groups don’t get disabled unless moderators themselves produce or approve prohibited content.

Alonso wasn’t subtle about what the officials’ public statements signaled to the platforms. He quoted Noem’s warning that “We will prosecute those who dox our agents to the fullest extent of the law” alongside her demand that Facebook be “PROACTIVE” in policing such content. He quoted Bondi’s line that “We’re not going to stop at just arresting the violent criminals we can see in the streets.” The judge called these “thinly veiled threats” and reached back to the Supreme Court’s 1963 Bantam Books decision for the point that “People do not lightly disregard public officers’ thinly veiled threats to institute criminal proceedings against them if they do not come around.”

The government didn’t need to write a censorship law because it had access to a phone. Apple and Facebook both run highly regulated businesses with enormous legal exposure. When the Attorney General says she wants content gone, the cost of saying no runs through antitrust review, DOJ investigations, and future cooperation with federal agencies. The platforms did what rational risk-averse companies do under that kind of pressure. They deleted the speech and invoked vague content rules to dress it up as their own decision.

The Apple “mean-spirited content” rationale is the clearest example of how those vague rules function in this environment. Guideline 1.1.1 covers whatever Apple decides it covers on a given day.
In August, Eyes Up met the guideline. In October, after Bondi called, it didn’t. Nothing about the app had changed. What changed was who was watching and what they could do to Apple if the app stayed up.

Courts have held for decades that filming and documenting police activity in public is protected First Amendment speech. Alonso cited rulings from the First, Third, Fifth, Seventh, Ninth, Tenth, and Eleventh Circuits all reaching the same conclusion. Warning neighbors about law enforcement operations, recording agents in public, and collecting that footage into a shared archive are the kind of activities the First Amendment was designed to shelter.

The government’s workaround here was to avoid the First Amendment by outsourcing the deletion to a private company that the government could hurt. Alonso rejected the workaround. He found that the plaintiffs’ injury was traceable to government coercion rather than platform discretion, pointing to three facts in sequence. Facebook and Apple had already reviewed and approved the content. They reversed course immediately after officials contacted them. The officials then publicly claimed credit for the reversals. That’s enough of a pattern, the judge ruled, to support a finding that the platforms were responding to federal pressure rather than exercising independent judgment.

FIRE, which represents Rosado and Kreisau Group, is litigating the case. The organization said it was “extremely encouraged by this ruling” and that “Even though it’s not the end of the case, it bodes well for the future of our legal fight to ensure that the First Amendment protects the right to discuss, record, and criticize what law enforcement does in public.”

The case is now styled Rosado v. Blanche after Todd Blanche replaced Bondi as Acting Attorney General and Markwayne Mullin replaced Noem at DHS. The preliminary injunction stops federal officials from continuing to pressure platforms to delete the plaintiffs’ content.
It does not, by itself, force Apple or Facebook to restore the app or the group. That decision sits with the companies, which now have legal cover to say yes where they previously had political cover to say no.

The chilling effect question hangs over what comes next. The Eyes Up app has been unavailable on the App Store for more than six months. The Chicagoland Facebook group’s nearly 100,000 members have been scattered across other platforms or dropped out of the conversation entirely. Whatever happens at trial, the government’s demand achieved its immediate purpose for half a year. The speech stopped. The information didn’t circulate. The people who relied on it had to find other ways to share what they saw, or stop looking.

The ruling is important as precedent because the jawboning tactic it addresses has become routine across administrations. Both parties have developed the habit of treating Silicon Valley moderation teams as an extension of federal policy, using public pressure and backchannel calls to accomplish what direct regulation couldn’t.

The asymmetry with Murthy v. Missouri deserves careful attention, because the procedural parallels cut against any easy story about the two cases. Rosado is a preliminary injunction at the district court. Alonso ruled that the plaintiffs are likely to succeed on the merits, which is a lower bar than actually winning. The case still has to survive motions to dismiss, discovery, summary judgment, and likely an appeal before anything here is settled as law.

That procedural posture is exactly where Missouri v. Biden was in July 2023, when Judge Terry Doughty of the Western District of Louisiana issued his own preliminary injunction against the Biden administration.
That case was brought by the attorneys general of Missouri and Louisiana, along with several individual plaintiffs, including epidemiologists and journalists, challenging the White House, Surgeon General, CDC, FBI, and other federal agencies for pressuring social media platforms to suppress speech about COVID-19 origins, vaccine side effects, the Hunter Biden laptop story, and election integrity claims.

Doughty’s ruling was broader in scope than Alonso’s and more sweeping in its language, calling the Biden-era pressure campaign “arguably the most massive attack against free speech in United States’ history.” It barred a long list of federal officials and agencies from communicating with platforms about content moderation. It looked, at the time, like a decisive win for the plaintiffs challenging government jawboning.

That preliminary injunction didn’t survive. The Fifth Circuit narrowed it significantly, keeping the core finding of coercion but trimming the list of officials covered and the scope of prohibited conduct. The Supreme Court then vacated what remained on standing grounds in Murthy, without ever reaching the merits of whether the Biden administration’s conduct violated the First Amendment. The entire structure of relief that the district court had put in place collapsed through the appellate process.

That’s the procedural shape Rosado is likely to face if the government appeals. A district court has issued a preliminary injunction on a record that looks strong to the judge who heard it. The Seventh Circuit will get its turn, and the Supreme Court could get its turn after that. Murthy’s standing doctrine, which required plaintiffs to prove their specific injuries were traceable to specific government pressure rather than independent platform judgment, applies to Rosado just as much as it applied to its predecessor. Whether Alonso’s three-part traceability analysis holds up at higher courts is genuinely uncertain.
The records in the two cases have real differences, but they point in the same direction. Missouri v. Biden had extensive discovery showing a sustained institutional pressure campaign against online speech that Biden said was getting people killed. White House digital director Rob Flaherty wrote to Facebook that the platform was a “top driver of vaccine hesitancy,” demanded the removal of parody accounts and vaccine humor posts, and asked the company to throttle specific content from Tucker Carlson and the New York Post. Press Secretary Jen Psaki announced from the podium that the administration was “flagging problematic posts for Facebook” and in a separate briefing linked the administration’s antitrust agenda to platforms’ handling of misinformation. The FBI ran regular meetings with platform trust-and-safety teams and sent encrypted lists of accounts and posts for removal one to five times a month. That was the record Doughty found persuasive enough for a preliminary injunction, and that the Supreme Court ultimately sidestepped.

Rosado’s record is narrower in scope but tighter on the specific causal chain. Bondi and Noem publicly demanded removals and took credit in real time. Apple and Facebook reversed previous approvals within hours. The timing and the public statements make the inference of coercion harder to escape for two specific plaintiffs and two specific pieces of content. Whether that tightness is enough to clear Murthy’s standing bar at the Supreme Court level, or whether a higher court will find the same counterfactual problem that disposed of the Biden-era challenge, is the question that will actually determine what this case produces as precedent.

Both cases involve government pressure on platforms to delete disfavored speech. Both produced preliminary injunctions at the district court level. One has already gone through the full appellate process and ended with the Supreme Court ducking the merits. The other is at the beginning of that journey.
Treating the Rosado injunction as a vindication of First Amendment doctrine, or as a sign that the system is now working correctly against jawboning, gets ahead of what the procedural posture actually supports. Rosado is where Missouri v. Biden was three years ago, and the plaintiffs challenging the Biden administration had their win taken away by the Supreme Court’s standing doctrine.

The honest read of the two cases is that the First Amendment doctrine governing government coercion of intermediaries is unsettled. NRA v. Vullo established the substantive principle that jawboning violates the First Amendment. Murthy made the standing requirements to challenge it demanding enough that sustained institutional pressure campaigns through private channels may be effectively unreviewable. Rosado is testing whether unusually public coercion clears that bar. If it does, the doctrine will protect speakers against incautious officials. If it doesn’t, the doctrine will protect almost no one in practice, regardless of which party’s officials are doing the coercing.

A free-speech position that takes itself seriously has to hold that both the Biden administration’s campaign and the Trump administration’s campaign are constitutional problems, and that the appropriate remedy in both cases is a merits ruling rather than a standing-based dismissal or a loud district court injunction that gets pared back on appeal. Whether Rosado produces that kind of ruling is still an open question. The answer will tell us more about the state of First Amendment doctrine than either administration’s initial conduct did.

The Retreat of the Open Internet

This post is available to paid supporters of Reclaim The Net only.

DOJ Blocks France’s X Probe, Citing First Amendment

The US Justice Department has refused to help French prosecutors investigating X, sending Paris a two-page letter that amounts to a direct shot across the bow at European speech regulation. American authorities will not serve summonses, will not facilitate interviews, and will not lend their cooperation to what they describe as a foreign effort to prosecute a US company for editorial decisions protected at home.

The letter, dated Friday and reviewed by the Wall Street Journal, came from the Justice Department’s Office of International Affairs. It rejected three separate French requests this year, and its language was unusually blunt. “This investigation seeks to use the criminal legal system in France to regulate a public square for the free expression of ideas and opinions in a manner contrary to the First Amendment of the United States Constitution,” the letter said. It went on to call the French requests “an effort to entangle the United States in a politically charged criminal proceeding aimed at wrongfully regulating through prosecution the business activities of a social media platform.”

That is the Justice Department telling a European ally its prosecution is a speech case dressed up as a criminal case, and that the United States will not help build it. The Justice Department and French authorities did not respond to requests for comment.

The French investigation began in January 2025, after a lawmaker and another official filed complaints arguing that X’s content-selection algorithm tilted toward Elon Musk’s views, and that the tilt amounted to foreign interference in France. The theory converts an editorial choice, which is what an algorithm is, into a potential crime. By July, prosecutors wanted access to the algorithm itself to examine it for bias.
In November, the scope widened after reports of allegedly antisemitic posts, including Holocaust denial, which is illegal in France. In January of this year, prosecutors added the creation and distribution of child sexual abuse material and nonconsensual deepfakes to the list of potential charges, which was misleading at best. Investigators raided X’s Paris office in February. X called the search “an abusive act of law enforcement theater.” The platform is part of Musk’s artificial-intelligence firm xAI, which has now been purchased by his rocket company SpaceX.

French officials then summoned Musk, former X chief executive Linda Yaccarino, and other employees for what they described as voluntary interviews. Musk’s summons was set for Monday. French prosecutors can issue arrest warrants for suspects who skip interviews, which makes the word “voluntary” do less work than it appears to.

An xAI official welcomed the Justice Department’s intervention. “We are grateful to the Justice Department for rejecting this effort by a prosecutor in Paris to compel our CEO and several employees to sit for interviews,” the official said. “We hope the Parisian authorities will now come to their senses, recognize that there is no wrongdoing here, and terminate their baseless investigation.”

The American pushback is important because the investigation is the sharp edge of a larger European project. Regulators across the continent are rolling out content-moderation rules with real teeth, and the Trump administration and other US officials have accused Europeans of trying to silence dissent not only on their continent but globally. Vice President JD Vance spent much of the year criticizing European speech restrictions in public speeches. Secretary of State Marco Rubio has flagged foreign prosecutions of Americans for online speech as a diplomatic concern. The letter from the Office of International Affairs turns that rhetoric into policy.
What makes the French case useful to American officials is how exposed the speech-policing logic is. The investigation started with a politician unhappy about algorithmic favoritism. The serious charges, child sexual abuse material and deepfakes, were added later, on top of the original complaint. Tacking grave offenses onto a case that began as a complaint about algorithmic politics gives the prosecution cover and makes it harder to say out loud what the inquiry is actually about. The Justice Department said it anyway.

The refusal also draws a line for other European governments watching. A prosecutor who wants to inspect a US platform’s recommendation algorithm for political bias, with criminal penalties attached, now knows the American government will not help deliver the paperwork. Every platform makes choices about what to amplify and what to bury. Those choices are speech. A legal theory that criminalizes the wrong choices turns algorithmic design into something prosecutors can punish after the fact, and the United States has just declined to assist in that project.

The chilling effect is the reason any of this matters beyond X. A social media company that knows its algorithm can be subpoenaed, its executives summoned, and its Paris office raided will make different decisions about what to recommend and what to permit. The threat is enough. Actual convictions are not required for the behavior to change, which is why American authorities appear to have decided that refusing cooperation, publicly and in writing, is worth the diplomatic friction.

The Justice Department’s refusal is narrow in a technical sense. It does not stop the French investigation. It does not prevent an arrest warrant if Musk declines to appear on Monday. What it does is put the United States on record as treating the prosecution as a speech case, refusing to let American mutual-assistance treaties be used to deliver Europeans the tools to punish American editorial decisions.
For the transatlantic fight over who gets to set the rules of online speech, that is new territory.

Canada’s Carney Revives Online Censorship Bill

Canada’s Liberal government is preparing to revive legislation that would hand the state new powers over what Canadians can say online, with Prime Minister Mark Carney’s team signaling that a rebooted “online harms” law is coming.

A report submitted to the Senate social affairs committee confirms the direction. The Department of Industry told senators that Ottawa is working toward a “future online safety regime” aimed at reducing online “harms,” a category the government itself gets to define. To shape the proposal, officials have brought back the Expert Advisory Group on Online Safety, the same body that helped design the previous censorship attempt.

“To advise on this proposal, the government has recently reconvened the Expert Advisory Group on Online Safety, whose members previously contributed to the development of online harms legislation, to engage on new and emerging issues related to online harms,” the department said. “Any future legislative proposal would be subject to parliamentary scrutiny, and details will be made public at the appropriate time.”

One of the members back at the table is Bernie Farber of the Canadian Anti-Hate Network. The advisory group helps shape what the government will treat as hateful, harmful, or dangerous. That definition, once written into law, determines which posts get deleted, which accounts get silenced, and which Canadians face fines or house arrest for saying the wrong thing online.

Canadian Culture Minister Marc Miller telegraphed the timing this week, suggesting a new law targeting “online harms” is needed and likely coming soon. With the Liberals now holding a majority after three byelection wins and the defection of five MPs from the Conservatives and NDP, the procedural obstacles that killed previous attempts have largely disappeared. A social media ban for children is also on the table.
The last attempt, Bill C-63, known as the Online Harms Act, was introduced under the familiar justification of protecting children from online exploitation. The bill died when former Prime Minister Justin Trudeau called the 2025 federal election. Its actual reach went well beyond child safety. It targeted lawful internet content that authorities deemed “likely to foment detestation or vilification of an individual or group,” wording broad enough to sweep up political argument, satire, religious commentary, and journalism, depending on who was reading it. Breaking the rule carried fines of up to $70,000 or house arrest.

Before C-63 there was Bill C-36, a 2021 effort to amend the Criminal Code along similar lines. Neither bill made it through. Both kept returning in slightly different forms.

The Justice Centre for Constitutional Freedoms, Canada’s leading constitutional freedom organization, has launched a national campaign urging the Carney government to abandon the project entirely. The JCCF warned that the Online Harms Act would “dramatically expand government censorship powers, punish lawful expression online, and authorize preemptive restrictions on individual liberty.” “In doing so, it would represent a fundamental departure from Canada’s long-standing commitment to freedom of expression and due process,” the organization said.

Preemptive restrictions, the legal mechanism the previous bill contained, mean punishing or silencing someone before they have said anything unlawful. Canadian courts have historically treated prior restraint as the most serious form of speech suppression. The revived framework appears to contemplate it as a feature.

The chilling effect is already setting in. Writers, commentators, and small publishers in Canada began adjusting what they posted during the C-63 debate, well before any law took effect. The threat alone was enough to quiet a portion of online political speech.
A reintroduced bill, backed by a majority government and an advisory panel stacked with people who see the internet as a venue that needs controlling, makes that quieting louder.

The Liberal government has said repeatedly that some version of Bill C-63 is coming back. What it has not said, in any substantive form, is who decides what counts as hate, what counts as harm, and what counts as the kind of speech a democracy is supposed to tolerate even when it finds it ugly. Those definitions will sit with the same government promising the law, and the same advisory group promising to help write it.