Zuckerberg’s “Fix” for Child Safety Could End Anonymous Internet Access for Everyone

Zuckerberg spent five hours defending Instagram’s design choices and walked out having handed legislators and regulators their preferred blueprint for a national digital ID system.

Mark Zuckerberg spent more than five hours on the stand in Los Angeles Superior Court on Wednesday, testifying before a jury for the first time about claims that Meta deliberately designed Instagram to addict children.

The headline from most coverage was the spectacle: an annotated paper trail of internal emails, a 35-foot collage of the plaintiff’s Instagram posts unspooled across the courtroom, a CEO growing visibly agitated under cross-examination.

The more important story is what Wednesday’s proceedings are being used to build.

The trial is framed as a child safety case. What it is actually doing, especially through Zuckerberg’s own testimony, is laying the political and legal groundwork for mandatory identity verification across the internet.

And Zuckerberg, rather than pushing back on that outcome, offered the court his preferred implementation plan.

The “Addiction” Framing and What It Enables

The lawsuit was filed by a plaintiff identified as KGM, now 20 years old, who claims she began using Instagram at age 9 and that the platform’s design addicted her, worsening her mental health and contributing to anxiety, body dysmorphia, and suicidal thoughts.

TikTok and Snapchat settled before trial. Meta and Google’s YouTube remain defendants. More than 1,600 related cases are pending nationally. This is big business. A verdict here could set the template for all of them.

The case rests on a contested scientific premise: that social media is clinically addictive and that this addiction causes measurable harm. That premise drives the legal strategy, the media coverage, and the resulting policy agenda. It deserves scrutiny that most coverage is not giving it.

The science is genuinely disputed, and we went into it in detail in a recent feature if you’re serious about understanding how these claims are created and weaponized.

None of this means the harms alleged are fabricated. It means the word “addiction” is doing heavy rhetorical and legal work, and the policy consequences flowing from that word go far beyond anything a jury in Los Angeles will decide.

“Addiction” is how you get a public health emergency. A public health emergency is how you get emergency powers and make it easier for people to overlook constitutional protections. Emergency powers applied to the internet mean mandatory access controls. And mandatory access controls on the internet mean the end of anonymous and pseudonymous speech.

More: The Gospel of the Anxious Generation

When social media is classified as a drug, access to it becomes a medical and regulatory matter. Who uses it, how, and under what conditions becomes a question for authorities rather than individuals. Regulating an addictive product and regulating speech look different on paper. The mechanisms required to enforce either look identical in practice: identity verification, access controls, and a surveillance architecture that follows users across every platform and device.

The Section 230 Workaround

The trial’s structure is worth examining separately. Section 230 of the 1996 Communications Decency Act has long shielded platforms from liability for what users post. The plaintiff’s lawyers here found a route around it: they argue that the platform itself is a defective product. The claim is not about user content but about design choices. Infinite scroll, auto-play, algorithmically amplified notifications, beauty filters linked to body dysmorphia. The lawsuit treats them like a car without brakes.

A verdict for KGM would hand plaintiffs in 1,600 other cases a tested legal theory for stripping Section 230 protection from platform design decisions. That is a significant restructuring of internet liability law, driven by trial lawyers, using a mental health crisis whose causes are still actively debated in peer-reviewed journals.

Zuckerberg was pressed with internal documents, including a 2015 estimate that 4 million users under 13 were on Instagram, roughly 30 percent of all American children aged 10 to 12. An old email from former public policy head Nick Clegg was read into the record: “The fact that we say we don’t allow under-13s on our platform, yet have no way of enforcing it, is just indefensible.” Zuckerberg acknowledged the slow progress: “I always wish that we could have gotten there sooner.”

When pressed on the company’s age verification policies, he also told the jury: “I don’t see why this is so complicated.” His proposed answer to that question is the core problem.

Zuckerberg’s Blueprint: Let Apple and Google Check Everyone’s ID

Multiple times during his testimony, Zuckerberg argued that age verification should be handled not by individual apps but at the operating system level, by Apple and Google. He told jurors that operating system providers “were better positioned to implement age verification tools, since they control the software that runs most smartphones.”

“Doing it at the level of the phone is just a lot cleaner than having every single app out there have to do this separately,” he said. He added that it “would be pretty easy for them” to implement.

Note the shift. Zuckerberg is not proposing that Instagram verify the ages of Instagram users. He is proposing that Apple and Google verify the identity of every smartphone user, for every app, at the OS level. Once that infrastructure exists, it does not stay limited to social media. It applies to every app on the phone. Every website accessed through that phone’s browser. Every communication sent through any app on the device.

This is more than age verification. It is a national digital ID layer baked into the two operating systems that run the overwhelming majority of the world’s smartphones.

The proposal also solves Zuckerberg’s immediate legal problem. If Apple and Google own age enforcement, platforms like Meta are no longer responsible for it. The liability shifts. The company on trial in Los Angeles deflects the core allegation by pointing at Cupertino and Mountain View.

Who decides which apps require ID verification once this infrastructure exists? Apple and Google do. They would be deputized as identity gatekeepers for the internet. Two private companies, already under serious antitrust scrutiny for their control of app distribution, handed new authority over who accesses what online and under what identity.

The Regulatory Architecture Already Under Construction

Zuckerberg’s OS-level verification proposal fits neatly into a legislative agenda that was moving before he took the stand Wednesday.

California’s SB 976, the Protecting Our Kids from Social Media Addiction Act, mandates age verification systems for social media platforms in the state. The California Attorney General must finalize implementation rules by January 2027.

The Ninth Circuit has declined to rule on whether those requirements violate the First Amendment, saying it cannot assess the constitutional question until the regulations are finalized. Age verification for lawful online speech in California is advancing without a constitutional answer.

The Kids Online Safety Act (KOSA), pending at the federal level, would direct agencies to develop age verification at the device or operating system level, the same framework Zuckerberg promoted from the stand.

KOSA also carries broad definitions of “harmful” content that leave moderation decisions subject to government influence, with no independent review. Age verification and content restriction in a single bill, with the government writing the definition of harm.

New York’s SAFE For Kids Act restricts algorithmic feeds for users who don’t complete age verification. Acceptable alternatives to submitting a government ID include facial analysis that estimates age. Biometric data, collected to scroll a social media feed.

The infrastructure these laws require creates data that can be stolen, subpoenaed, and cross-referenced. A Discord breach last year exposed government-issued IDs submitted through the company’s age verification system, around 70,000 of them, with attackers claiming the number was higher. Every ID check database is a future breach waiting to happen.

Anonymous and pseudonymous speech online has real value. Whistleblowers. Abuse survivors. Political dissidents in hostile environments. People exploring medical questions or identities they are not yet ready to attach their legal names to. Journalists protecting sources. Anyone whose safety depends on a separation between their online presence and their government identity.

Mandatory identity verification at the OS level ends all of that for everyone. The stated goal is protecting 9-year-olds from Instagram. The mechanism ends anonymous internet access for every adult who owns a phone.

Zuckerberg, under oath and under pressure, handed that mechanism a high-profile public endorsement. His lawyers will use it to deflect liability. Legislators will cite it in committee hearings. The Los Angeles trial will appear in bill summaries as evidence of urgent need.

The word “addiction” started this chain. Public health emergency, emergency powers, age verification, OS-level ID checks. Each step follows from the last. Each step is presented as protecting children.

The trial continues. KGM is expected to testify later in the proceedings.


Dan Frieth
