
A major courtroom battle in Los Angeles is testing how far the United States will go in blaming technology companies for the mental health struggles of young users, and whether the pressure for censorship or digital ID mandates will grow as a result.
The case involves a 19-year-old woman from California, identified as K.G.M., who says she became hooked on social media platforms as a teenager and that the companies behind them deliberately built products to keep her and millions of others scrolling endlessly.
Her lawsuit targets Meta, ByteDance, and Google, accusing them of negligence and of knowingly designing addictive systems through Facebook, Instagram, and YouTube.
Snapchat and TikTok were also named early on but have already settled without disclosing terms. What makes this trial significant is not only its focus on social media design but also its potential to erode the long-standing legal protections that have kept tech firms largely immune from responsibility for what users post online.
Thousands of similar claims are waiting behind this one. Most were folded into a broader judicial proceeding, from which three representative lawsuits involving K.G.M., R.K.C., and Moore were chosen to move forward.
The outcomes could set the stage for how future cases are handled or whether mass settlements emerge. Jury selection for K.G.M.’s trial begins this week in Los Angeles County Superior Court.
Key to the plaintiff’s case is a strategy that attempts to sidestep Section 230 of the Communications Decency Act, the statute that shields online services from liability for third-party speech.
K.G.M.’s attorneys insist their case is not about user posts but about the deliberate engineering of algorithms, notification systems, and design loops that keep people engaged.
Judge Carolyn Kuhl has already dismissed parts of the complaint on Section 230 grounds, particularly claims linked to TikTok’s so-called “challenge” videos.
Yet she allowed the broader negligence claims to continue, writing that “there is evidence in the record that K.G.M. was harmed by design features” and that “the cause of K.G.M.’s harms is a disputed factual question that must be resolved by the jury.”
If a jury finds that design alone can constitute a harmful product, it would mark a turning point in how internet services are regulated and open the door to litigation targeting almost any platform that uses engagement-based algorithms.
Attorneys for YouTube and the other defendants argue that the lawsuit cannot logically separate design from the content it delivers.
In one clip shown to the court, K.G.M. said, “I have gotten a lot of content promoting…like body checking, posts [of] what I eat in a day, just a cucumber, making people feel bad if they don’t eat like that.”
That admission, defense lawyers contend, proves the problem lies with user-created material rather than interface mechanics. What disturbs or influences someone online is still protected expression, not a design flaw.
Once design becomes the target of liability, the distinction between engineering and editorial control collapses. If a company can be sued for making a product that amplifies legal speech, its safest response would be to restrict speech itself.
Even setting the free speech question aside, the evidentiary burden is formidable. The jury must decide how much of K.G.M.’s distress stems from the apps themselves, how much from the content she consumed, and how much from unrelated life factors.
These lawsuits share a common idea: that platforms should be held responsible for the time and attention people give them. Yet if courts accept that premise, nearly every digital service, including news sites, streaming platforms, and shopping apps, could be accused of “addiction by design,” while bills addressing online “harm” through censorship and digital ID mandates continue to pile up.

