For most of the past decade, suing a social media company for harms caused by its platform was a legal dead end. Section 230 of the Communications Decency Act provided near-total immunity to platforms for content posted by third parties, and courts interpreted that immunity broadly. Tech companies won case after case.

That era may be ending.

In the spring of 2026, two juries delivered verdicts that, taken together, suggest American courts are beginning to find pathways around platform immunity and to hold social media companies responsible for the design choices that made their platforms harmful.


Verdict One: $375 Million in New Mexico

On March 24, 2026, a New Mexico jury returned a verdict in a lawsuit brought by the state’s Attorney General against Meta Platforms.

The case centered on Facebook, Instagram, and WhatsApp’s role in enabling child sexual exploitation — specifically the state’s allegations that Meta had knowingly created platforms that facilitated predatory contact with minors, misled users about safety, and failed to implement reasonable protections.

The jury found Meta liable on all counts, including:

  • Unfair and deceptive trade practices — Meta’s representations about child safety on its platforms were found to be false
  • Unconscionable trade practices — the company’s conduct was found to shock the conscience
  • Enabling child sexual exploitation — the platforms’ design facilitated predatory contact with minors

The damages calculation was striking: jurors counted thousands of individual violations, each carrying its own penalty, for a total of $375 million.

New Mexico became the first state in the nation to prevail at trial against a major tech company for harms caused to young people. Attorney General Raúl Torrez called it “a landmark verdict that proves platforms can no longer hide behind legal shields while children suffer.”

Meta announced it would appeal.


Verdict Two: Meta and YouTube Liable for Addiction

Less than three weeks after the New Mexico verdict, a jury in Los Angeles returned a verdict in a separate case: a civil lawsuit brought on behalf of a teenage plaintiff identified only by her initials, K.G.M.

This case tested a different theory of liability: not that the platforms enabled third-party predators, but that the platforms themselves — their recommendation algorithms, their notification systems, their engagement-maximizing design features — were negligently designed in ways that caused harm to the plaintiff.

The jury found:

  • Both Meta (Instagram) and Google (YouTube) were negligent in the design or operation of their platforms
  • Their negligence was a substantial factor in causing harm to K.G.M.
  • Both companies failed to adequately warn users of the dangers of their platforms

Damages were awarded at $6 million, with 70% of the responsibility assigned to Meta and 30% to YouTube. The asymmetry reflects Instagram’s more central role in the plaintiff’s online life.

Both companies announced plans to appeal.


Why Section 230 Didn’t Protect Them This Time

Section 230 was the tech industry’s most reliable shield for nearly three decades. It provides that platforms cannot be treated as the publisher or speaker of third-party content, and courts used it to dismiss most suits against platforms almost automatically.

Both verdicts found paths around it — and those paths matter enormously.

The design defect theory, used in the California case, argues that the lawsuit isn’t about content posted by users. It’s about the platform’s own engineering decisions: the algorithmic recommendation system, the autoplay feature, the notification scheduling, the follower metrics. Those choices are the platform’s own, not a third party’s, and Section 230 doesn’t protect a company from liability for its own product design.

The state consumer protection theory — used in the New Mexico case — argues that when a company makes representations about its platform’s safety and those representations are false, that’s a consumer protection violation independent of any content question. The deception is Meta’s own speech, not a third party’s.

Courts have been developing these theories for years. These verdicts are the first time they’ve been tested before juries at trial — and the juries found them convincing.


What the Verdicts Mean for Pending Litigation

Thousands of cases brought by families across the country are currently pending against social media companies. Many are consolidated in federal multidistrict litigation, and the plaintiffs’ theories are substantially similar to those tested in New Mexico and California.

Prior to these verdicts, those cases faced enormous pressure to settle cheaply because of the perception that platforms were effectively litigation-proof. The verdicts change that dynamic.

For the first time, there is jury-validated precedent that:

  • A state can take a social media company to trial and win on child harm claims
  • Negligent platform design is a viable theory that survives Section 230
  • Juries will award substantial damages

Settlement dynamics in the broader MDL are likely to shift accordingly. Platforms that were holding firm may become significantly more willing to settle, and at higher amounts.


The Political Dimension

The timing of these verdicts intersects with an active congressional debate about whether Section 230 should be reformed, curtailed, or eliminated.

Roll Call reported on April 20 that the social media verdicts were energizing legislators who had been working on online safety bills but struggling to build momentum. The argument was simple: if juries were already finding pathways through Section 230 on design defect theories, Congress could clarify and strengthen those pathways through legislation rather than leaving it to case-by-case judicial evolution.

Bills under consideration include reforms that would explicitly remove Section 230 immunity for algorithmic amplification decisions — meaning that while a platform might still be protected for hosting user content, it would not be protected for the decision to recommend that content to vulnerable users.

Whether any of this legislation passes depends on the political math. Tech companies employ substantial lobbying operations. But the $375 million verdict from New Mexico gives reformers a powerful concrete example.


What This Means for Parents and Young Users

The verdicts don’t directly change how the platforms work today. Meta’s and YouTube’s algorithms continue to function as they have. The appeals process will take years.

But the verdicts send a signal: the assumption that platforms are legally unreachable has been tested and has failed. That changes the incentive structure for future design decisions.

In the meantime, the platforms as they exist today are the same ones a jury found to be negligently designed. If you have teenagers who use Instagram or YouTube, the safeguards need to come from you, not from the platforms:

  • Disable autoplay on YouTube — autoplay is one of the main mechanisms that pulls viewers into long sessions of increasingly extreme recommendations
  • Use Instagram’s supervision tools — they’re limited, but they include time limits and restrict recommendation feeds
  • Have explicit conversations about algorithmic design — teenagers who understand that recommendation systems are designed to maximize engagement, not wellbeing, are better equipped to recognize the manipulation
  • Set real limits on session time — the platforms are engineered to defeat the user’s intention to stop. External time controls (Screen Time on iOS, Digital Wellbeing on Android) work around that design

The legal accountability these verdicts represent is real. But it will take years to translate into meaningful platform reform. In the meantime, the algorithm doesn’t pause for the appeals process.