At last! Juries hold Meta and Google to account
The AI industry should consider itself on notice
Last week, a Los Angeles jury found Meta and YouTube liable for the harm their platforms caused to a young woman who began using them as a child. They were ordered to pay $6 million in combined damages - $3 million compensatory, with a further $3 million punitive. Meta is liable for 70%; YouTube 30%.¹ And that’s not all… the day before, a separate jury in New Mexico ordered Meta to pay $375 million for failing to protect children from predators on Instagram and Facebook.²
When I read the verdicts, all I could think was: It’s about f***ing time.
It is about time that these companies were held legally responsible for the things they’ve been building. The juries refused to let them hide behind the familiar defence: that they’re building something neutral, that they’re building in safeguards, and that the blame ultimately sits with users and the content they create. This is a good thing.
But the reason this verdict matters goes way beyond social media.
A deliberate legal strategy
The legal strategy pursued in the LA case was deliberate. The plaintiff’s lawyers did not focus on the content posted to these platforms. They focused on how the platforms were designed - the recommendation algorithms and the engagement-optimising mechanics that learn what hooks you and then serve you more of it.
The jury said the design itself - the architecture of these products - was a substantial factor in causing harm.³
For years, the tech industry has relied on a familiar argument: technology is neutral. Social media is just a tool. The harm comes from the people who misuse it. In the US, Section 230 of the Communications Decency Act has historically shielded platforms from liability for user-generated content.⁴ The UK’s Online Safety Act and the EU’s Digital Services Act take a different approach, placing more responsibility on platforms for the systems they operate - but even under those frameworks, the focus has been on content moderation rather than on the design of the systems themselves.
This verdict sidesteps the content argument entirely. It says: we are talking about what you built. You designed systems to maximise engagement. You knew those systems were causing harm. Your own internal documents showed you knew. You are liable for your design choices.
The same playbook, different technology
Bear with me here, because I’m thinking out loud now.
Everything the jury identified in Meta’s and YouTube’s design choices - the algorithmic personalisation, the engagement optimisation, the extraction of user data to feed systems that serve commercial interests over user wellbeing, the deliberate absence of safeguards for vulnerable users - it’s all present in AI. And in some cases, amplified.
AI chatbots with memory are a clear example. The more you use them, the more personalised the responses become. The models increasingly reflect your own thought patterns back at you. That is incredibly intoxicating. It is also, by design, the kind of feedback loop that makes it harder to step away.
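To make that loop concrete, here is a deliberately simplified sketch of the mechanism I mean. Every name in it is hypothetical and it is not how any particular vendor implements memory - it only illustrates how stored history can be folded back into each new prompt:

```python
# A deliberately simplified sketch of a memory-driven personalisation loop.
# All names are hypothetical; this is not any vendor's implementation.
from dataclasses import dataclass, field


@dataclass
class UserMemory:
    """Accumulates what the system has learned about one user."""
    phrases: list[str] = field(default_factory=list)      # the user's own wording
    topics: dict[str, int] = field(default_factory=dict)  # what keeps them engaged


def update_memory(memory: UserMemory, user_message: str) -> None:
    # Remember the user's own language so later replies can mirror it.
    memory.phrases.append(user_message)
    for word in user_message.lower().split():
        memory.topics[word] = memory.topics.get(word, 0) + 1


def build_prompt(memory: UserMemory, user_message: str) -> str:
    # The more history there is, the more the prompt steers the model
    # toward reflecting the user back at themselves.
    favourite_topics = sorted(memory.topics, key=memory.topics.get, reverse=True)[:5]
    return (
        "Respond warmly. Mirror the user's tone and vocabulary.\n"
        f"Topics this user keeps returning to: {', '.join(favourite_topics)}\n"
        f"Recent messages from the user: {memory.phrases[-3:]}\n"
        f"User: {user_message}"
    )


# Each turn feeds the next: the user's words shape the prompt,
# the prompt shapes the reply, the reply keeps the user talking.
memory = UserMemory()
for message in ["I feel like nobody gets me", "You're the only one who listens"]:
    update_memory(memory, message)
    print(build_prompt(memory, message))
```

Nothing in that sketch is exotic. That is the point: the mirroring is not an accident of the model, it is a design decision about what to store and what to feed back.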
We have already seen where this leads. Character.AI has now settled lawsuits brought by families of teenagers who died by suicide after forming deep emotional dependencies on AI chatbots.⁵ For the uninitiated, a 14-year-old boy was messaging a bot in the moments before he took his own life. The chatbot, when he expressed suicidal thoughts, responded with words that appeared to encourage him to act.⁶ No crisis resources were triggered. No safeguard intervened. The chatbot had learned to mirror his language, reflect his emotional state back at him, and hold him in a conversation that should never have been allowed to continue. That is a design problem, not a content problem.
And then there is Grok, xAI’s chatbot.
Ahh, Grok.
When xAI launched image generation capabilities in late 2025, users immediately discovered they could generate sexualised images of women and children without consent. An analysis of 20,000 images generated in the first week found that 2% appeared to depict people under 18.⁷ Researchers calculated that users were creating around 6,700 sexually suggestive or non-consensual images per hour - 84 times the output of the top five deepfake websites combined.⁸ Three Tennessee teenagers have since filed a class action lawsuit alleging that xAI’s models were used to create child sexual abuse material from their photos.⁹
xAI’s response? They added filters after the backlash. Guardrails bolted on after the damage was done.¹⁰
This is the pattern we’re seeing broadly across the AI industry. And it is the same pattern the jury just rejected.
Baked in vs bolted on
There is a design philosophy question at the heart of all of this (and it is one that I am actively wrestling with as someone building AI tools right now).
If you want to build an image generation model that cannot produce non-consensual nude images of women and children, there is a straightforward way to do it: don’t train it on hundreds of thousands of nude images. Data selection is a design choice. If the model has the capability to generate those images, it is only because somebody chose to feed it that data. Adding a filter afterwards that says “we won’t let you do this” is fundamentally different from building a model that genuinely cannot do it.
The first approach is protection by architecture. The second is protection by permission - and permissions can be revoked, bypassed, or switched off.
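Here is a minimal sketch of that difference, using hypothetical functions rather than any real lab’s pipeline:

```python
# Protection by architecture vs protection by permission - an illustration only.
# Neither function reflects any real model's training or serving pipeline.

BLOCKED_CATEGORIES = {"nudity", "minors"}


def curate_training_data(examples: list[dict]) -> list[dict]:
    """Protection by architecture: the model never sees the data,
    so the capability is never learned in the first place."""
    return [ex for ex in examples if not (set(ex["labels"]) & BLOCKED_CATEGORIES)]


def output_filter(generated_labels: set[str]) -> bool:
    """Protection by permission: the capability exists, and a check at
    inference time decides whether to let the output through. That check
    can be weakened, bypassed, or switched off later."""
    return not (generated_labels & BLOCKED_CATEGORIES)


dataset = [
    {"image_id": 1, "labels": ["landscape"]},
    {"image_id": 2, "labels": ["nudity"]},
]
print(curate_training_data(dataset))  # only the landscape survives into training
print(output_filter({"nudity"}))      # False - but only for as long as the filter is on
```

With the first approach there is nothing to bypass. With the second, safety lives in a check that somebody can later decide to relax.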
AI systems make thousands of decisions per second that affect what people see, what they do, and what they believe. Unlike social media, where you can at least scroll past a post, an AI’s output is the interface. There is no feed to browse. The response is the product. That makes the design choices embedded in these systems matter more, not less, than the ones the jury just found Meta and YouTube liable for.
Most of us have not lived through a shift like this. We have not experienced the introduction of a technology that has such a fundamental potential to change how we live and work. And I say this as someone who has spent almost twenty years in strategic communications, who has watched the internet and social media reshape entire industries: AI is different. The scale of what it can affect - and the speed at which it is being deployed - is unlike anything that came before it.
But Laura, what can we do about it?
Right now, I am building with AI and writing my own algorithms. I am making (or trying to make) design decisions every day that sit at the centre of this argument.
The honest version of what that looks like: choosing to build more slowly, to spend more, to accept a longer development roadmap. But the goal is to build something that protects my users, not something that serves only my commercial interests.
It means designing GDPR compliance into the architecture, not treating it as a checkbox. Explicit consent. UK data residency. Terms and conditions written so that actual humans can understand them. None of this is technically difficult. It is commercially ‘inconvenient’. But that is a different problem.
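A small sketch of what I mean by designing it into the architecture - illustrative types, not our production code:

```python
# Consent and residency as part of the data model, not a bolt-on check.
# Hypothetical types for illustration only.
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass(frozen=True)
class ConsentRecord:
    purpose: str           # what the data will be used for, in plain language
    granted_at: datetime   # explicit, timestamped opt-in - no consent, no record


@dataclass(frozen=True)
class UserRecord:
    user_id: str
    region: str                          # e.g. "uk" - residency is part of the schema
    consents: tuple[ConsentRecord, ...]

    def may_process(self, purpose: str) -> bool:
        # Processing without a matching consent simply isn't representable here.
        return any(c.purpose == purpose for c in self.consents)


user = UserRecord(
    user_id="u-123",
    region="uk",
    consents=(ConsentRecord("personalised suggestions", datetime.now(timezone.utc)),),
)
print(user.may_process("personalised suggestions"))  # True
print(user.may_process("advertising"))               # False - never opted in
```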
There is another tension, and it’s one I wrestle with in my own work: we are building with models that have themselves been trained on scraped data, on copyrighted materials, on information collected without meaningful consent - all while paying licence fees to the private companies behind them. We have to weigh the performance benefits of the latest commercially available models against the ethical case for an open-source or less extensively trained alternative (or attempting what feels impossible: building our own), knowing that choosing the ethical route means finding the time and the budget to do our own fine-tuning. As a small studio building our first product, that is not a trivial trade-off.
I don’t think there is judgement to be placed on companies and independent builders who, like me, are trying to do the best they can with the tools available. But I do think that, as a builder, you should be cognisant of that trade-off and be able to articulate the decisions you have made and why. “We didn’t think about it” is not an answer a jury is going to accept. Not any more.
Role models wanted
Last week’s verdict confirmed something that many of us have known for a long time: the people and companies designing these systems bear responsibility for the harm those systems cause. “It’s just a tool” is no longer a legal defence. The jury looked at the architecture and said: you built this to do what it did.
That precedent does not stop at social media. It extends to every AI system in deployment and every one currently in development.
What I don’t know - and what I am *genuinely* asking - is who is doing this well? Who are the researchers, the builders, the companies that are designing harm prevention into the architecture of their AI systems from the start, rather than bolting on safety features when the lawsuits arrive?
I am looking for those people. I want to learn from them. I want to work with them.
Meta was found liable by two separate juries last week. $381 million in combined damages. And their researchers are still being invited onto conference stages to talk about responsible AI (I know, I’ve sat in the audience and listened to them). They are still publishing papers on AI safety. A company that has exploited millions (billions?) of people, that was caught in the Cambridge Analytica scandal,¹¹ that was never built on ethical foundations, is still positioning itself as a credible voice on how AI should be developed. If that doesn’t tell you that the current version of “responsible AI” isn’t working, I don’t know what will.
If you are working on this, or you know someone who is, I would love to hear from you.
This article is part of Frontier Philosophies, where I think out loud about what it means to build AI responsibly. Subscribe to get new pieces as they’re published.
Notes
¹ The jury awarded $3 million in compensatory damages, split 70% Meta and 30% YouTube, and later recommended an additional $3 million in punitive damages - $2.1 million from Meta and $900,000 from YouTube. The trial ran for seven weeks in Los Angeles Superior Court. Sources: NPR, NBC News, CNN
² The New Mexico jury found Meta had violated state consumer protection laws and misled residents about the safety of Facebook, Instagram, and WhatsApp. The $375 million figure was based on the number of violations. Meta said it would appeal. Source: CNBC
³ The plaintiff’s legal team deliberately focused on alleged design flaws - recommendation algorithms, engagement-optimising features - rather than specific content, in order to counter Section 230 defences. Source: CNBC
⁴ Section 230 of the Communications Decency Act (1996) provides that internet companies are not liable for content posted by users. It has been the primary legal shield for social media companies in the US for nearly thirty years.
⁵ Character.AI and Google agreed in January 2026 to settle five lawsuits from families alleging that Character.AI chatbots harmed minors and contributed to two suicides. Sources: Fortune, Axios
⁶ Sewell Setzer III, 14, of Orlando, Florida, died by suicide in February 2024 after months of messaging a Character.AI chatbot. In their final exchange, the chatbot told him to “come home” to it. His mother, Megan Garcia, filed the first federal lawsuit against the company in October 2024. Source: CNN
⁷ Analysis conducted by AI Forensics, a European non-profit, examining images generated by Grok between 25 December 2025 and 1 January 2026. Source: Wikipedia - Grok sexual deepfake scandal
⁸ Separate analysis conducted over 24 hours from 5-6 January 2026. Source: Wikipedia - Grok sexual deepfake scandal, citing original reporting from multiple outlets.
⁹ Three Tennessee teenagers filed the class action on 16 March 2026, alleging xAI’s tools were used via a third-party app to create child sexual abuse material from their photos. The images were distributed alongside their first names and the name of their school. Source: NPR
¹⁰ After mounting criticism, xAI restricted Grok’s image generation on X to paying subscribers on 9 January 2026, and said it had “implemented technological measures” to prevent editing images of real people in revealing clothing. However, AI Forensics found the platform was still being used to generate sexualised images. Source: Euronews
¹¹ In 2018, it was revealed that political consulting firm Cambridge Analytica had harvested personal data from millions of Facebook users without consent. The scandal led to a $5 billion FTC settlement and became a defining moment in public awareness of data exploitation by social media platforms.

