The legal framework for AI is being built in real time, and a ruling in the Sarah Silverman case should give publishers pause

That an AI model was trained on copyrighted material does not make all of the model’s outputs a copyright violation


When the comedian Sarah Silverman sued Meta over its AI model LLaMA this summer, it was pretty big news. (And that is, of course, kind of the point. Silverman is actually one of three co-plaintiffs in the case, but not as many people will click a headline about "Kill City Blues" author Richard Kadrey or "Father Gaetano’s Puppet Catechism" author Christopher Golden.)

But it got far less attention last week when a federal judge dismissed most of the case — and set a high bar for proving what remained.

To be clear: The legal framework for generative AI — large language models, or LLMs — is still very much TBD. But things aren’t looking great for the news companies dreaming of billions in new revenue from AI companies that have trained LLMs (in very small part) on their products. While elements of those models’ training will be further litigated, courts have thus far not looked favorably on the idea that what they produce is a copyright infringement.
