The New York Times sues OpenAI and Microsoft over AI use of copyrighted work

The New York Times sued OpenAI and Microsoft for copyright infringement on Wednesday, Dec. 27, opening a new front in the increasingly intense legal battle over the unauthorized use of published work to train artificial intelligence technologies.

The Times is the first major American media organization to sue the companies, the creators of ChatGPT and other popular A.I. platforms, over copyright issues associated with its written works. The lawsuit, filed in Federal District Court in Manhattan, contends that millions of articles published by The Times were used to train automated chatbots that now compete with the news outlet as a source of reliable information.

The suit does not include an exact monetary demand. But it says the defendants should be held responsible for “billions of dollars in statutory and actual damages” related to the “unlawful copying and use of The Times’s uniquely valuable works.” It also calls for the companies to destroy any chatbot models and training data that use copyrighted material from The Times.

Statement by News/Media Alliance:

The News/Media Alliance applauds The New York Times for filing its lawsuit asserting that Microsoft and OpenAI have violated the law by taking millions of the Times’ copyrighted works to use in their products.

Alliance President and CEO Danielle Coffey stated, “The New York Times’s complaint demonstrates the value of quality journalism to AI developers. These companies repurpose and monetize news content, competing with the very industry they are benefiting from. Quality journalism and GenAI can complement each other if approached collaboratively, but using journalism without permission or payment is unlawful, and certainly not fair use.”

A White Paper released by the Alliance in October, based on a commissioned analysis, found that AI training models rely heavily on journalistic and creative content and that verbatim text from news articles appears in AI outputs. These findings support a legal claim and parallel the assertions in The New York Times' complaint.

Coffey continued, "The value of quality journalism has been debated for years. We are at a point where the question is not whether quality journalism should be compensated, but rather how much. In the case of AI, addressing copyright-protected content used without authorization should be a priority in releasing these technologies to the public, so that responsible innovation can live alongside responsible reporting."