With the release of ChatGPT in November 2022 and the rapid explosion of generative AI that followed, we are in a period filled with hype and more questions than answers, two partners with Seyfarth Shaw noted in a June 13 webinar for members of America's Newspapers.
During the webinar on the "Impacts of Generative AI on the Newspaper Industry," J. Stephen Poor and Puya Partow-Navid gave members an overview of the basics, as well as a look at some of the risks and benefits of generative AI, and emerging legal issues.
Poor said it's important to understand that generative AI, the ability of software and algorithms to take existing information and generate new content without human intervention, is not a search technology. It is programming that allows a machine, for example, to predict and produce words based on the words that came before them.
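The idea of predicting the next word from previous words can be illustrated with a toy sketch. The example below is a simple bigram counter, far cruder than the neural networks behind commercial large language models, but it shows the basic principle of generating text from statistics about which words tend to follow which; the corpus and function names here are illustrative, not drawn from any real system.

```python
from collections import Counter, defaultdict

# A tiny toy corpus; real large language models train on vastly larger datasets.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word (a "bigram" model).
followers = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    followers[prev][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the corpus."""
    counts = followers.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat" follows "the" most often in this corpus
```

A model like this can only echo patterns in its training data, which is one reason gaps or errors in that data show up in the output, as the speakers caution below.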
Within newsrooms, Poor said that might include first drafts of an article, headline generation (where you input an article you've written and ask it to produce a headline), a summary of the article, transcriptions, etc.
But, this comes with some cautions, he said. Some models have only been trained on datasets up to 2021, and you may not always know what data it is using.
There also can be errors (hallucinations) in the output, based, in part, on inaccurate training data, gaps in training data and/or biases. "Anyone who relies on these large language models to produce 100 percent accurate information is making a terrible mistake," he said.
Partow-Navid noted that there are several pending copyright infringement cases that may set precedents for what's going to happen with AI in the future. For example, in one case, a group of artists is suing AI companies claiming that generative AI is using their copyrighted images without obtaining consent to allow users to create works that are in the style of specific artists.
Another case, filed by Getty Images, centers on its pool of copyrighted and trademarked images, which it says are being used to train another company's image generator, which then creates images based on Getty Images' work. In some cases, Partow-Navid said, the Getty Images watermark logo is even being added to the generated images, confusing consumers as to where the images originated.
He said that while the U.S. Copyright Office has shed some light on the question of who owns data, there are many other areas that have not been tested. For example, the Copyright Office has said that there needs to be some human involvement with generative AI that's based on text in order for it to be copyrightable.
But, he said, many other areas remain unclear, and regulation is only beginning to emerge.
Partow-Navid said New York City has a law that restricts employers and employment agencies from utilizing specific AI tools for hiring or promotion purposes. These tools can only be used if they have undergone a bias audit within the past year, the audit results are publicly accessible, and the necessary notification obligations to employees or job candidates have been fulfilled. California is also working on some legislation in this area.
There also has been some talk, he said, of licensing AI models or requiring AI models to put a watermark on AI-generated content. But, nothing has been settled in this area yet.
Your AI strategy
Poor advised members of America's Newspapers to have a strategy for the use of AI that is communicated to members of their team, as well as to consumers.
Whether this takes the form of a formal written policy or a memorandum, he said it is important to ensure that your reporters, for example, understand the risks and benefits of using AI so they use it appropriately and consistently with whatever guidance the newspaper has settled on.