Significant AI lawsuits of first impression are currently making their way through the U.S. federal court system. They will shape the legal landscape on issues of developer liability, copyright infringement, First Amendment free speech, employment discrimination, and wrongful death, to name a few. In this post, we will discuss two of these issues: copyright infringement by AI tools and copyright protection of AI-generated output. Future posts will address the others.
Can AI Output be Copyrighted? It Depends.
In its Copyright and AI Reports, the U.S. Copyright Office has determined that only original human authorship can be copyrighted: output generated purely by AI is not eligible for copyright protection.
It remains an open question how much human control over the original expression in the output is required to trigger copyright protection; that will have to be determined on a case-by-case basis. As a starting point, however, the U.S. Copyright Office has determined that prompts alone are not sufficient.
AI & Copyright Infringement
Human authors are entitled to copyright protection of their original work that is perceptible in AI outputs, including their creative selection, coordination, arrangement, and modifications. A number of lawsuits over this issue are currently pending. Scarlett Johansson, Tom Hanks, Taylor Swift, and Game of Thrones creator George R.R. Martin have each brought suits alleging infringement of their rights in their own name, image, likeness, voice, and/or copyrighted creative expression. Most often, these cases involve generative AI output that falsely appears to depict the individual endorsing a product.
Federal copyright cases are beginning to differentiate between predictive AI and generative AI, and we can expect different treatment of each to emerge. A case currently transforming the AI infringement landscape on this issue is Thomson Reuters v. ROSS Intelligence.
In Reuters, the owner of Westlaw sued ROSS Intelligence for copyright infringement over ROSS's AI legal research tool, which used predictive rather than generative AI. Westlaw alleged that ROSS took Westlaw's copyrighted headnotes, points of law, and case holdings, and that ROSS's competing for-profit AI research product was returning Westlaw's content as its own output. The court held this was not fair use.
Fair use, as defined at 17 U.S.C. § 107, requires consideration of at least four factors:
(1) the purpose and character of the use, including whether it is commercial or nonprofit;
(2) the nature of the copyrighted work;
(3) the amount and substantiality of the portion used in relation to the copyrighted work as a whole; and
(4) the effect of the use upon the value of, or potential market for, the copyrighted work.
Factors (1) and (4) are given the greatest weight. Courts are looking closely at whether the use is commercial in nature, and whether it is transformative of the original copyrighted work, that is, whether it serves a different purpose than the original rather than the same or a similar one.
In another of the many pending AI copyright cases, Concord Music sued Anthropic for infringement of copyrighted song lyrics Anthropic used to train its AI tool, Claude. The questions in that case include how much of the original human work is perceptible in Claude's output.
A key question in all these cases is whether the AI tool used copyrighted materials only to learn and to generate, without reproducing that material in its public output.
Cases involving copied computer code, for example, have consistently held that so-called intermediate copying, done only to create a compatible product, is a fair use.
Thus far, courts are rejecting this comparison between computer code and expressive written works such as books, literary works, and films, but the law on this topic is evolving.
Substantial similarity requires evaluating whether "the later work materially appropriates the copyrighted work."
https://www.ded.uscourts.gov/sites/ded/files/opinions/20-613_5.pdf
