Katherine Klosek, Association of Research Libraries (CC-BY)

Generative artificial intelligence (AI) is a technology that can help authors and other creators brainstorm ideas, edit original works, conduct research, and more. But rather than rely on existing law to address questions such as whether ingesting works to train AI models is fair use, or whether works that include AI-generated content are eligible for copyright protection, some policymakers in the US seem determined to develop new legal frameworks or licensing regimes. This month, the Library Copyright Alliance (LCA) issued principles to guide policymakers in their conversations around copyright law and AI. LCA is the voice of the library community on copyright policy; its members, the American Library Association (ALA) and the Association of Research Libraries (ARL), represent over 300,000 information professionals and thousands of libraries.

The LCA principles hold that US copyright law is fully capable of addressing questions about AI-generated outputs. For instance, in March of this year the US Copyright Office issued registration guidance reiterating the long-standing requirement that a work must be authored by a human to receive copyright protection. In a recent webinar, the Copyright Office clarified that registrants should disclose AI-generated elements of a work using the same process as other unclaimable elements (such as works in the public domain or previously registered works). Applicants, however, are not required to disclose when a work contains a de minimis amount of AI-contributed authorship, for example when AI is used to edit or blur an original work. To test whether an AI contribution is de minimis, the Office encouraged potential applicants to consider whether that element of the work would be eligible for registration if it had been produced by a human author.

Concerns about AI ingesting an original copyrighted work and producing an output that is substantially similar to the original can also be addressed by existing law: the copyright owner of the original work can sue both the AI provider and the user who prompted the AI to produce the substantially similar work.

On the input side, ingesting copyrighted works to create large language models or other AI training databases is an established fair use, consistent with the precedent set in Authors Guild v. HathiTrust and affirmed in Authors Guild v. Google. In those cases, the US Court of Appeals for the Second Circuit held that ingesting vast quantities of works for the purpose of making non-expressive uses of them, such as text and data mining, was a fair use. Of course, copying and displaying unprotected elements of works, such as facts, is not infringement, per Feist Publications v. Rural Telephone Service Company.

The LCA principles were distilled from points that LCA made during its participation in the Copyright Office listening session on generative AI and copyright as it relates to literary works. On July 5, LCA submitted the principles to the US Office of Science and Technology Policy (OSTP) in response to its request for information to update US national priorities and future actions on AI. LCA will continue to engage with the Copyright Office initiative on copyright and AI, with the Biden-Harris administration as it develops a National Artificial Intelligence Strategy, and with other federal policymakers to ensure that legislation and regulation do not stifle the power of AI to express creativity, and that creators may use AI in furtherance of the objectives of the copyright system. These principles will also guide LCA's participation in international coordination and policy setting on AI and copyright.