One use of AI systems involves the user inputting prompts, i.e., written instructions, into the system to generate output. Prompts alone, without more, “do not provide sufficient human control to make users of an AI system the authors of the output” because prompts “function as instructions that convey unprotectible ideas.” Here, the Office used joint authorship as an analogy for assessing the protection to be afforded a user inputting prompts into an AI system. While a user of an AI system can input highly detailed directions (prompts) that contain their desired expressive elements, the lack of control over the output ultimately renders the final result unprotectible because “[p]rompts do not appear to adequately determine the expressive elements produced, or control how the system translates them into an output.”
However, not all prompt-based outputs are unprotectible. In one example cited by the Office, a user entered a prompt along with a hand-drawn illustration (an “input”) into an AI system, which produced an image in response. Because the generated image incorporated the user’s copyrightable work, and that work remained perceptible in the output, copyright in this type of AI-generated output would cover the perceptible human expression. The scope of protection afforded the output is analogous “to that in a derivative work.”
While the Office would not extend protection to a prompt-generated image, song, or text alone, it does find that “human authors should be able to claim copyright if they select, coordinate, and arrange AI-generated material in a creative way.” Examples include the registration of a comic book featuring human-selected, AI-generated images and human-authored text. There is no doubt that the human-authored text alone would be protectible, whereas the AI-generated images (assuming they lacked a protectible input in their generation) would not. From a technology standpoint, tools that allow the user to “control the selection and placement of individual creative elements” may generate protectible outputs where the modifications, like those in a derivative work, “rise to the minimum standard of originality required under Feist.” Where the standard for originality falls with respect to AI-generated outputs will evolve through legal precedent.
Finally, the Office addressed the question of whether new laws should be enacted to address AI-generated material. It concluded that “[t]he case has not been made for additional protection for AI-generated material beyond that provided by existing law.” If the purpose of existing law is to provide an incentive for creation, the Office did not suggest Congress pass new law(s) to incentivize generative developers beyond the existing “patent, copyright, and trade-secret protection for the machinery and software, as well as potential funding and first-mover advantages.” Here, the Office is more concerned with AI’s financial impact on human authors and their outputs: “if a flood of easily and rapidly AI-generated content drowns out human-authored works in the marketplace, additional legal protection would undermine rather than advance the goals of the copyright system. The availability of vastly more works to choose from could actually make it harder to find inspiring or enlightening content.” Their analysis on this point, to this author, seems not to consider the role of agents, marketers, critics, and institutions in promoting human-authored works, or the consumer’s discretion in selecting and promoting that which is “inspiring” or “enlightening.” However, perhaps future reports will look further into the market analysis behind such conclusions when assessing the still-to-be-considered issues of liability and licensing.