Could End Users of Generative AI Systems Face Copyright Infringement Liability?


Generative AI tools, like OpenAI’s ChatGPT, have moved from experimental novelty to operational necessity at a remarkable pace. These systems are becoming increasingly embedded in workflows across industries. However, as AI’s prevalence—and its users’ productivity—continues to scale, so too does collective apprehension over a critical question. If (or perhaps more accurately, when) AI generates output that infringes a third party’s exclusive rights in their original work, can the end user—the one who prompted the AI to create that output—face liability for copyright infringement?

While generative AI is relatively new, the answer to the foregoing question might not be. Courts have yet to directly adjudicate this issue, but traditional U.S. copyright law principles provide a usable framework for forecasting end users’ potential liability. We know that the author of an original work possesses exclusive rights in that work, including the rights to reproduce, distribute, publish, and create derivative works based on it.[1] We also know that generative AI tools are trained on tremendous amounts of data, often including entire copies of copyrighted materials. Assuming this use of copyrighted training data to generate outputs infringes one or more of these exclusive rights (itself an actively debated question),[2] the end user might indeed be liable as the one who engaged in the volitional conduct giving rise to such infringement.

Direct copyright infringement cases involving automated systems have historically focused not on whether there was conscious copying but on whether there was “some element of volition or causation.”[3] In Religious Technology Center v. Netcom On-Line Communication Services, Inc., the court held that an internet service provider whose system automatically stored infringing material posted by a third-party user was not a direct infringer, reasoning that the provider’s automated processes lacked the requisite volitional conduct.[4] The Second Circuit reinforced this principle in Cartoon Network LP v. CSC Holdings, Inc., observing that it “seems clear” that “the person who actually presses the button . . . supplies the necessary element of volition, not the person who manufactures, maintains, or . . . owns the machine.”[5] This volitional element of direct copyright infringement has been reaffirmed more recently across various circuits.[6]

While the above case law focused on whether the automated system at issue possessed the requisite volition, we can flip the perspective to extrapolate the liability risk facing end users. On one hand, contrary to the analogy used in Netcom and Cartoon Network, the process behind generative AI is hardly as simple as pressing the “copy” button on a photocopier.[7] AI produces novel outputs that bear unpredictable relationships to the underlying training data. As the U.S. Copyright Office itself has observed in the context of the copyrightability of AI-generated works, an end user who prompts AI is more analogous to a person who hires an artist and provides general instructions for a commissioned work than to one who operates a mechanical reproduction device.[8] The end user may shape the direction of the output, but they cannot entirely dictate its precise contents. If end users lack sufficient control over original outputs to claim copyright protection in AI-generated works, it may follow that they also lack sufficient volition in creating infringing outputs to be liable for direct copyright infringement. That said, where an end user’s inputs are intentionally and specifically designed to elicit expression based on a copyrighted work, a copyright holder may plausibly argue that the end user has supplied the requisite volitional conduct.

Whether there is sufficient volition to give rise to direct infringement will be a key issue in copyright liability claims against end users, but three other issues are worth briefly noting here. First, an end user might face secondary copyright liability, either contributory or vicarious. However, this theory faces several obstacles: (1) it first requires a finding that the AI provider is a direct infringer (a far from settled matter);[9] (2) contributory infringement requires knowingly inducing, causing, or materially contributing to the direct infringer’s conduct,[10] but the end user sits downstream of the AI provider, receiving outputs rather than providing the “site and facilities” typically found actionable in such cases;[11] and (3) vicarious liability requires supervisory control over the direct infringer and a direct financial interest in the infringing activity,[12] but while the end user may direct outputs and use them for commercial gain, it is a further stretch to say they have any authority over the AI’s model or process.

Second, other copyright doctrines—like the requirement of actual copying or the idea-expression distinction—may foreclose a claim before ever reaching the above questions. While copyright infringement is a strict liability tort,[13] the plaintiff must still prove that the defendant actually copied the original work.[14] An end user might argue that they had no knowledge of the copyrighted training data and therefore could not have in fact copied it. However, “a defendant may be liable where he copied . . . from a third party, who in turn had copied from the plaintiff.”[15] Applied here, an end user’s prompting of output effectively derived from copyrighted training data may suffice to establish actual copying. Separately, the idea-expression distinction establishes that copyright protects creative expression, not the underlying idea behind that expression.[16] Drawing that line with respect to AI output (i.e., when it draws generally on the themes, structure, or style of a copyrighted work versus substantially reproducing its expression) presents a separate issue.

Third, even assuming an end user meets all elements of infringement, the affirmative defense of fair use is ever-looming. This doctrine permits otherwise infringing uses of copyrighted material where certain factors weigh in favor of such use, including the purpose of the use, the nature of the copyrighted work, the amount used, and the effect on the copyrighted work’s market.[17] Whether AI’s use of copyrighted training data qualifies as fair use is being actively litigated;[18] whether an end user’s prompting and use of outputs would independently qualify for such a defense is a separate question.

Attorneys advising clients who rely on generative AI in their workflows should treat AI outputs with the same caution one would apply to any work product of uncertain provenance. This includes (1) implementing review processes to screen outputs for potential similarity to existing copyrighted works before publication or commercial use; (2) retaining records of prompts and the context in which outputs were generated, which may be critical to establishing (or negating) volitional conduct; (3) avoiding prompts that specifically reference or seek to replicate the expression of known copyrighted works; and (4) staying attuned to the terms of service of AI platforms (which in many cases disclaim liability and contractually shift risk to the end user, regardless of how courts ultimately resolve the copyright question).[19] Generative AI is still an emerging technology, and copyright litigation focused directly on it is even further in its infancy, so the ultimate landscape could shift meaningfully over the next several years. In the meantime, the safest course is to diligently avoid using AI-generated work product that resembles another’s copyrighted materials without authorization.


[1] 17 U.S.C. § 106.

[2] See generally U.S. Copyright Off., Copyright and Artificial Intelligence, Part 3: Generative AI Training (2025).

[3] Religious Tech. Ctr. v. Netcom On-Line Commc’n Serv., Inc., 907 F. Supp. 1361, 1370 (N.D. Cal. 1995).

[4] Id.

[5] 536 F.3d 121, 131 (2d Cir. 2008).

[6] See, e.g., Perfect 10, Inc. v. Giganews, Inc., 847 F.3d 657 (9th Cir. 2017); BWP Media USA Inc. v. Polyvore, Inc., 922 F.3d 42 (2d Cir. 2019) (per curiam); see also Concord Music Grp., Inc. v. X Corp., No. 23-CV-00606, 2024 WL 945325 (M.D. Tenn. Mar. 5, 2024).

[7] See Netcom, 907 F. Supp. at 1369; Cartoon Network, 536 F.3d at 131.

[8] See U.S. Copyright Off., Copyright and Artificial Intelligence, Part 2: Copyrightability 18 (2025).

[9] See generally U.S. Copyright Off., Copyright and Artificial Intelligence, Part 3: Generative AI Training (2025); In re OpenAI, Inc., Copyright Infringement Litig., 776 F. Supp. 3d 1352, 1353 (J.P.M.L. 2025).

[10] See Gershwin Publ’g Corp. v. Columbia Artists Mgmt., Inc., 443 F.2d 1159, 1162 (2d Cir. 1971).

[11] See Fonovisa, Inc. v. Cherry Auction, Inc., 76 F.3d 259 (9th Cir. 1996).

[12] See Gershwin, 443 F.2d at 1162.

[13] See 17 U.S.C. § 501(a).

[14] Restatement of the Law, Copyright § 7.03 (Am. L. Inst., Tentative Draft No. 4, 2023).

[15] Pye v. Mitchell, 574 F.2d 476, 481 (9th Cir. 1978).

[16] 17 U.S.C. § 102(b).

[17] Id. § 107.

[18] See, e.g., In re OpenAI, Inc., Copyright Infringement Litig., 776 F. Supp. 3d 1352, 1353 (J.P.M.L. 2025); see also Christopher J. Sullivan, Fair Use After the Supreme Court’s Warhol Decision—and What It Could Mean for Generative AI, Keynotes (May 22, 2023).

[19] For example, OpenAI’s Terms of Use prohibit using ChatGPT “in a way that infringes, misappropriates or violates anyone’s rights” and require that any user who is a “business or organization . . . indemnify and hold harmless [OpenAI] from and against any costs, losses, liabilities, and expenses (including attorneys’ fees) from third party claims arising out of or relating to [use of ChatGPT or its outputs] or any violation of these Terms.” Terms of Use, OpenAI (last visited Mar. 5, 2026).


