Academic and trade publishers are increasingly licensing their authors’ works to external companies as training data for AI models and AI features. Authors may have limited ability to opt out of these deals. Read contracts carefully and ask about generative AI terms and conditions.
A recent class-action lawsuit against Anthropic (Bartz v. Anthropic) resulted in a $1.5 billion settlement in which the company agreed to pay damages to authors after training its models on multiple pirated collections of copyrighted works. Authors can search the settlement works list for their works and file a claim by March 23, 2026.
Other companies are likely to have used pirated collections of copyrighted works to train their models. If courts find that the companies willfully infringed copyright, significant damages could be awarded to the plaintiffs.
Be aware that if you post your work online or to an academic social network (e.g., ResearchGate, Academia.edu), it will likely be scraped by generative AI companies, either as training data or as a “source” cited in outputs.
Publisher policies on the use of generative AI vary, though most require disclosure and citation of AI use and prohibit crediting AI as an author. Authors remain fully responsible for any errors or plagiarism resulting from their use of generative AI.
A basic guideline for AI-augmented authorship:
Do not upload copyrighted, private, sensitive, or personally identifiable information (PII) to generative AI tools that the College has not licensed; this includes "free" or "personal" accounts. Doing so may infringe copyright, violate data privacy, and/or put your own work at risk.