Make sure you understand whether and how you're allowed to use generative AI, including at which stages of your research and how to cite or indicate your use.
Be intentional about using generative AI. This means:
You are responsible for your own work, and should be able to explain the decisions you make about your process.
Asking yourself questions about the values, benefits, and costs of use can help clarify your reasons.
Generative AI tools vary in terms of privacy, security, and accessibility. Before using any tool, make sure you understand how it handles your data and any risks involved.
Most generative AI tools and platforms retain your chat and prompt history, and many use your data to train their models. Your data (including uploads) may also be reviewed for quality control or abuse prevention.
It's best not to share anything with a generative AI tool that you don’t want to become public, or that you don’t want used in future generative AI outputs.
Don’t share private, sensitive, or personally identifiable information (PII) about yourself or others. This includes lecture notes, slides, or audio from your faculty. Consult College guidelines prior to use.
Some generative AI tools can search the open web or specific databases like Semantic Scholar. This allows generative AI models to combine generated outputs with external sources, providing links as “citations.” These are different from citations in research papers: the AI tool does not “read” papers like a person does. It matches or extracts sections of text based on similarity in meaning to your input, as shown in the sketch after this list.
AI tools may hallucinate, or generate false information. You may follow a link “cited” in the response and discover that the source does not contain that information at all.
AI tools may misrepresent information in sources by oversimplifying complex topics, missing key context, or failing to register tone (for example, satire).
AI tools do not consider source authority and may refer to inappropriate sources for your project.
AI tools may miss sources that would be relevant to your project, because they are in a database the tool cannot access.
AI tools may overgeneralize, or provide conclusions unsupported by appropriate evidence.
It is important to directly check the sources referenced in any AI-generated summary. You should also consider whether the database (if known) is broad enough to contain all of the information you may need, or if you should supplement with specific library-database searches in your field of interest.
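To see what “similarity in meaning” looks like under the hood, here is a minimal Python sketch of embedding-based retrieval. It is an illustration, not any particular tool’s implementation: the paper titles, the query, and the three-dimensional vectors are all invented (real embedding models produce vectors with hundreds or thousands of dimensions).

```python
import numpy as np

# Invented "embedding" vectors standing in for a real embedding model.
# A real retrieval tool maps each text to a high-dimensional vector;
# texts with similar meanings end up pointing in similar directions.
passages = {
    "Paper A: coral bleaching and ocean temperature": np.array([0.9, 0.1, 0.2]),
    "Paper B: satire in 18th-century pamphlets": np.array([0.1, 0.9, 0.1]),
    "Paper C: reef ecosystems under heat stress": np.array([0.8, 0.2, 0.3]),
}
query = np.array([0.85, 0.15, 0.25])  # e.g., an embedding of "warming seas and coral"

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine of the angle between two vectors; closer to 1.0 means more similar."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Rank passages by similarity to the query; a tool "cites" the top matches.
ranked = sorted(passages.items(),
                key=lambda item: cosine_similarity(query, item[1]),
                reverse=True)
for title, vector in ranked:
    print(f"{cosine_similarity(query, vector):.3f}  {title}")
```

Note that the ranking is pure vector geometry: nothing here reads, understands, or verifies the passages, which is exactly why you need to check the sources yourself.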
Generative AI is a tool, not a source: it is built on prediction, and it does not evaluate or create information the way a human author does. You should always directly evaluate the sources provided by a generative AI tool yourself.
Keep a critical perspective when evaluating the outputs of any generative AI tool. Consider the following aspects:
Generative AI has been shown to reproduce and amplify social biases present in the datasets it was trained on. If the datasets underrepresent or exclude certain communities of knowledge, practices, or languages, the model may misrepresent, or fail to represent, these communities and cultures. It may also reproduce harmful social stereotypes or associations in its outputs. Carefully consider which perspectives and communities are being represented in the model's outputs.
Generative AI tools may hallucinate or confabulate, which means they produce inaccurate information. Despite model improvements, this problem may be impossible to totally eliminate. Since AI outputs often seem coherent and persuasive, it can be difficult to recognize inaccurate information if you don’t have background knowledge in the subject area. Generative AI might also fail to represent context and nuance when summarizing or linking information from multiple sources, creating summaries with inaccuracies and misattributions.
Generative AI tools often aren’t transparent about their datasets, how they retrieve or process sources to generate responses (often presented to end users only in simplified form), or how they process user inputs. Tools also have “system prompts” embedded in every input that end users do not see. Consider whether you understand how a tool works well enough to determine if it’s a good match for your goals, process, and values.
If you’re unsure of how to evaluate generative AI outputs in your research process, research librarians can help!
You should indicate when you've used an AI tool at any stage of your research or writing process.
Depending on your citation style, you may use a citation, a note, or an in-text acknowledgement to indicate AI use.
Sources cited by AI: When an AI tool mentions a source, you should always check that source yourself and cite it directly. Generative AI tools can fabricate citations and misrepresent the information within real sources.
When using generative AI tools, you'll want to capture all the information you might need for citation. This typically includes the name and version of the tool, the company that provides it, the date you generated the output, and your prompts and the outputs themselves.
Saving your prompts and outputs is especially important because generative AI tools can provide different outputs in response to the same prompt. A simple dated log, sketched below, gives you a stable record to cite.
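One low-tech way to capture this is a dated log file you append to as you work. Below is a minimal Python sketch; the file name, tool name, and version string are placeholders, and a notes document or spreadsheet serves the same purpose.

```python
import json
from datetime import datetime, timezone
from pathlib import Path

LOG_FILE = Path("ai_research_log.jsonl")  # placeholder file name

def log_interaction(tool: str, version: str, prompt: str, output: str) -> None:
    """Append one prompt/output pair, with tool details and a UTC timestamp,
    as a single JSON line, giving you a dated record for later citation."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "tool": tool,
        "version": version,
        "prompt": prompt,
        "output": output,
    }
    with LOG_FILE.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record, ensure_ascii=False) + "\n")

# Example: record an exchange right after you receive it.
log_interaction(
    tool="ExampleChat",  # placeholder tool name
    version="2025-01",   # placeholder model/version string
    prompt="Summarize recent work on coral bleaching.",
    output="(paste the tool's full response here)",
)
```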
Zotero does not have a dedicated item type for generative AI. Current best practice is to use the “Software” item type and adapt its fields to your style guide’s requirements; a sketch follows below.
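As an illustration, here is how such a record might look as a Python dictionary mirroring Zotero's API format, where the “Software” item type appears as "computerProgram". The tool name, company, dates, and URL are placeholders, and the field choices are a judgment call to adapt to your style guide.

```python
# A sketch of a Zotero record for a generative AI tool, using the "Software"
# item type (exposed as "computerProgram" in Zotero's API). All values below
# are placeholders; adapt the fields to your citation style's requirements.
zotero_item = {
    "itemType": "computerProgram",
    "title": "ExampleChat",  # placeholder tool name
    "creators": [{"creatorType": "programmer", "name": "Example AI Inc."}],
    "versionNumber": "2025-01",          # the model/version you used
    "date": "2025-01-15",                # date you generated the output
    "url": "https://example.com/chat",   # placeholder URL
    "accessDate": "2025-01-15",
    "extra": "Prompt: Summarize recent work on coral bleaching.",
}
```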
Sources: Adapted from the MIT Libraries Citing AI Tools guide, the Brown University Library Citation and Attribution with AI Tools guide, and the Harvard Library Citing Generative AI guide.