
Generative AI

Appropriate Use

Always talk with your professor and review the relevant policies (e.g., syllabus statements, honor code, publisher policies) before using generative AI tools or features in your work.


Make sure you understand whether and how you're allowed to use generative AI, including at which stages of your research and how to cite or indicate your use.

Be intentional about using generative AI. This means:

  • understanding how the tool or feature works
  • considering how perspectives, experiences, and communities are represented (AI is not “neutral” and does not “contain all knowledge”)
  • verifying any information you get from generative AI
  • considering how generative AI use may affect your learning and research skill development

You are responsible for your own work and should be able to explain the decisions you make about your process.

Asking yourself questions about the values, benefits, and costs of using generative AI can help clarify your reasons.

Privacy, security, accessibility

Generative AI tools vary in terms of privacy, security, and accessibility. Before using any tool, make sure you understand how it handles your data and any risks involved.

Most generative AI tools and platforms keep your chat/prompt history and many use your data to train their models. Your data (including uploads) may also be reviewed for quality control or abuse prevention.

  • You may be able to opt out of data collection; check the tool's policies and data settings for specific instructions.

Protecting Privacy & Intellectual Property

It's best not to share anything with a generative AI tool that you don’t want to become public, or that you don’t want used in future generative AI outputs.

Don’t share private, sensitive, or personally identifiable information (PII) about yourself or others. This includes materials from your faculty, such as lecture notes, slides, and audio recordings. Consult College guidelines before use.

Generative AI Research Tools

Some generative AI tools can search the open web or specific databases such as Semantic Scholar. This allows generative AI models to combine generated output with external sources and provide links as “citations.” These are not the same as citations in research papers: the AI tool does not “read” papers the way a person does. It matches or extracts passages of text based on their similarity in meaning to your input.
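As a rough sketch of that matching step: retrieval-style tools score stored passages against your prompt and surface the highest-scoring ones. The toy Python below uses simple word-count vectors and cosine similarity to illustrate the ranking idea; real tools use learned embeddings that capture meaning, and the passages, query, and function names here are invented for illustration.

```python
# Toy illustration of similarity-based matching (hypothetical example).
# Real AI research tools use learned embeddings; here, simple word-count
# vectors and cosine similarity show the general idea of ranking passages.
import math
from collections import Counter

def vectorize(text: str) -> Counter:
    """Turn a passage into a bag-of-words count vector."""
    return Counter(text.lower().split())

def cosine_similarity(a: Counter, b: Counter) -> float:
    """Cosine similarity between two count vectors."""
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

passages = [
    "Coral bleaching is driven by rising ocean temperatures.",
    "The stock market closed higher on Friday.",
]
query = "What causes coral reefs to bleach?"

# Rank passages by similarity to the query; the top match is the kind of
# text a retrieval-based AI tool might quote or "cite" in its response.
ranked = sorted(passages,
                key=lambda p: cosine_similarity(vectorize(query), vectorize(p)),
                reverse=True)
print(ranked[0])
```

The point of the sketch is that the tool ranks and quotes text it retrieves; it does not verify whether a passage actually supports the claims it generates around it.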

Limitations

  • AI tools may hallucinate, or generate false information. You may follow a link “cited” in the response and discover that the source does not contain that information at all.

  • AI tools may misrepresent information in sources by oversimplifying complex topics, missing key context, or failing to register tone (e.g., satire).

  • AI tools do not consider source authority and may refer to inappropriate sources for your project.

  • AI tools may miss sources that would be relevant to your project, because they are in a database the tool cannot access.

  • AI tools may overgeneralize, or provide conclusions unsupported by appropriate evidence.

It is important to directly check the sources referenced in any AI-generated summary. You should also consider whether the database (if known) is broad enough to contain the information you need, or whether you should supplement with searches in subject-specific library databases in your field of interest.

Evaluating Generative AI Outputs

Generative AI is a tool, not a source: it works by prediction, not by evaluating or creating information the way a human author does. Always evaluate the sources provided by a generative AI tool yourself.


Keep a critical perspective when evaluating the outputs of any generative AI tool. Consider the following aspects:

Bias

Generative AI has been shown to reproduce and amplify social biases present in the datasets it was trained on. If those datasets underrepresent or exclude certain communities of knowledge, practices, or languages, the model may misrepresent, or fail to represent, these communities and cultures. It may also reproduce harmful social stereotypes or associations in its outputs. Carefully consider which perspectives and communities are represented in the model's outputs.

Accuracy

Generative AI tools may hallucinate or confabulate, meaning they produce inaccurate information. Despite model improvements, this problem may be impossible to eliminate completely. Because AI outputs often seem coherent and persuasive, it can be difficult to recognize inaccurate information if you don’t have background knowledge in the subject area. Generative AI may also fail to capture context and nuance when summarizing or linking information from multiple sources, creating summaries with inaccuracies and misattributions.

Transparency

Generative AI tools often aren’t transparent about their training datasets, how they process or retrieve sources to generate responses (these processes are usually simplified for end users), or how they handle user inputs. Tools also have “system prompts” embedded in every input that end users do not see. Consider whether you understand how a tool works well enough to determine whether it’s a good match for your goals, process, and values.
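To make the idea of a hidden “system prompt” concrete: chat-style tools typically wrap your message with provider-written instructions you never see. The snippet below is a generic sketch of that structure; the model name and system-prompt wording are invented for illustration and do not reflect any specific vendor's actual prompt.

```python
# Generic sketch of how a chat tool might assemble a request.
# The "system" content is hidden instructions set by the provider or
# platform; end users only see their own "user" message and the reply.
# The model name and system-prompt text here are invented examples.
request = {
    "model": "example-model",  # hypothetical model name
    "messages": [
        {"role": "system",
         "content": "You are a helpful research assistant. Decline unsafe requests."},
        {"role": "user",
         "content": "Summarize recent research on coral bleaching."},
    ],
}
print(request["messages"][0]["content"])  # the part the end user never sees
```

Because these hidden instructions shape every response, two tools built on the same underlying model can still behave quite differently.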


If you’re unsure of how to evaluate generative AI outputs in your research process, research librarians can help!

Citing Generative AI

Always check with your professor before using generative AI for coursework: faculty may have different policies across types of assignments or in different fields of study.

When to cite

You should indicate when you've used an AI tool in any of the following processes:

  • gathering information
  • writing text
  • editing text
  • synthesizing ideas
  • cleaning or manipulating data

Depending on your citation style, you may use a citation, a note, or an in-text acknowledgement to indicate AI use.

Sources cited by AI: When an AI tool mentions a source, you should always check that source yourself and cite it directly. Generative AI tools can create fake citations and also misrepresent the information within real sources.


Citation guidance by style guides


Information to save

When using generative AI tools, you'll want to capture all the information you might need for citation. This includes:

  • name and version of the tool (e.g., ChatGPT 4o)
  • time and date of usage
  • your prompt
  • output
  • any follow-up prompts and outputs
  • name of the user

Saving your prompts and the outputs is especially important because generative AI tools can provide different outputs in response to the same prompts.
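If you want to keep this information systematically, a simple dated log works. The Python sketch below appends one record per session to a file; all field names and sample values are only suggestions, so adapt them to your citation style.

```python
# Minimal sketch of logging generative AI usage for later citation.
# Field names and sample values are suggestions, not a required format.
import json
from datetime import datetime, timezone

record = {
    "tool": "ChatGPT",          # name of the tool
    "version": "4o",            # model/version, if known
    "datetime": datetime.now(timezone.utc).isoformat(),
    "user": "Your Name",
    "prompt": "Explain the causes of coral bleaching.",
    "output": "(paste the tool's full response here)",
    "follow_ups": [],           # later prompts and outputs
}

# Append the record to a running log, one JSON object per line.
with open("ai_usage_log.jsonl", "a", encoding="utf-8") as f:
    f.write(json.dumps(record) + "\n")
```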

Zotero

Zotero does not have an item type for generative AI. Currently, the best practice is to use the "Software" item type and experiment with the fields according to your style guide's requirements.


Sources: Adapted from the MIT Libraries Citing AI Tools guide, the Brown University Library Citation and Attribution with AI Tools guide, and the Harvard Library Citing Generative AI guide.