
Generative AI

Identifying Value and Costs

It can be helpful to clarify your reasons and goals for using generative AI, while being mindful of potential costs or harms.

You can ask some or all of the following questions:

  • Why do I want to use gen AI for this task, process, or project?
  • What specific function do I want this tool to perform (e.g., outlining, critiquing, brainstorming, exploring, refining)?
  • How well does this use align with my personal values?
  • What do I expect the benefits to be?
  • What might be some costs or harms involved, to myself or others?
  • How might I mitigate these costs or harms?
  • How am I protecting my own privacy and intellectual work, and that of others?
  • Am I comfortable with my opinions or approach being influenced by gen AI?
  • How might I verify information I am getting from gen AI?

It can also help to learn more about the datasets, development, and design goals of these tools in order to understand whether they will meet your needs. This can be challenging, as many projects lack transparency around these issues.

Verifying Information

Generative AI may produce plausible-seeming outputs that are entirely inaccurate, or embed factual errors within otherwise reasonable responses. Tools give no obvious indication when they have made a mistake, and systems vary in how they respond when users point out errors.

In AI research tools, this most commonly takes the form of "fake citations": citations to works that do not exist. AI tools may also produce inaccurate summaries or analyses of texts, adding information not in the original work or omitting or misrepresenting information that is.

You should have a way to cross-check any information you get from a generative AI tool against external sources. Don't rely on sources mentioned by a generative AI tool without accessing them directly yourself, and verify summaries by consulting the original source(s).


Generative AI can be used to create misinformation and disinformation at scale, and it can be difficult to determine whether content has been created or manipulated by people using generative AI. Practice critical information literacy processes like SIFT:

  • Stop
  • Investigate the source
  • Find better coverage
  • Trace claims, quotes, and media back to the original context

SIFT was created by Mike Caulfield. Clark College librarians have created a LibGuide, Evaluating Information: SIFT, that illustrates these moves.

Protecting Privacy

Many generative AI tools collect your chat and prompt history by default and use it to further train their models. Your input may also be reviewed by company employees for quality control or abuse prevention.

Data privacy and security practices vary widely across tools, and there may be different protections for enterprise-level models. If you have questions about a tool Amherst subscribes to, you can ask our campus IT department.

As a rule of thumb, it's best not to share anything with a generative AI tool that you would not want to become public, or to see "remixed" by a generative AI application.

AI and Authorship

There are also unresolved questions related to authorship and accountability when using generative AI tools.

Some basic guidelines for AI-augmented authorship are to be:

  • Intentional - consider the contexts of your work and how you plan to share it. Check whether the specific journals or organizations you plan to engage with have policies related to generative AI and authorship.
  • Transparent - disclose your use of generative AI in your work.
  • Specific - keep track of your prompts, their results, and how you've modified your process. This can help you reflect on your work and may also help others who seek to engage with it.

While specific policies will vary, Boston College Libraries has summarized some key elements of most AI publishing policies:

  • The use of generative AI must be disclosed. The appropriate place for this disclosure varies by journal and by the specific use of AI, but it would likely be through a citation or acknowledgment.
  • Generative AI cannot be a credited author. 
  • Authors are responsible for any errors or plagiarism from generative AI.

When writing an article or conducting peer review, keep in mind that uploading the text to a generative AI platform exposes it to third parties and risks breaching confidentiality, since many platforms use user data to train and improve their models.

Design Protocols & Frameworks

Communities and organizations are working to develop design protocols, community agreements, and other frameworks to support ethical development and use of generative AI (and AI generally). This is particularly important for historically marginalized groups, given the risks of bias, misrepresentation, erasure, extraction, and exploitation in AI projects.

Critical Questions

Asking critical questions about generative AI can uncover harms and illuminate power dynamics in the design, use, and assessment of tools and platforms. You can start with technoskeptical questions like the following:

  1. What does society give up for the benefits of the technology?
  2. Who is harmed and who benefits from the technology?
  3. What does the technology need?
  4. What are the unintended or unexpected changes caused by the technology?
  5. Why is it difficult to imagine our world without the technology?

You can also ask questions about the design and development process, including the specific decisions that designers have made when selecting data sets, training models, and evaluating outputs.

Mozilla Foundation provides an overview of this process as well as critical questions:

Stefan Baack, "The human decisions that shape generative AI: Who is accountable for what?" Mozilla Foundation Insights Blog, Aug. 2, 2023.