Research Guides | Library | Amherst College

Generative AI

Kelly Dagan
User Experience Strategist
Frost Library


This guide provides a basic overview of generative AI, including ethical issues and potential costs, to help you make informed choices about using AI tools.

GenAI tools are rapidly changing, with new information about applications, policies, and social impacts coming out daily. We've included dates for materials and will keep updating as often as possible.

What is Generative AI?

Broad definition:

Generative AI refers to AI models that can create new content, such as text, images, code, audio, or video.

Compared to other types of AI, generative AI can be understood as AI that "generates," rather than "discriminates." (1) AI that "discriminates" is used in classification and prediction tasks, like image recognition.

Generative modeling can be defined as:

"a branch of machine learning that involves training a model to produce new data that is similar to a given dataset."  (2, emphasis added)

Applied definition

Here's a plain language definition from The New York Times:

"Generative A.I.: Technology that creates content — including text, images, video and computer code — by identifying patterns in large quantities of training data, and then creating original material that has similar characteristics. Examples include ChatGPT for text and DALL-E and Midjourney for images." (3, emphasis added)

How does it work?

Generative AI models generate content by "statistically analysing the distribution of words, pixels or other elements in the data it has ingested and identifying and repeating common patterns."(4)

  • For text generators, AI models predict the next likely word in a sequence to generate fluent, plausible-seeming text. They do not "understand" prompts or text in the way another human would.
  • AI models are trained on massive datasets often scraped from the internet, which can include social media posts, personal websites, and pirated content. Training also typically involves human annotation and testing aimed at filtering out offensive or harmful outputs.
  • We often don't know exactly what data AI models have been trained on, which means that in most cases, users have no way to know what information an AI tool had access to during development.
  • Generative AI models often produce inaccurate or biased information, as a result of their training data, their development process, and their lack of true "understanding" of the material. They are fundamentally statistical models, not thinking machines. However, because their outputs mimic human capabilities like language, people may place high levels of trust in these tools without recognizing their limitations.
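The "predict the next likely word" idea described above can be seen in miniature with a toy bigram model. This is an invented sketch, not how a real large language model works internally: a real model learns from billions of words with neural networks, not a lookup table, but both repeat patterns from their training data without understanding them.

```python
from collections import Counter, defaultdict

# Toy "training data" -- a real model ingests billions of words, not three sentences.
corpus = (
    "the cat sat on the mat . "
    "the dog sat on the rug . "
    "the cat chased the dog ."
).split()

# Count which word follows which (a "bigram" table): the simplest possible
# version of "identifying and repeating common patterns" in text.
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def generate(start, length=6):
    """Repeatedly pick the most frequent next word -- no understanding involved."""
    words = [start]
    for _ in range(length):
        counts = following.get(words[-1])
        if not counts:
            break
        words.append(counts.most_common(1)[0][0])
    return " ".join(words)

print(generate("the"))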

Generative and discriminative models can also be combined. In a Generative Adversarial Network (GAN), a generator creates candidate outputs while a discriminator tries to tell them apart from real data; each network improves by training against the other, which iteratively refines the generated outputs.
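The adversarial loop behind a GAN can be sketched in miniature. The example below is an invented toy, not a practical GAN: the "dataset" is just numbers drawn from a bell curve centred on 4.0, and the generator and discriminator are each two-parameter functions trained against each other with hand-derived gradients. Real GANs use deep neural networks, but the back-and-forth structure is the same.

```python
import math
import random

random.seed(0)

REAL_MEAN = 4.0  # the "real data" are numbers drawn from a Gaussian around 4.0

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Generator G(z) = a*z + b tries to produce numbers that look "real".
# Discriminator D(x) = sigmoid(w*x + c) tries to tell real from fake.
a, b = 1.0, 0.0
w, c = 0.0, 0.0
lr = 0.02  # learning rate

def sample_fake():
    z = random.gauss(0, 1)
    return z, a * z + b

for step in range(4000):
    # --- train the discriminator on one real and one fake sample ---
    x_real = random.gauss(REAL_MEAN, 1)
    _, x_fake = sample_fake()
    d_real, d_fake = sigmoid(w * x_real + c), sigmoid(w * x_fake + c)
    # gradient step on the loss -log D(real) - log(1 - D(fake))
    w += lr * ((1 - d_real) * x_real - d_fake * x_fake)
    c += lr * ((1 - d_real) - d_fake)

    # --- train the generator to fool the (just-updated) discriminator ---
    z, x_fake = sample_fake()
    d_fake = sigmoid(w * x_fake + c)
    # gradient step on the loss -log D(fake), chained through G
    a += lr * (1 - d_fake) * w * z
    b += lr * (1 - d_fake) * w

print(f"generator offset b ended at {b:.2f} (real data centred on {REAL_MEAN})")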


1. What is Gen AI and How is it Impacting Education? A Presentation by Scott Alfeld, Assistant Professor of Computer Science, Amherst College. Sept 13, 2023. 

2. Foster, David. Generative Deep Learning: Teaching Machines to Paint, Write, Compose, and Play. 2nd ed., O’Reilly Media, 2023.

3. Pasick, Adam. “Artificial Intelligence Glossary: Neural Networks and Other Terms Explained.” The New York Times, 27 Mar. 2023.

4. Guidance for Generative AI in Education and Research. UNESCO, 2023.

What are some types of generative AI?

Text AI / Chatbots

Users provide text or voice prompts to these tools, which are designed to provide fluent, conversational responses. Popular examples include ChatGPT, Google Bard (since renamed Gemini), and Microsoft Bing Chat (since renamed Microsoft Copilot).

Image AI

Users provide descriptions of images or image effects, with options to modify the output in various ways. Popular examples include DALL-E, Midjourney, Stable Diffusion, and Firefly.

Code AI

Users can provide specifications to generate code, or have existing code reviewed and checked against specific criteria.

Video, audio AI

Users can provide prompts to generate videos or create video effects. Users can provide text to render in audio, or apply effects to audio, including rendering in different "voices."

These tools are continuing to develop, with new features such as voice prompting and image recognition being integrated into text-based tools. There are also smaller-scale models being developed as open-source projects.

It's important to review the privacy, security, safety, and ethical aspects of any generative AI platform you're using.

What is prompt engineering?

While generative AI tools may seem intuitive, it can be difficult to get them to produce exactly the output you want. They are often sensitive to slight wording changes in a prompt (ex: "fair" instead of "just"), and may produce different outputs for the same prompt over time.

Prompt engineering refers to techniques for getting generative AI to produce the output a user wants. Some general recommendations include:

  • using simple and clear language
  • including examples of the desired type of response or format for outputs
  • including context in the prompt
  • refining and iterating in response to outputs
  • avoiding prompts that may generate harmful or inappropriate content

Guidance for Generative AI in Education and Research. UNESCO, 2023.

You can try crafting prompts that include context and specificity by using the PREP framework.

  • Prompt - the specific output desired (ex: an email, an outline, etc)
  • Role - context about the role of the person using the output (ex: a student, a faculty member, etc.)
  • Explicit Instructions - details about the desired output, including examples
  • Parameters - details about any requirements for length, tone, etc.
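As an illustration, the four PREP elements can be combined into a single prompt. The scenario and wording below are invented for this example, not an official template:

```python
# Hypothetical example: assembling a prompt from the four PREP elements.
prep = {
    "Prompt": "Write an email requesting an extension on a paper deadline.",
    "Role": "You are helping a first-year college student.",
    "Explicit Instructions": (
        "Address the professor by name, give a brief reason, "
        "and propose a new deadline."
    ),
    "Parameters": "Keep it under 150 words, with a polite, formal tone.",
}

# Join the labeled parts into one prompt, ready to paste into a chatbot.
full_prompt = "\n".join(f"{part}: {text}" for part, text in prep.items())
print(full_prompt)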

Fitzpatrick, Dan, Amanda Fox, and Brad Weinstein. The AI Classroom. 2023.

Learn Prompting - an open-source curriculum with levels from beginner to advanced to help you learn how to communicate with AI systems.

Guidelines for effective prompts are likely to shift as systems develop! Trying multiple prompts and making adjustments based on your outputs is a good first step.