
Generative AI

Welcome!

This guide provides a basic overview of generative AI, including ethical issues and responsible use, in order to help you make informed choices about using AI tools.

Generative AI tools are rapidly changing. We've included dates for materials and will keep updating as often as possible.


What is Generative AI?

"Generative A.I.: Technology that creates content — including text, images, video and computer code — by identifying patterns in large quantities of training data, and then creating original material that has similar characteristics." (“Artificial Intelligence Glossary: Neural Networks and Other Terms Explained")

Generative AI refers to artificial intelligence models that can create new content, such as text, images, code, audio, or video.

Users provide "prompts," which are instructions that tell the AI what kind of content to generate.

How does it work?

Generative AI models are developed through a "training" process using large datasets. During training, the AI identifies patterns, associations, and characteristics within the training data to build statistical representations.

When a user provides a prompt, generative AI uses the patterns it has learned from the training data to create new content. It predicts the next likely element (like a word in a sentence or a pixel in an image) to produce outputs that match the user’s instructions.
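
To make this concrete, here is a toy sketch in Python. It is nothing like a real generative AI model internally, but it illustrates the same basic idea: find patterns in a small amount of "training" text, then generate new text by repeatedly picking a likely next word. The corpus and the prompt word are invented for illustration.

```python
import random
from collections import Counter, defaultdict

# A tiny, made-up "training" corpus (periods are separated so they count as words).
training_text = (
    "the library is open today . the library is closed on holidays . "
    "the archive is open by appointment ."
)

# "Training": count which word tends to follow which word.
words = training_text.split()
follow_counts = defaultdict(Counter)
for current_word, next_word in zip(words, words[1:]):
    follow_counts[current_word][next_word] += 1

# "Generation": starting from a prompt word, repeatedly sample a likely next word.
def generate(start_word, length=8):
    output = [start_word]
    for _ in range(length):
        options = follow_counts.get(output[-1])
        if not options:
            break
        next_words = list(options.keys())
        weights = list(options.values())
        output.append(random.choices(next_words, weights=weights)[0])
    return " ".join(output)

print(generate("the"))  # e.g. "the library is open by appointment ."
```

Real generative AI models learn from vastly larger datasets and use far more sophisticated statistical representations than simple word counts, but the loop of "predict a likely next element, add it, repeat" is the same in spirit.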

Important to Know

Accuracy & Bias

  • Generative AI models are trained on large datasets in order to learn patterns and build associations. The datasets, the training process, and prompting techniques all affect how accurate a model's outputs are.
  • Generative AI may produce inaccurate information (often called “hallucinations”). You should verify any information you get from generative AI tools.
  • Generative AI models have been shown to reflect or amplify social biases related to race, gender, and other identities that are embedded in their training datasets.
    • Companies often will not share exactly what datasets their models are trained on, making it difficult to assess potential sources of bias.

User Data & Privacy

  • Many generative AI platforms collect user data, including the information you enter into the system (“prompts”), and may use this data to further train their models. You will usually need to opt out of data collection or training.
  • If you use a personal or free account, you may have little to no privacy and data protection.
    • Check Amherst IT's tool ratings and College guidelines to mitigate privacy and data risks.

Model capability shifts

  • Models are changing constantly. Experimenting with different prompt techniques and evaluating the results will help you understand what works best for your needs.

Generative AI Tools

Generative AI applications can generate text, images, video, audio, and code, and support multimodal interactions. Some popular examples include:

General purpose

  • ChatGPT
  • Google Gemini
  • Claude 3
  • Microsoft Copilot

Image generation

  • Midjourney
  • Adobe Firefly
  • DALL-E 2

Coding assistance

  • GitHub Copilot

Many tools now allow multimodal interaction through text, voice, images, or video.

Keeping current on tools

Generative AI tools are proliferating rapidly. In the academic space, Ithaka S+R has created a Generative AI Product Tracker.

It's important to review the privacy, security, safety, and ethical aspects of any generative AI platform you're using.

Using Generative AI

Privacy, security, accessibility

Generative AI tools differ in their security, privacy, and accessibility features. Before you start using a tool, make sure you understand how it handles privacy, security, and accessibility, along with the associated risks.

Check Amherst IT's generative AI tool ratings and review the recommendations on that page in order to mitigate risks to yourself and others.


Getting started with prompts

A prompt is a starting point or instruction you give to the model to generate specific content. It can be in the form of a statement or a question. Think of it as a first step to getting where you want to go.

Making good prompts

There are many different prompt types you can try, but here are some basic elements of a good prompting process (a sample prompt that combines them follows this list):

  • using simple and clear language
  • including context or background information that's relevant to your desired output
  • being specific about the format, length, style, etc. of the output that you want
  • including examples of the desired type of response or format for outputs
  • avoiding prompts that may generate harmful or inappropriate content
  • refining and iterating in response to outputs
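
As a concrete, entirely hypothetical illustration, the sketch below shows a prompt that combines several of these elements: plain language, relevant context, a specific format request, and an example of the desired output. It is written as a Python string only so it could be pasted into a script; you could just as easily type the same text into a chat interface.

```python
# A hypothetical prompt combining the elements above: clear language, relevant
# context, a specific requested format, and an example of the desired output.
prompt = """You are helping a first-year college student plan a short research project.

Context: The student is writing a 5-page paper on the history of their campus
library and has access to the college archives.

Task: Suggest three possible research questions.

Format: A numbered list. Each item should be one sentence, followed by a brief
note on which archival sources might help answer it.

Example of the style I want:
1. How did the library's collection change after 1950? (Sources: annual
   reports, acquisition records)
"""

print(prompt)
```

Notice how the example at the end does as much work as the instructions: it shows the tool the shape of the answer you want.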

Prompting is dynamic

Refining and iterating is important because prompting is a dynamic process. Systems may produce different outputs in response to the same prompt over time, or be sensitive to slight wording changes in prompts (e.g., "fair" instead of "just").
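
One way to see this variability, independent of any particular product, is the toy sketch below: generation involves random sampling over likely continuations, so even an identical prompt can produce different outputs on different runs. (The candidate words and weights are invented for illustration; commercial systems add another layer of change because the underlying models are updated over time.)

```python
import random

# Toy illustration: the same "prompt" can lead to different outputs because
# generation samples from a distribution of likely continuations.
# The candidate words and weights below are invented for illustration.
likely_next_words = {"open": 0.5, "closed": 0.3, "busy": 0.2}

for _ in range(3):
    next_word = random.choices(
        list(likely_next_words.keys()),
        weights=list(likely_next_words.values()),
    )[0]
    print("The library is", next_word)
```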

Important: Generative AI tools may operate in a conversational manner, but they do not "think" the way a person does. If you ask a GenAI tool to explain its reasoning, it cannot genuinely do so; it will instead produce a plausible-sounding explanation. If you find that a tool is not providing good outputs, it's best to start over with a different prompt strategy.