
Generative AI

Welcome!

This guide provides a basic overview of generative AI, including ethical issues and responsible use, in order to help you make informed choices.

Generative AI tools are rapidly changing. We've included dates for materials and will keep updating as often as possible.

Keeping up on AI News

What is Generative AI, and How Does It Work?

Generative AI is a subfield of artificial intelligence that develops AI models to create new content, such as text, images, videos, audio, and code.

This technology relies on large datasets to “train” the models. During training, the model identifies patterns, associations, and characteristics within the training data to build statistical representations of the data.

When a user provides a prompt, the model activates the patterns learned from the training data to generate an output. It is a probabilistic model, in that it predicts the most likely next element (a word in a sentence, a pixel in an image) to produce outputs that match prompt instructions.
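The "most likely next element" idea can be sketched with a toy example. This is not a real model: the word pairs and probabilities below are made-up stand-ins for the statistical patterns a model learns from its training data.

```python
import random

# Hypothetical "learned patterns": how often each word followed a given
# two-word context in some imagined training data, as probabilities.
learned_patterns = {
    ("the", "cat"): {"sat": 0.5, "ran": 0.3, "slept": 0.2},
    ("cat", "sat"): {"on": 0.7, "down": 0.3},
}

def predict_next(context, patterns):
    """Pick the next word by sampling from the learned probabilities."""
    choices = patterns[context]
    words = list(choices)
    weights = list(choices.values())
    return random.choices(words, weights=weights)[0]

# Given the context "the cat", the model usually predicts "sat",
# but sometimes "ran" or "slept" — outputs are probabilistic, not fixed.
print(predict_next(("the", "cat"), learned_patterns))
```

Because the choice is sampled rather than fixed, the same prompt can yield different outputs on different runs, which is one reason generative AI systems behave less predictably than traditional software.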

Generative AI depends on human labor. Companies scraped the training data from the open Internet, including websites, social-media posts, and other online material, as well as pirated collections of copyrighted works. Data workers review and annotate data and model outputs, flagging inaccurate, offensive, or harmful responses. 

Important to Know

Accuracy & Bias

  • Generative AI models are trained on large datasets to learn patterns and build associations. Datasets, training, and prompting techniques all impact how accurate or inaccurate genAI models are in their outputs.
  • Generative AI may produce inaccurate information (often called “hallucinations”). Always verify information you get from generative AI against reliable sources.
  • Generative AI models have been shown to reflect or amplify social biases embedded in their training datasets related to race, gender, and other identities. 
    • Companies usually do not share exactly what datasets their models are trained on, making it difficult to assess potential sources of bias.

User Data & Privacy

  • Many generative AI platforms collect user data, including information you input into the system (“prompts”), and may use this data to further train their models. You will usually need to proactively opt out of data collection or training.
  • If you use a personal or free account, you may have little to no privacy and data protection. Check Amherst IT's tool ratings and College guidelines to mitigate privacy and data risks. Before you start using a tool, make sure you understand its privacy, security, and accessibility, along with associated risks.

Model Capability Shifts

  • Models are changing constantly. Experimenting with different prompts and evaluating the results will help you understand what model may work best for your needs.

Using Generative AI

Getting started with prompts

A prompt is a starting point or instruction you give to the model to generate specific content. It can be in the form of a statement or a question. Think of it as a first step to getting where you want to go.

Making good prompts

There are lots of different prompt techniques you can try, but here are some of the basic elements of a good prompt process:

  • using simple and clear language
  • including context or background information that's relevant to your desired output
  • being specific about the format, length, style, etc. of the output that you want
  • including examples of the desired type of response or format for outputs
  • avoiding prompts that may generate harmful or inappropriate content
  • refining and iterating in response to outputs
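The elements above can be combined into a single, structured prompt. The helper below is purely illustrative — the field names (`task`, `context`, `format_spec`, `example`) are our own labels, not part of any AI tool's interface.

```python
# Hypothetical helper: assembles the basic prompt elements (clear task,
# relevant context, format specification, optional example) into one prompt.
def build_prompt(task, context, format_spec, example=None):
    parts = [
        f"Task: {task}",
        f"Context: {context}",
        f"Desired format: {format_spec}",
    ]
    if example:
        parts.append(f"Example of the desired output: {example}")
    return "\n".join(parts)

prompt = build_prompt(
    task="Summarize the article below in plain language.",
    context="The audience is first-year college students.",
    format_spec="Three bullet points, each under 20 words.",
    example="- The study found that sleep affects memory.",
)
print(prompt)
```

Writing prompts this way makes it easy to change one element at a time — swap the context, tighten the format — and compare the outputs, which is exactly the refine-and-iterate process described above.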

Prompting is dynamic

Refining and iterating is important because prompting is a dynamic process. Systems may produce different outputs to the same prompt over time, or be sensitive to slight wording changes in prompts (e.g., "fair" instead of "just").

Important: Generative AI tools may operate in a conversational manner, but models do not "think" the way a person does. If you ask a generative AI model to explain itself, it will simply produce an output that sounds like a plausible explanation for its behavior. If a tool is not providing good outputs, it's best to start over with a different prompt strategy or consider whether the tool is a good fit for your task.