It can be helpful to clarify your reasons and goals for using generative AI, while being mindful of potential costs or harms.
You can ask some or all of the following questions:
It can also help to learn more about the datasets, development, and design goals of these tools in order to understand whether they will meet your needs. This can be challenging, as many projects lack transparency around these issues.
Generative AI may create plausible-seeming outputs that are entirely inaccurate, or include factual errors in otherwise reasonable responses. There is often no obvious indication that a tool or platform has made a mistake, and systems vary in how they respond when users point out errors.
In AI research tools, this most commonly appears as "fake citations": references to works that do not exist. AI tools may also produce inaccurate summaries or analyses of texts by including information that is not in the original work, or by omitting or misrepresenting information that is.
Cross-check any information you get from a generative AI tool against external sources. Don't rely on sources mentioned by a generative AI tool without accessing those sources directly, and verify summaries by consulting the original source(s).
Generative AI can be used to create misinformation and disinformation at scale. It can be difficult to determine whether content has been created or manipulated by people using generative AI. Practice critical information literacy processes like SIFT:
Stop
Investigate the Source
Find better coverage
Trace claims, quotes, and media back to the original context
SIFT was created by Mike Caulfield. Clark College Librarians have created a LibGuide, Evaluating Information: SIFT, that illustrates these moves.
Many generative AI tools collect your chat/prompt history by default and use this data to further train their models. Your input may also be reviewed by company employees for quality control or abuse prevention.
Data privacy and security practices vary widely across tools, and there may be different protections for enterprise-level models. If you have questions about a tool Amherst subscribes to, you can ask our campus IT department.
As a rule of thumb, it's best not to share anything with a generative AI tool that you would not want to become public, or that you do not want to see "remixed" by a generative AI application.
There are also unresolved questions related to authorship and accountability when using generative AI tools.
Some basic guidelines for AI-augmented authorship are to be:
While specific policies will vary, Boston College Libraries has summarized some key elements of most AI publishing policies:
When writing an article or conducting peer review, keep in mind that uploading the manuscript to a generative AI website exposes the text to a third party and risks breaching confidentiality, since many AI platforms use user input to train and improve their models.
Communities and organizations are working to develop design protocols, community agreements, and other frameworks to support ethical development and use of generative AI (and AI generally). This is particularly important for historically marginalized groups, given the risks of bias, misrepresentation, erasure, extraction, and exploitation in AI projects.
Asking critical questions about generative AI can uncover harms and illuminate power dynamics in the design, use, and assessment of tools and platforms. You can start with technoskeptical questions like the following:
You can also ask questions about the design and development process, including the specific decisions that designers have made when selecting data sets, training models, and evaluating outputs.
Mozilla Foundation provides an overview of this process as well as critical questions:
Stefan Baack, "The human decisions that shape generative AI: Who is accountable for what?" Mozilla Foundation Insights Blog, Aug. 2, 2023.