2 - Assess Content: Assessing AI-Based Tools for Accuracy
Introduction: analyzing AI-generated information
Although many responses produced by AI text generators are accurate, these tools also frequently generate misinformation, and an answer is often a mixture of truth and fiction. If you use AI-generated text for research, it is important to be able to verify its outputs. Many of the skills you already use to fact-check and think critically about human-written sources still apply, but some of them have to change. For instance, we can't check the information by evaluating the credibility of the source or the author, as we usually do; we have to use other methods, like lateral reading, which we'll explain below.
Remember: the AI is producing what it calculates to be the most likely sequence of words in response to your prompt. That does not mean it is giving you a definitive answer! When you choose to use AI, treat it as a starting point, not an end point. Being able to critically analyze the outputs AI gives you will be an increasingly crucial skill throughout your studies and in your life after graduation.
When AI gets it wrong
As of summer 2024, a typical AI model does not assess whether the information it provides is correct. When it receives a prompt, its goal is to generate the string of words it deems most likely to answer that prompt. Sometimes this produces a correct answer, and sometimes it doesn't – and the AI cannot interpret or distinguish between the two. It's up to you to make that distinction.
AI can be wrong in multiple ways:
- It can give the wrong answer
- It can omit information by mistake
- It can make up completely fake people, events, and articles
- It can mix truth and fiction
Explore each section below to learn more.
It can give a wrong or misleading answer
It can make up false information
It cannot accurately produce its sources
It can interpret your prompts in an unexpected way
Lateral reading: your #1 analysis tool
If you cannot take AI-cited sources at face value, and neither you nor the AI's programmers can determine where the information comes from, how can you assess the validity of what the AI is telling you? This is where the most important analysis method available to you comes in: lateral reading. Lateral reading means leaving the AI output and applying fact-checking techniques in other sources to evaluate what the AI has produced in response to your prompt. You can think of this as “tabbed reading”: moving laterally away from the AI's answer to sources in other tabs, rather than proceeding “vertically” down the page on the strength of the AI output alone.
Watch: how to read laterally
Watch: how lateral reading helps you sort fact from fiction
What does this process look like specifically with AI-based tools? Learn more in the sections below.
Lateral reading and AI
Instructions: tackle an AI fact-check
Beyond fact-checking
Instructions: go beyond fact checking
Example: let's check an AI-generated response!
Check out the videos below to see these lateral reading strategies in action.
For text and links:
And for scholarly sources:
Additional resources (optional reading)
Explore these additional optional resources for more information on different topics mentioned on this page, as well as references for the content.
Now that you know how to assess the accuracy of AI-generated work, continue on to the next page of this module to learn how to cite it in your academic work!
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.