AI for Faculty

Information on this guide has been modified from the University of Arizona Libraries, licensed under a Creative Commons Attribution 4.0 International License.

Fact-checking is always needed

AI "hallucination"

"AI hallucination" is a term that has already entered public discourse. It refers to fictitious information an AI system generates and presents as part of its results. Because these systems are probabilistic rather than deterministic, AI tools may sometimes pass off made-up content as fact. As the systems grow more sophisticated, hallucinations become less frequent, but they still occur when working with large language models (LLMs).

One area where hallucinations are especially alarming is source citations. This is one reason LLMs do not usually make good independent research agents. In addition to checking the accuracy and validity of the information an LLM generates, it is essential to verify that the sources it cites are real, relevant, and correct.

When incorporating AI literacy into your courses, remind students to watch for false or incorrect citations and to verify any information they receive from an LLM such as ChatGPT or Copilot.


Reducing inaccuracies and falsehoods

There has been significant progress in making these systems more truthful by grounding them in external sources of knowledge. Some examples are Microsoft Copilot, Perplexity AI, and Gemini, which use internet search results to ground their answers. However, the internet sources used could themselves contain misinformation or disinformation. To reduce the chance of receiving erroneous or misleading information, it is good practice to examine the sources cited in the generated content.
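To see roughly what "grounding" means in practice, here is a minimal sketch of the idea: retrieve relevant source passages first, then ask the model to answer using only those passages and to cite them. This is a toy illustration, not how Copilot, Perplexity, or Gemini actually work; the document store, URLs, and keyword scoring below are all hypothetical stand-ins for real web search and retrieval.

```python
# Toy illustration of grounding: retrieve sources, then build a prompt
# that instructs the model to answer only from those sources.
# All URLs and documents here are made up for demonstration.

DOCUMENTS = {
    "https://example.edu/guide1": "Fact-checking AI output is essential because language models can hallucinate citations.",
    "https://example.edu/guide2": "Retrieval-augmented generation grounds answers in external sources to reduce hallucination.",
    "https://example.edu/guide3": "Peer-reviewed databases such as Semantic Scholar index millions of papers.",
}

def retrieve(query: str, k: int = 2) -> list:
    """Return up to k documents ranked by naive keyword overlap with the query."""
    terms = set(query.lower().split())
    scored = [
        (len(terms & set(text.lower().split())), url, text)
        for url, text in DOCUMENTS.items()
    ]
    scored.sort(reverse=True)
    return [(url, text) for score, url, text in scored[:k] if score > 0]

def build_grounded_prompt(question: str) -> str:
    """Prepend retrieved snippets, tagged with their URLs, to the question,
    so the model's answer can cite sources the reader can then verify."""
    context = "\n".join(f"[{url}] {text}" for url, text in retrieve(question))
    return (
        "Answer using ONLY the sources below, and cite their URLs.\n"
        f"{context}\n\nQuestion: {question}"
    )

print(build_grounded_prompt("What can reduce hallucination?"))
```

The key point for fact-checking is the citation step: because the prompt carries the source URLs alongside the text, a reader can follow each citation back and confirm the claim, which is exactly the verification habit the sections above recommend.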

Scholarly sources as grounding
There are also systems that combine language models with scholarly sources. For example:

  • Elicit
    A research assistant using language models to automate parts of researchers’ workflows. Currently, the main workflow in Elicit is Literature Review. If you ask a question, Elicit will show relevant papers and summaries of key information about those papers in an easy-to-use table. 
  • Consensus

    A search engine that uses AI to search for and surface claims made in peer-reviewed research papers. Ask a plain English research question, and get word-for-word quotes from research papers related to your question. The source material used in Consensus comes from the Semantic Scholar database, which includes over 200M papers across all domains of science.