
Generative AI

McQuade Library's guide to using generative artificial intelligence in your academic and creative work.

Background

As with any new and developing technology, experts and users are still working to understand potential drawbacks, ethical concerns, environmental impacts, and societal implications of artificial intelligence. This page of the guide provides perspectives on the major concerns that have been identified.

Information Literacy

This section collects resources on the following topics:

  • Limits of AI's effectiveness, "snake oil," and overhyped AI products
  • Disinformation, misinformation, and bias
  • AI and academic integrity
  • Keeping up with AI-related news

Preventing Hallucinations in Large Language Models

Large language models (LLMs) are trained on vast amounts of text data to understand and generate human language. They often take the form of conversational chatbots such as ChatGPT.
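
To make this concrete, here is a minimal sketch of what one turn of a conversation with an LLM looks like in code, using the openai Python package as one example. The model name and prompts are illustrative, and an OPENAI_API_KEY environment variable is assumed; other providers offer similar chat APIs.

    # Minimal sketch: one turn of a conversation with an LLM.
    # Assumes the openai package (pip install openai) and an
    # OPENAI_API_KEY environment variable; the model name is an example.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": "Explain what a large language model is."},
        ],
    )

    # The reply is text generated from the prompt, not a fact lookup --
    # which is why outputs must be verified.
    print(response.choices[0].message.content)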

LLMs are powerful tools, but they have limitations. Users should understand what LLMs can and cannot do in order to get the best results. If you ask an LLM to do something it cannot do, it might tell you it doesn't know the answer to your question, but it's just as likely to hallucinate.

What are hallucinations? To put it simply, when a generative AI tool hallucinates, it makes up information that is false, misleading, or nonsensical. It presents this information as fact, so sometimes users don't realize that the output was a hallucination. You can avoid hallucinations by understanding LLMs' capabilities and adhering to the following guidelines:

  • Don't ask an LLM to cite the source of information it has shared with you. Most LLMs are unable to access internet sources in real time, so asking an LLM to share the source of information it's drawing on may cause a hallucination.*
  • Don't ask an LLM how it works. LLMs do not have metacognition, meaning they have no understanding of how they work. In fact, LLMs frequently hallucinate their own capabilities. 
  • Don't use an LLM as a search engine. Even AIs that are positioned as search engines tend to make mistakes and misattribute sources.
  • Don't ask an LLM to do anything other than produce text or code. LLMs are trained to write sentences, not to know facts.
  • Don't ask an LLM to provide information whose accuracy you can't independently verify; one way to check a cited link is sketched after this list.
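
One practical way to apply the last two guidelines is to treat any link or citation an LLM produces as unverified until you have checked it yourself. The sketch below (using a hypothetical URL and the third-party requests package) only confirms that a cited URL resolves; whether the page actually supports the claim still requires reading it.

    # Hedged sketch: sanity-check that a URL an LLM cited actually
    # resolves. Requires the requests package (pip install requests).
    import requests

    def url_resolves(url: str, timeout: float = 10.0) -> bool:
        """Return True if the URL answers with a non-error HTTP status."""
        try:
            resp = requests.head(url, allow_redirects=True, timeout=timeout)
            if resp.status_code == 405:  # some servers reject HEAD; retry with GET
                resp = requests.get(url, stream=True, timeout=timeout)
            return resp.status_code < 400
        except requests.RequestException:
            return False

    # Hypothetical example of a citation pasted from an LLM answer.
    cited = "https://example.com/article-the-llm-cited"
    print(f"{cited} resolves: {url_resolves(cited)}")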

Be aware that even when you follow the guidelines above, LLMs may hallucinate, make coding mistakes, or provide biased information. They present hallucinated responses as confidently as factual ones, and they are often more likely to give an incorrect answer than to tell you they don't know. Always be critical of information you receive from an LLM.

*Some LLMs, such as Perplexity, are capable of providing links to web sources.

Developed from: Tatarian, A., & Scudder, P. (2024, June 3). Beyond science fiction: AI's past, present, & impact on libraries & education [Conference presentation]. 2024 Joint ACRL NEC-NELIG Conference, Worcester, MA, United States.

For questions or feedback contact the McQuade Library
Call us: 978-837-5177 | Text us: 978-228-2275 | Email us: mcquade@merrimack.edu