
Generative AI

McQuade Library's guide to using generative artificial intelligence in your academic and creative work.

Using the CRISPE Framework for Prompt Engineering

Try structuring your prompts with the CRISPE method: 

  • Capacity and Role: Define your capacity and role (e.g., "I'm a junior marketing major") and/or the role you'd like the LLM to take on (e.g., "Pretend you're my professor").
  • Insight: Provide an appropriate level of context to get the best output from the LLM. Define the project you're working on, the topic you're researching, or the main goal you'd like to accomplish (e.g., "I'm working on a rebranding proposal for a local bicycle shop").
  • Statement: Identify exactly what you need help with, and be specific in what you ask the LLM to do (e.g., "Help me structure the proposal into key areas of brand identity").
  • Personality: Ask the LLM to adjust its tone for a certain audience (e.g., "Explain brand identity in a way that is professional yet laid back and avoids jargon").
  • Experiment: Since LLMs are chatbots, you can always ask them to regenerate content if it's missing the mark, or to give you fresh examples.

Not every interaction with an LLM needs to follow every one of these guidelines. Experiment, play around, and see what works and what doesn't!

Developed from: Dinkevych, D. (2023, May 29). CRISPE - ChatGPT prompt engineering framework. Medium.
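
If you prefer working with an LLM through code rather than a chat window, the same CRISPE structure applies. The sketch below assembles the five parts into a single prompt and sends it with the OpenAI Python SDK; the model name ("gpt-4o-mini") and the prompt wording are illustrative assumptions, not recommendations from this guide.

  # A CRISPE-structured prompt sent through the OpenAI Python SDK.
  from openai import OpenAI

  client = OpenAI()  # reads the OPENAI_API_KEY environment variable

  crispe_prompt = " ".join([
      "I'm a junior marketing major; pretend you're my professor.",        # Capacity and Role
      "I'm working on a rebranding proposal for a local bicycle shop.",    # Insight
      "Help me structure the proposal into key areas of brand identity.",  # Statement
      "Keep the tone professional yet laid back, and avoid jargon.",       # Personality
  ])

  response = client.chat.completions.create(
      model="gpt-4o-mini",  # assumed model name; use whatever you have access to
      messages=[{"role": "user", "content": crispe_prompt}],
  )
  print(response.choices[0].message.content)

The Experiment step maps naturally onto code: keep the conversation history, append the model's reply, and send a follow-up message asking for a revision or fresh examples.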

Things to Try

Analyze Text Samples

Need help parsing large amounts of information? Provide an LLM with text and ask it to analyze it. Upload data as a spreadsheet, .doc, or .pdf file, or paste in a few paragraphs, and ask questions about the information.

Say something like...

I did a survey on my college campus of student engagement in campus activities, clubs, and organizations. Can you help me analyze the survey responses in this spreadsheet? Start by breaking down the statistics for the data in column A.
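
The same request can be scripted if you're comfortable with a little Python. This is a minimal sketch assuming the OpenAI Python SDK and a small, hypothetical survey.csv file; it pastes the whole spreadsheet into the prompt as text, so for large files the built-in upload feature of a chat interface is a better fit.

  # Load a small survey export and paste it into the prompt as text.
  import pandas as pd
  from openai import OpenAI

  client = OpenAI()
  df = pd.read_csv("survey.csv")  # hypothetical survey export

  question = (
      "I did a survey on my college campus of student engagement in campus "
      "activities, clubs, and organizations. Can you help me analyze these "
      "survey responses? Start by breaking down the statistics for the data "
      "in the first column.\n\n"
      + df.to_csv(index=False)  # small data sets only: the whole table goes into the prompt
  )

  response = client.chat.completions.create(
      model="gpt-4o-mini",  # assumed model name
      messages=[{"role": "user", "content": question}],
  )
  print(response.choices[0].message.content)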

Thought Partner

Need help choosing or narrowing down a research topic? Instruct an LLM to ask you questions that help you home in on a topic that interests you and fits the scope of your project.

Say something like...

I'm a college sophomore. I need to write a paper for my Social Justice class, but I'm having trouble choosing a topic I'm interested in. The paper should be related to social services. Can you ask me one question at a time to help me choose a specific topic?

Role Play

Need to practice for a job interview, presentation, or other important event? Have an LLM play the role of an interviewer, professor, presentation audience member, etc. to give you practice. 

Say something like...

I'm a college senior presenting a poster at my school's student research conference. I'm presenting on my capstone project investigating the barriers to entry for underrepresented groups in STEM fields. My major finding was that mentorship programs significantly improve retention rates among minority students pursuing STEM degrees. Can you help me anticipate questions people might ask? Play the role of someone viewing my poster: ask me a question, I'll answer it, and then give me feedback on my response.
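
Role play also lends itself to a short script, because you can keep the whole conversation history in a loop so the LLM remembers its role from turn to turn. Below is a minimal sketch, again assuming the OpenAI Python SDK and an illustrative model name; the same loop structure works for the thought-partner exercise above.

  # A multi-turn role-play session: the message history accumulates so the
  # model remembers its role and your previous answers.
  from openai import OpenAI

  client = OpenAI()
  messages = [{
      "role": "user",
      "content": (
          "Play the role of someone viewing my research poster on barriers "
          "to entry for underrepresented groups in STEM. Ask me one "
          "question, wait for my answer, then give me feedback on it."
      ),
  }]

  while True:
      reply = client.chat.completions.create(
          model="gpt-4o-mini",  # assumed model name
          messages=messages,
      ).choices[0].message.content
      print("Poster viewer:", reply)
      messages.append({"role": "assistant", "content": reply})
      answer = input("You (type 'quit' to stop): ")
      if answer.strip().lower() == "quit":
          break
      messages.append({"role": "user", "content": answer})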

Preventing Hallucinations in Large Language Models

Large language models (LLMs) are trained on vast amounts of text data to understand and generate human language, and they often take the form of conversational chatbots such as ChatGPT.

LLMs are powerful tools, but they have limitations. Users should understand what these tools can and cannot do in order to get the best results. If you ask an LLM to do something it cannot do, it might tell you it doesn't know the answer, but it's just as likely to hallucinate.

What are hallucinations? Simply put, when a generative AI tool hallucinates, it makes up information that is false, misleading, or nonsensical. It presents this information as fact, so sometimes users don't realize that the output was a hallucination. You can reduce the risk of hallucinations by understanding LLMs' capabilities and following these guidelines:

  • Don't ask an LLM to cite the source of information it has shared with you. Most LLMs cannot access internet sources in real time, and asking one to share the source of the information it's drawing on may cause a hallucination.*
  • Don't ask an LLM how it works. LLMs do not have metacognition, meaning they have no understanding of how they work. In fact, LLMs frequently hallucinate their own capabilities. 
  • Don't use an LLM as a search engine. Even AIs that are positioned as search engines tend to make mistakes and misattribute sources.
  • Don't ask an LLM to do anything other than produce text or code. LLMs are trained to write sentences, not to know facts.
  • Don't ask an LLM to provide information that you can't independently confirm or deny is true.

Be aware that even when you follow the guidelines above, LLMs may hallucinate, make coding mistakes, or provide biased information. They present hallucinated responses with the same confidence as factual ones, and they are often more likely to give you incorrect information than to tell you they don't know the answer. Always be critical of information you receive from an LLM.

*Some LLMs, such as Perplexity, are capable of providing links to web sources.

Developed from: Tatarian, A., & Scudder, P. (2024, June 3). Beyond science fiction: AI's past, present, & impact on libraries & education [Conference presentation]. 2024 Joint ACRL NEC - NELIG Conference, Worcester, MA, United States.

For questions or feedback, contact the McQuade Library.
Call us: 978-837-5177 | Text us: 978-228-2275 | Email us: mcquade@merrimack.edu