“The history of technology to date suggests that the greatest risks come not from technology itself, but from the people who control the technology using it to accumulate power and wealth.”
Seth Lazar, Jeremy Howard, and Arvind Narayanan, "Is Avoiding Extinction from AI Really an Urgent Priority?"
“[W]e have a tool that is capable of great benefit, but also of considerable harm, that is available to billions. The creators of these technologies are not going to be able to tell us how to maximize the gain while avoiding the risk, because they don’t know the answers themselves.”
Ethan Mollick, "The Best Available Human Standard"
As with any new and developing technology, experts and users are still working to understand the potential drawbacks, ethical concerns, environmental impacts, and societal implications of artificial intelligence. This page of the guide provides perspectives on the major concerns that have been identified, including:

- Limits of AI's effectiveness, "snake oil," and overhyped AI products
- Disinformation, misinformation, and bias
- AI and academic integrity
- Keeping up with AI-related news
Large language models (LLMs) are trained on vast amounts of text data to understand and generate human language. They often take the form of conversational chatbots such as ChatGPT.
LLMs are powerful tools, but they have limitations. Users should understand what these tools can and cannot do in order to get the best results from them. If you ask an LLM to do something it cannot do, it might tell you it doesn't know the answer, but it is just as likely to hallucinate.
What are hallucinations? Put simply, when a generative AI tool hallucinates, it makes up information that is false, misleading, or nonsensical. It presents this information as fact, so users don't always realize that the output was a hallucination. You can reduce the likelihood of hallucinations by understanding LLMs' capabilities and adhering to the following guidelines:
Be aware that even when you follow the guidelines above, LLMs may hallucinate, make coding mistakes, or provide biased information. Because LLMs present hallucinated responses confidently, they are often more likely to give you incorrect information than to admit they don't know the answer to a question. Always be critical of information you receive from an LLM.
*Some LLMs, such as Perplexity, are capable of providing links to web sources.