AI: Student Guide to ChatGPT, CoPilot and Other AI Resources

Guide content supports the teaching and research goals of multiple departments on campus. It represents a non-exhaustive selection of essential resources and tools for engaging a wide range of backgrounds and viewpoints.

Fact-checking is always needed

AI "hallucination"
"Hallucination" is the accepted term in the AI field for the way these systems sometimes "make stuff up." It happens because they are probabilistic, not deterministic: they generate the most statistically likely next words rather than retrieving verified facts.
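
To see what "probabilistic" means in practice, here is a minimal sketch in Python. The word probabilities are invented numbers for a hypothetical prompt, not output from any real model; the point is only that sampling from a probability distribution can return a different answer on each run, including a wrong one.

    import random

    # Toy next-word distribution for a hypothetical prompt such as
    # "The capital of Australia is". The probabilities are made up
    # for illustration; a real model learns them from training data.
    next_word_probs = {
        "Canberra": 0.70,    # correct
        "Sydney": 0.20,      # plausible-sounding but wrong
        "Melbourne": 0.10,   # plausible-sounding but wrong
    }

    # Because the next word is sampled, identical prompts can yield
    # different answers from run to run, some of them confidently wrong.
    words = list(next_word_probs)
    weights = list(next_word_probs.values())
    for run in range(5):
        choice = random.choices(words, weights=weights, k=1)[0]
        print(f"Run {run + 1}: {choice}")

Real language models sample over tens of thousands of possible tokens rather than three words, but the same sampling step is one reason fact-checking their output matters.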

Fact-checking AI output is crucial to cut down on these errors, especially as AI becomes more integrated into important decision-making processes. AI systems often sound confident yet still get things wrong, usually because they lack the right information, reflect biases in their training data, or simply make guesses based on patterns.

ChatGPT often makes up sources that don't exist

One area where ChatGPT frequently produces fictional answers is when it is asked to create a list of sources. Since we've had many questions from students about this, we offer this FAQ: I can’t find the citations that ChatGPT gave me. What should I do?
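
One practical way to fact-check a generated citation is to see whether its DOI is actually registered. The sketch below uses Python and the public Crossref API (api.crossref.org); the doi_exists helper name is our own, Crossref covers most journal articles but not every DOI registry, and a registered DOI still doesn't prove the authors and title in the citation match the real record, so look up the work itself before relying on it.

    import urllib.error
    import urllib.request

    def doi_exists(doi: str) -> bool:
        """Return True if the DOI is registered with Crossref."""
        url = f"https://api.crossref.org/works/{doi}"
        try:
            with urllib.request.urlopen(url, timeout=10) as response:
                return response.status == 200
        except urllib.error.HTTPError:
            # Crossref answers 404 for DOIs it has never seen,
            # which is a strong hint the citation was invented.
            return False

    # Paste the DOI from a suspicious citation here:
    print(doi_exists("10.1000/example-doi"))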

Causes of AI Hallucinations

  • Insufficient or low-quality training data
  • Incorrect assumptions
  • Biases in training data
  • Lack of real-world understanding

Consequences of AI Hallucinations

  • Misinformation
  • Damage to reputation
  • Legal and ethical implications

Mitigating AI Hallucinations

  • Fact-checking
  • Improving training data
  • Developing better AI models
  • User awareness
"Vegetative Electron Microscopy"???

For an entertaining, real-world example of low-quality training data, be sure to read "A weird phrase is plaguing scientific papers - and we traced it back to a glitch in AI training data," from The Conversation, April 15, 2025.