AI Hallucinations — When AI Confidently Lies

Hallucination is the tendency of LLMs to generate plausible-sounding but false information — stated with the same confidence as accurate facts.

Why Hallucinations Happen

  • LLMs are trained to produce fluent text, not verified facts
  • They have no internal fact-checking mechanism
  • They generalize patterns even when specific knowledge is absent

High-Risk Hallucination Domains

  • Legal citations (lawyers have been sanctioned for submitting AI-hallucinated case citations)
  • Medical dosages and drug interactions
  • Financial data and statistics
  • Historical facts and biographical details

Mitigation

Retrieval-augmented generation (RAG) substantially reduces hallucinations by grounding responses in retrieved, verified documents, though it does not eliminate them: the model can still misread or ignore the retrieved context.
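
Below is a minimal sketch of the RAG pattern in Python. The toy document store, the word-overlap retriever, and the call_llm hook mentioned in the comments are illustrative assumptions rather than any specific framework's API; a production system would use embedding-based search over a vetted corpus.

```python
# A minimal retrieval-augmented generation (RAG) sketch.
# The corpus, retriever, and prompt template are illustrative
# assumptions, not a specific framework's API.

DOCUMENTS = [
    "Aspirin is typically dosed at 325-650 mg every 4-6 hours for adults.",
    "The Peace of Westphalia was signed in 1648, ending the Thirty Years' War.",
    "Brown v. Board of Education (1954) held school segregation unconstitutional.",
]

def retrieve(query: str, k: int = 2) -> list[str]:
    """Rank documents by naive word overlap with the query
    (a stand-in for a real embedding-based retriever)."""
    query_words = set(query.lower().split())
    ranked = sorted(
        DOCUMENTS,
        key=lambda doc: len(query_words & set(doc.lower().split())),
        reverse=True,
    )
    return ranked[:k]

def build_grounded_prompt(query: str) -> str:
    """Inject retrieved passages so the model answers from evidence,
    and instruct it to refuse when the context is insufficient."""
    context = "\n".join(f"- {doc}" for doc in retrieve(query))
    return (
        "Answer using ONLY the context below. If the context does not "
        "contain the answer, reply \"I don't know.\"\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {query}\nAnswer:"
    )

if __name__ == "__main__":
    # Pass the prompt to whatever model you use, e.g. a hypothetical
    # call_llm(prompt). Here we just print the grounded prompt.
    print(build_grounded_prompt("When was the Peace of Westphalia signed?"))
```

The hallucination-mitigating move here is the instruction to answer only from the supplied context and to say "I don't know" otherwise; without that escape hatch, a model asked about missing facts will often fill the gap with fluent guesses.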



Reference:

Survey of LLM hallucinations: https://arxiv.org/abs/2305.11747
