Claude University: Hallucinations — Why AI Makes Things Up

Hallucinations occur when an AI model generates confident-sounding but false information.

Why It Happens

  • LLMs are trained to predict the next likely token; truth is not the objective (see the sketch after this list)
  • Training data has gaps, errors, and contradictions
  • Models extrapolate beyond their knowledge
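
As a toy illustration of the first point: the snippet below hand-codes a made-up next-token distribution (the prompt and all probabilities are invented for illustration, not taken from any real model). It shows how decoding picks the most likely continuation, which need not be the true one.

```python
# Toy illustration: a language model scores continuations by likelihood,
# not by truth. All probabilities below are hypothetical.

# Imagined next-token distribution after the prompt
# "The capital of Australia is". In web text, "Sydney" often appears
# near "Australia", so a model may rank it highly even though it is wrong.
next_token_probs = {
    "Sydney": 0.46,    # frequent in training data, but incorrect
    "Canberra": 0.41,  # correct, but less common in casual text
    "Melbourne": 0.13,
}

# Greedy decoding simply takes the most likely token.
prediction = max(next_token_probs, key=next_token_probs.get)
print(prediction)  # -> "Sydney": confident-sounding, and false
```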

How to Reduce Hallucinations with Claude

  • Provide source documents via RAG
  • Ask Claude to cite its sources
  • Ask Claude to say "I don't know" when uncertain
  • Use a lower temperature for factual tasks (all four tips are combined in the sketch below)
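
Here is a minimal sketch of how those four tips can combine in a single call using the Anthropic Python SDK. The model name, document text, and question are placeholders, not values from this lesson; substitute your own.

```python
# pip install anthropic
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

source_document = "..."  # e.g., a passage retrieved by your RAG pipeline

response = client.messages.create(
    model="claude-sonnet-4-20250514",  # placeholder; use a current model
    max_tokens=512,
    temperature=0.0,  # lower temperature for factual tasks
    system=(
        "Answer using only the provided document. "
        "Cite the passage that supports each claim. "
        "If the document does not contain the answer, say \"I don't know\"."
    ),
    messages=[
        {
            "role": "user",
            "content": f"<document>\n{source_document}\n</document>\n\n"
                       f"Question: What does the document say about X?",
        }
    ],
)

print(response.content[0].text)
```

Putting the retrieved text inside explicit tags and restricting answers to it is what grounds the response; the low temperature mainly makes the output more deterministic.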

