Hallucination
Definition
When a language model generates confident, fluent text that is factually incorrect, fabricated, or contradictory to the source. A fundamental challenge caused by models optimizing for plausible token sequences rather than factual accuracy.
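A toy illustration of catching the problem: the heuristic below flags answer sentences that share almost no content words with a source passage. The function names and the overlap threshold are illustrative assumptions; production systems typically use an entailment model or an LLM judge instead.

```python
# Illustrative heuristic: flag answer sentences whose content words
# never appear in the source passage. A real system would use an NLI
# model or an LLM judge; this sketch only shows the idea.
import re

STOPWORDS = {"the", "a", "an", "is", "are", "was", "were", "of", "in", "to", "and"}

def content_words(text: str) -> set[str]:
    return {w for w in re.findall(r"[a-z']+", text.lower()) if w not in STOPWORDS}

def flag_unsupported(answer: str, source: str, min_overlap: float = 0.3) -> list[str]:
    """Return answer sentences poorly supported by the source (threshold is an assumption)."""
    source_vocab = content_words(source)
    flagged = []
    for sentence in re.split(r"(?<=[.!?])\s+", answer.strip()):
        words = content_words(sentence)
        if not words:
            continue
        overlap = len(words & source_vocab) / len(words)
        if overlap < min_overlap:
            flagged.append(sentence)
    return flagged

source = "The Eiffel Tower was completed in 1889 for the World's Fair in Paris."
answer = "The Eiffel Tower was completed in 1889. It was designed by Leonardo da Vinci."
print(flag_unsupported(answer, source))  # -> ['It was designed by Leonardo da Vinci.']
```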
Related Terms
RAG (Retrieval-Augmented Generation)
An architecture that enhances LLM outputs by first retrieving relevant documents from a knowledge base (via vector search) and injecting them into the prompt. Grounds the model in external, up-to-date facts without requiring retraining.
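A minimal sketch of the retrieve-augment-generate loop, using a toy bag-of-words similarity in place of a real embedding model and a plain Python list in place of a vector database (both simplifying assumptions):

```python
# Minimal RAG sketch: retrieve the most relevant document, then inject
# it into the prompt so the model answers from grounded context.
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy 'embedding': a bag-of-words count vector (assumption, not a real model)."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

documents = [
    "The Eiffel Tower was completed in 1889.",
    "Python 3.12 removed the distutils module.",
]

def answer_with_rag(question: str, k: int = 1) -> str:
    # 1. Retrieve: rank documents by similarity to the question.
    q_vec = embed(question)
    ranked = sorted(documents, key=lambda d: cosine(q_vec, embed(d)), reverse=True)
    # 2. Augment: inject the top-k documents into the prompt.
    context = "\n".join(ranked[:k])
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
    # 3. Generate: hand the grounded prompt to the LLM (model call omitted here).
    return prompt

print(answer_with_rag("When was the Eiffel Tower completed?"))
```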
Prompt Engineering
The practice of crafting inputs to elicit optimal model outputs. Encompasses techniques like chain-of-thought, few-shot examples, role prompting, structured output instructions, and system prompt design.
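A sketch of assembling a prompt that combines three of those techniques, role prompting, few-shot examples, and a chain-of-thought instruction; the task and wording are illustrative assumptions:

```python
# Few-shot prompting with chain-of-thought, assembled as a plain string.
# The examples demonstrate the reasoning pattern the model should imitate.
FEW_SHOT_EXAMPLES = [
    ("Is 17 prime?",
     "17 is not divisible by 2, 3, or any integer up to its square root. Answer: yes."),
    ("Is 21 prime?",
     "21 = 3 * 7, so it has divisors other than 1 and itself. Answer: no."),
]

def build_prompt(question: str) -> str:
    parts = ["You are a careful math assistant. Reason step by step."]  # role + CoT instruction
    for q, a in FEW_SHOT_EXAMPLES:  # few-shot demonstrations
        parts.append(f"Q: {q}\nA: {a}")
    parts.append(f"Q: {question}\nA:")  # the actual query, continuing the pattern
    return "\n\n".join(parts)

print(build_prompt("Is 29 prime?"))
```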