Overview
This survey provides a systematic review of Retrieval-Augmented Generation (RAG) for LLMs, covering the paradigm's evolution, core components, and advanced techniques. Essential reading for understanding how external knowledge enhances language models.
RAG Paradigm
Basic Pipeline
Split documents into chunks, embed and index them, retrieve the top-k chunks most similar to the query, and condition the generator on the retrieved context.
Evolution
The survey traces three paradigms: Naive RAG (simple retrieve-then-read), Advanced RAG (added pre- and post-retrieval optimization), and Modular RAG (composable, reconfigurable retrieval and generation modules).
Retrieval Enhancements
Pre-Retrieval
Improve what reaches the retriever: query rewriting, query expansion, and routing queries to the appropriate index.
Retrieval Methods
Sparse lexical retrieval (e.g., BM25), dense embedding-based retrieval, and hybrids that combine both.
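Sparse retrieval is often the BM25 baseline. A compact Okapi BM25 scorer, stdlib only, with the standard `k1`/`b` defaults (the toy corpus is illustrative):

```python
# Okapi BM25 ranking: idf-weighted term frequency with length normalization.
import math
from collections import Counter

def bm25_rank(query: str, docs: list[str], k1: float = 1.5, b: float = 0.75) -> list[str]:
    toks = [d.lower().split() for d in docs]
    n = len(docs)
    avgdl = sum(len(t) for t in toks) / n
    df = Counter()                       # document frequency per term
    for t in toks:
        df.update(set(t))

    def score(tokens: list[str]) -> float:
        tf = Counter(tokens)
        s = 0.0
        for term in set(query.lower().split()):
            if term not in tf:
                continue
            idf = math.log((n - df[term] + 0.5) / (df[term] + 0.5) + 1.0)
            s += idf * tf[term] * (k1 + 1) / (
                tf[term] + k1 * (1 - b + b * len(tokens) / avgdl)
            )
        return s

    return [d for d, _ in sorted(zip(docs, map(score, toks)),
                                 key=lambda p: p[1], reverse=True)]

ranked = bm25_rank("bm25 sparse", [
    "sparse retrieval with bm25",
    "dense retrieval with embeddings",
    "cooking pasta",
])
```

Dense retrieval replaces the lexical match with embedding similarity; hybrid systems run both and merge the rankings.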
Post-Retrieval
Refine what reaches the generator: rerank retrieved chunks and compress context to fit the window and reduce noise.
Generation Strategies
Context Integration
Retrieved passages are placed in the prompt alongside the query; ordering matters, since models attend unevenly across long contexts.
Fusion Approaches
Merge results from multiple queries or retrievers into a single ranking, e.g., by reciprocal rank fusion.
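Reciprocal rank fusion (RRF) is a simple, score-free way to merge rankings: each document earns 1/(k + rank) from every list it appears in, and fused lists are sorted by the sum. A minimal sketch (k = 60 is the conventional default):

```python
# Reciprocal Rank Fusion: fused score = sum over lists of 1 / (k + rank).
from collections import defaultdict

def rrf(rankings: list[list[str]], k: int = 60) -> list[str]:
    scores: dict[str, float] = defaultdict(float)
    for ranking in rankings:
        for rank, doc in enumerate(ranking, start=1):
            scores[doc] += 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)
```

Because RRF uses only ranks, it can fuse retrievers whose raw scores are not comparable (e.g., BM25 with cosine similarity).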
Advanced Techniques
Iterative RAG
Multiple retrieval-generation cycles: each round's output refines the next retrieval query, which helps multi-hop questions that no single retrieval can answer.
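The loop can be sketched with stubs: `toy_retrieve` and `toy_generate` are hypothetical stand-ins for a real retriever and LLM (here hard-coded to a two-hop question), and only the iterate-until-answered control flow is the point.

```python
# Iterative RAG: keep retrieving until the model stops asking for more.
FACTS = {
    "capital of France": "The capital of France is Paris.",
    "population of Paris": "Paris has about 2.1 million residents.",
}

def toy_retrieve(query: str) -> str:
    return FACTS.get(query, "")

def toy_generate(question: str, context: list[str]) -> str:
    # A real LLM would decide this; the hops are hard-coded for the demo.
    if not any("Paris" in c for c in context):
        return "FOLLOWUP: capital of France"
    if not any("million" in c for c in context):
        return "FOLLOWUP: population of Paris"
    return "About 2.1 million people live in the capital of France."

def iterative_rag(question: str, max_rounds: int = 4) -> str:
    context: list[str] = []
    query = question
    for _ in range(max_rounds):
        passage = toy_retrieve(query)
        if passage:
            context.append(passage)
        answer = toy_generate(question, context)
        if not answer.startswith("FOLLOWUP: "):
            return answer
        query = answer.removeprefix("FOLLOWUP: ")
    return answer
```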
Self-RAG
Model decides when and what to retrieve: reflection tokens let the model trigger retrieval on demand and critique whether retrieved passages are relevant and whether its output is supported by them.
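The control flow reduces to two gates: retrieve only when needed, and keep only passages the model judges relevant. In this sketch, `needs_retrieval` and `is_relevant` are crude heuristic stubs standing in for the learned reflection-token predictions; a real Self-RAG model makes both decisions itself.

```python
# Self-RAG-style gating: decide -> retrieve on demand -> critique -> generate.
def needs_retrieval(question: str) -> bool:
    # Stub: skip retrieval for chit-chat, retrieve for everything else.
    return question.rstrip("?").lower() not in {"hello", "how are you"}

def is_relevant(question: str, passage: str) -> bool:
    # Stub critique: keep passages sharing a word with the question.
    return bool(set(question.lower().split()) & set(passage.lower().split()))

def self_rag(question: str, retriever, generate) -> str:
    context = []
    if needs_retrieval(question):
        context = [p for p in retriever(question) if is_relevant(question, p)]
    return generate(question, context)
```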
Knowledge Graphs + RAG
Structured knowledge integration: retrieve entities and relation triples from a knowledge graph, alongside or instead of unstructured text chunks, so the generator can follow explicit links between facts.
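A toy version: pull the triples touching a query entity (optionally expanding over multiple hops) and linearize them into text for the generator. The graph contents and the linearization scheme are illustrative assumptions.

```python
# Toy KG retrieval: gather triples around an entity, linearize to text.
TRIPLES = [
    ("Marie Curie", "won", "Nobel Prize in Physics"),
    ("Marie Curie", "born_in", "Warsaw"),
    ("Warsaw", "capital_of", "Poland"),
]

def kg_retrieve(entity: str, hops: int = 1) -> list[tuple[str, str, str]]:
    frontier, found = {entity}, []
    for _ in range(hops):
        hit = [t for t in TRIPLES if t[0] in frontier or t[2] in frontier]
        found.extend(t for t in hit if t not in found)
        frontier = {t[0] for t in hit} | {t[2] for t in hit}
    return found

def linearize(triples: list[tuple[str, str, str]]) -> str:
    # Render triples as plain sentences the generator can consume.
    return ". ".join(f"{s} {p.replace('_', ' ')} {o}" for s, p, o in triples)
```

Multi-hop expansion (`hops=2`) is what lets graph-backed RAG answer chained questions that chunk retrieval misses.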
Evaluation
Metrics
Retrieval quality (e.g., recall@k, MRR) and generation quality (context relevance, answer faithfulness, answer relevance).
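Two common retrieval-side metrics, recall@k and mean reciprocal rank, can be computed directly from a ranked list and a gold set:

```python
# recall@k: fraction of gold passages found in the top k.
def recall_at_k(retrieved: list[str], relevant: set[str], k: int) -> float:
    hits = sum(1 for d in retrieved[:k] if d in relevant)
    return hits / len(relevant) if relevant else 0.0

# MRR: reciprocal rank of the first relevant hit (0.0 if none).
def mrr(retrieved: list[str], relevant: set[str]) -> float:
    for rank, d in enumerate(retrieved, start=1):
        if d in relevant:
            return 1.0 / rank
    return 0.0
```

Generation-side metrics (faithfulness, answer relevance) typically require an LLM or human judge rather than a closed-form score.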
Benchmarks
RAG-specific benchmarks and evaluation frameworks, such as RGB and the automated evaluators RAGAS and ARES.
Relevance to Agent Memory
RAG provides the foundation for agent memory systems: the same embed, store, and retrieve loop that serves a static corpus can index an agent's past interactions, turning retrieval into long-term memory lookup.
Key Takeaways
RAG grounds generation in external knowledge, reducing hallucination and keeping answers current without retraining; much of the gain comes from pre- and post-retrieval processing, not just the retriever; evaluation must cover both the retrieval and the generation stage.
Citation
@article{gao2023retrieval,
  title={Retrieval-Augmented Generation for Large Language Models: A Survey},
  author={Gao, Yunfan and others},
  journal={arXiv preprint arXiv:2312.10997},
  year={2023}
}