Predicting the Performance of Retrieval Augmented Generation
Sun 19.01 13:00 - 13:30
- Graduate Student Seminar
- Bloomfield 424
Abstract: Retrieval-augmented generation (RAG) enhances large language models (LLMs) by integrating external knowledge, yet irrelevant or distracting retrieved content can degrade the quality of the generated text. This study focuses on the core task of text completion with LLMs under in-context learning, applying both traditional and novel query performance prediction (QPP) methods to anticipate and improve generation quality. Our approach aims to bridge the gap between retrieval accuracy and effective language model output, advancing the robustness and reliability of RAG systems.