From the course: LLM Evaluations and Grounding Techniques

Ragas: Evaluation paper

- [Narrator] So far, we've been very hands-on, learning to ground LLMs with a focus on coding and real-life examples. In this chapter, we'll get a bit more theoretical and analyze five grounding papers that have been impactful in the space. We're starting with a RAG paper called RAGAS. RAGAS is a 2023 paper that focuses on the automatic evaluation of retrieval-augmented generation. There are a few interesting things here. It introduces the concept of measuring the correctness of RAG responses, broken down into a few different metrics. If we scroll down to page three, we get a definition of these three metrics: faithfulness, answer relevance, and context relevance. We already talked about faithfulness and how well an LLM actually answers a question based on its context. In this case, to automatically estimate faithfulness, they use an LLM to extract a set of statements. And using these statements, they try to have more specific…
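To make the faithfulness idea concrete, here is a minimal sketch of the statement-based scoring loop. In the actual RAGAS paper, both statement extraction and verification are done with LLM prompts; in this sketch, `extract_statements` and `is_supported` are hypothetical stand-ins (naive sentence splitting and keyword overlap) just to show the structure of the metric.

```python
def extract_statements(answer: str) -> list[str]:
    # Placeholder: RAGAS prompts an LLM to decompose the answer into
    # atomic statements; here we naively split on sentence boundaries.
    return [s.strip() for s in answer.split(".") if s.strip()]


def is_supported(statement: str, context: str) -> bool:
    # Placeholder for an LLM verification call ("can this statement be
    # inferred from the context?"); here, a crude word-overlap check.
    words = {w.lower().strip(".,") for w in statement.split()}
    ctx = {w.lower().strip(".,") for w in context.split()}
    return len(words & ctx) / max(len(words), 1) > 0.5


def faithfulness(answer: str, context: str) -> float:
    # Faithfulness = fraction of the answer's statements that are
    # supported by the retrieved context.
    statements = extract_statements(answer)
    if not statements:
        return 0.0
    supported = sum(is_supported(s, context) for s in statements)
    return supported / len(statements)


context = "Paris is the capital of France."
answer = "Paris is the capital of France. Paris has ten million people."
print(faithfulness(answer, context))  # one of two statements supported -> 0.5
```

The key design point survives the simplification: faithfulness is not judged on the answer as a whole, but statement by statement, so a single hallucinated claim lowers the score even when the rest of the answer is grounded.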