From the course: Securing Generative AI: Strategies, Methodologies, Tools, and Best Practices
Understanding RAG, LangChain, LlamaIndex, and AI orchestration
- [Instructor] Earlier in the course, you learned about Retrieval-Augmented Generation, or RAG, and you learned that it's a machine learning and AI concept that aims to enhance the capabilities of generative AI models with external knowledge sourced from a document collection, other tools, or vector databases. This framework aims to improve the quality of the responses produced by language models by attaching the model to external knowledge bases, thus enriching the LLM's inherent data representation and, of course, its pre-trained data. So at the end of the day, it's trying to reduce the likelihood of hallucinations and of producing bad information. However, I want to go over a few additional concepts here, specifically the concept of orchestration. We're going to talk about libraries like LangChain, LlamaIndex, LangGraph, and many others, and how these actually work together in an LLM application. So what you're seeing in front of…
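To make the RAG pattern described above concrete, here is a minimal sketch of the retrieve-then-augment flow. It is not LangChain or LlamaIndex code; the keyword-overlap retriever is a hypothetical stand-in for the vector-database similarity search those libraries would orchestrate, and the function names are illustrative.

```python
# Minimal sketch of the RAG pattern: retrieve external knowledge,
# then attach it to the prompt sent to the language model.
# The keyword-overlap scorer below is a toy stand-in for a real
# embedding model plus vector database.

def tokenize(text):
    """Lowercase and split text into a set of words, dropping basic punctuation."""
    return set(text.lower().replace("?", "").replace(".", "").replace(",", "").split())

def retrieve(query, documents, k=1):
    """Rank documents by word overlap with the query (stand-in for vector search)."""
    q = tokenize(query)
    scored = sorted(documents, key=lambda d: len(q & tokenize(d)), reverse=True)
    return scored[:k]

def build_prompt(query, documents):
    """Enrich the LLM's pre-trained knowledge with retrieved context."""
    context = "\n".join(retrieve(query, documents))
    return (
        "Answer using only this context:\n"
        f"{context}\n\n"
        f"Question: {query}"
    )

docs = [
    "LangChain is an orchestration library for LLM applications.",
    "A vector database stores embeddings for similarity search.",
]

prompt = build_prompt("What is LangChain?", docs)
```

Grounding the model in retrieved context this way is what reduces the likelihood of hallucinations: the prompt tells the model to answer from the supplied knowledge rather than from its pre-trained data alone.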