From the course: Developing RAG Apps with LlamaIndex and Next.js


LlamaIndex core concepts: Loaders index


When we talk about LlamaIndex, the main concept is that we have what we call the knowledge base, and the knowledge base, of course, is where all the information is saved, in this case in the form of a vector store or vector database. The relevant context is extracted from the knowledge base, combined with the incoming query, and passed to the large language model to produce a response. This is the process we also know as retrieval-augmented generation, in essence, as we've seen. So LlamaIndex answers questions across your own data in two stages: the indexing stage and the querying stage. Now let's look at the indexing stage. The indexing stage is very important because this is where we first have the data source, and the data source goes through the process of data loading. So we have the data loader, which loads the documents, and those documents are then indexed and…
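The two stages above can be sketched in a toy TypeScript example. This is not the real LlamaIndex API; the `embed`, `cosine`, and `retrieve` functions are hypothetical stand-ins (a bag-of-words embedding instead of a learned one) that only illustrate how indexing builds a vector knowledge base and how querying pulls relevant context from it:

```typescript
// Hypothetical embedding: a bag-of-words term-frequency vector.
function embed(text: string): Map<string, number> {
  const vec = new Map<string, number>();
  for (const word of text.toLowerCase().match(/\w+/g) ?? []) {
    vec.set(word, (vec.get(word) ?? 0) + 1);
  }
  return vec;
}

// Cosine similarity between two sparse term-frequency vectors.
function cosine(a: Map<string, number>, b: Map<string, number>): number {
  let dot = 0;
  for (const [word, count] of a) dot += count * (b.get(word) ?? 0);
  const norm = (v: Map<string, number>) =>
    Math.sqrt([...v.values()].reduce((s, x) => s + x * x, 0));
  return dot / (norm(a) * norm(b) || 1);
}

// Indexing stage: the loaded documents are embedded and stored,
// forming the knowledge base (here just an in-memory array).
const documents = [
  "LlamaIndex loads documents with data loaders.",
  "Next.js renders React components on the server.",
];
const index = documents.map((text) => ({ text, vector: embed(text) }));

// Querying stage: embed the query and retrieve the closest document
// as the relevant context for the language model.
function retrieve(query: string): string {
  const qv = embed(query);
  return index.reduce((best, doc) =>
    cosine(doc.vector, qv) > cosine(best.vector, qv) ? doc : best
  ).text;
}

// The retrieved context plus the original query would then be passed
// to the LLM to generate the final response.
console.log(retrieve("How are documents loaded?"));
```

In the real library, the indexing stage is handled by loaders and an index class, and the querying stage by a query engine built from that index; the point here is only the shape of the flow: load, embed, store, then retrieve and generate.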
