From the course: Developing RAG Apps with LlamaIndex and Next.js
LlamaIndex core concepts: Loaders index
When we talk about LlamaIndex, the main concept is that we have what we call the knowledge base, and the knowledge base, of course, is where all the information is saved, in this case in the form of a vector store or vector database. The relevant context is then extracted from it, and when a query comes in, the query and that context are passed together to the large language model to get a response. This is the process we also know as retrieval-augmented generation, in essence, as we've seen. So LlamaIndex answers questions across your own data in two stages: the indexing stage and the querying stage. Now let's look at the indexing stage. The indexing stage is very important because this is where we first have the data source, and the data source goes through the process of data loading. So we have the data loader, which loads the documents, and those documents are then indexed and…
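The two stages described above can be sketched as a tiny self-contained pipeline. This is only an illustration: the `embed`, `retrieve`, and in-memory store here are hypothetical stand-ins (a toy bag-of-words "embedding" instead of a real vector database), whereas a real app would use LlamaIndex's data loaders and vector store index.

```typescript
type Doc = { id: number; text: string };

// Toy "embedding": term counts over a fixed vocabulary (stand-in for a
// real embedding model).
function embed(text: string, vocab: string[]): number[] {
  const words = text.toLowerCase().split(/\W+/);
  return vocab.map((v) => words.filter((w) => w === v).length);
}

// Cosine similarity between two vectors.
function cosine(a: number[], b: number[]): number {
  const dot = a.reduce((s, x, i) => s + x * b[i], 0);
  const na = Math.sqrt(a.reduce((s, x) => s + x * x, 0));
  const nb = Math.sqrt(b.reduce((s, x) => s + x * x, 0));
  return na && nb ? dot / (na * nb) : 0;
}

// Indexing stage: "load" the documents and store each one as a vector.
const vocab = ["llamaindex", "loader", "index", "query", "vector"];
const docs: Doc[] = [
  { id: 1, text: "A loader reads documents into LlamaIndex" },
  { id: 2, text: "The index stores each document as a vector" },
];
const store = docs.map((d) => ({ doc: d, vec: embed(d.text, vocab) }));

// Querying stage: embed the query and retrieve the closest document;
// a real system would then pass this context plus the query to the LLM.
function retrieve(query: string): Doc {
  const qv = embed(query, vocab);
  return store.reduce((best, cur) =>
    cosine(cur.vec, qv) > cosine(best.vec, qv) ? cur : best
  ).doc;
}

console.log(retrieve("which vector index stores documents?").id); // → 2
```

The point of the sketch is only the shape of the flow: documents are loaded and indexed once up front, and each incoming query is answered by similarity search against that index.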
Contents
LlamaIndex core concepts: Loaders index (3m 52s)
The querying stage: Overview (2m 16s)
Querying stage: ChatEngine and querying engine full overview (4m 47s)
Hands-on: Create a custom RAG system with LlamaIndex (15m 7s)
Hands-on: Structured data extraction (6m 47s)
Hands-on: Querying a PDF file (5m 55s)
Hands-on: Interacting with a RAG system through an Express API, full hands-on (14m 28s)
Summary (2m 32s)