From the course: Build with AI: LLM-Powered Applications with Streamlit
Construct effective RAG prompts for better LLM answers - Python Tutorial
- [Instructor] This video will be purely front end. In a real retrieval-augmented generation pipeline, your app will automatically fetch relevant chunks of context using FAISS. But before you automate that, it's helpful to manually construct the RAG prompts so you understand the formatting structure. This is a valuable skill, since it shows how context and questions combine to guide the AI's response. In the next lesson, you'll automate this using embeddings and vector search. For now, though, let's manually simulate the middle part of that flow. Let's work with the file 03_07b.py. You'll import your streamlit package and write a title, such as Construct RAG Prompts. You'll then want to provide a text area for pasting context snippets. So again, this is going to be your user pasting these snippets in. So you'll have context_snippets = st.text_area, you'll have parentheses, and then you can put a prompt in here, such as Paste retrieved context snippets, and then make sure you note…
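The prompt-assembly step the instructor describes can be sketched as a plain function, with the Streamlit wiring shown in comments. The function name and template wording below are assumptions for illustration, not the exact code from 03_07b.py.

```python
# A minimal sketch of manually constructing a RAG prompt: combine
# user-pasted context snippets and a question into one prompt string.
# Names and template wording are hypothetical, not the course's exact code.

def build_rag_prompt(context_snippets: str, question: str) -> str:
    """Combine retrieved context and a user question into a single prompt."""
    return (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context_snippets}\n\n"
        f"Question: {question}\n"
        "Answer:"
    )

# In the Streamlit app, the inputs would come from widgets, e.g.:
#   import streamlit as st
#   st.title("Construct RAG Prompts")
#   context_snippets = st.text_area("Paste retrieved context snippets")
#   question = st.text_input("Ask a question")
#   st.code(build_rag_prompt(context_snippets, question))

if __name__ == "__main__":
    print(build_rag_prompt(
        "Explore California offers guided tours.",
        "What does Explore California offer?",
    ))
```

Keeping the prompt construction in a pure function like this makes it easy to reuse unchanged once retrieval is automated in the next lesson.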
Contents
- How the document Q&A chatbot works (5m 20s)
- Introducing Explore California (5m 1s)
- Prepare text data for embedding (7m 45s)
- Generate embeddings from text for searchability (7m 40s)
- Create a Faiss vector store for fast retrieval (5m 38s)
- Query the vector database to find relevant information (8m 14s)
- Construct effective RAG prompts for better LLM answers (6m 8s)
- Use the RAG query function to combine search and chat (8m 6s)