From the course: Hands-On AI: Building LLM-Powered Apps
Solution: Fixing hallucination via prompting - Python Tutorial
- [Instructor] Welcome back. I trust you had a fun lab session iterating on the prompts, with all the possibilities and all the knobs we can tune. There are many, many ways to solve this problem, and here I will demonstrate my solution. Yours might be different, but as long as it produces the intended results, that's awesome. So let's start by uploading the document again and asking the question we want to fix: what is the operating margin? Then let's navigate down to the prompt playground. Click on this little bug icon, and we are in the prompt playground. The first thing we will do is investigate the current prompt. The current prompt states the action it wants the model to take first: "Given the extracted parts of a long document and a question, create a final answer with references. If you don't know the answer, just say that you don't know. Don't try to make up an answer." So this is the action in the RACEF Framework. And it says…
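The kind of prompt described above can be sketched as a simple Python template. This is a minimal illustration, not the course's verbatim prompt: the template wording, the `{question}`/`{summaries}` variable names, and the `build_prompt` helper are assumptions for demonstration.

```python
# Sketch of a question-answering prompt that discourages hallucination
# by explicitly telling the model to admit when it does not know.
# The template text and helper below are illustrative assumptions,
# not the exact prompt used in the course.

PROMPT_TEMPLATE = """Given the following extracted parts of a long document and a question, \
create a final answer with references ("SOURCES").
If you don't know the answer, just say that you don't know. Don't try to make up an answer.

QUESTION: {question}
=========
{summaries}
=========
FINAL ANSWER:"""


def build_prompt(question: str, extracted_parts: list[str]) -> str:
    """Fill the template with the user's question and the retrieved document chunks."""
    summaries = "\n".join(extracted_parts)
    return PROMPT_TEMPLATE.format(question=question, summaries=summaries)


prompt = build_prompt(
    "What is the operating margin?",
    ["Content: Operating margin was 30%.\nSource: report.pdf, page 4"],
)
print(prompt)
```

The key anti-hallucination lever here is the explicit instruction "just say that you don't know": it gives the model a sanctioned way out instead of forcing it to fabricate an answer when the retrieved context is insufficient.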