From the course: Designing Agentic AI Products (No Code Required)
AI hallucinations and agents
- [Instructor] Do you remember what an AI hallucination is? It is the strange behavior of GenAI: giving a completely made-up answer with total confidence. For example, if a customer asks your chatbot support system, "What is the cost of a 10-year warranty?" and your business does not offer a 10-year warranty, the LLM may invent an answer and say it costs $10. Then you have to do damage control and earn back the customer's trust. Agentic AI automates LLM calls, so it has to be built with awareness of LLM hallucination. Otherwise, the agent will believe whatever the LLM says and may act on decisions that harm the business and people. You have two options to mitigate hallucinations. You can add a human-in-the-loop step to check for GenAI hallucination, or you can design a validation agent that double-checks agent responses against a validation dataset and flags when an LLM hallucinates.
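The validation-agent idea above can be sketched in a few lines. This is a minimal, hypothetical example, not anything from the course: the `WARRANTY_TERMS` data and the function name are illustrative assumptions standing in for a real validation dataset.

```python
# Hypothetical validation dataset: the warranty terms the business
# actually offers (years -> price in dollars). In practice this would
# come from a trusted source such as the product database.
WARRANTY_TERMS = {1: 0, 2: 49, 3: 89}

def validate_warranty_answer(years: int, quoted_cost: float) -> bool:
    """Return False (flag as possible hallucination) if the LLM quotes
    a warranty the business does not offer, or the wrong price."""
    if years not in WARRANTY_TERMS:
        # No such warranty exists, so any confident quote is made up.
        return False
    return WARRANTY_TERMS[years] == quoted_cost

# The 10-year, $10 answer from the example is flagged for review:
print(validate_warranty_answer(10, 10))  # False -> escalate to a human
print(validate_warranty_answer(2, 49))   # True  -> answer is grounded
```

A check like this is only as good as its validation data, which is why the course pairs it with the human-in-the-loop option for cases the dataset cannot cover.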