From the course: The OWASP Top 10 for Large Language Model (LLM) Applications: An Overview


Preventing improper output handling

- [Instructor] Now that we understand the risk of improper output handling, let's look at how we can defend against it. Here's the most important rule: never trust large language model output blindly. Always treat large language model output like user input. It could be wrong, unsafe, or misleading. Here's how to reduce the risk. Number one, validate the output format. Ask your large language model to respond in a specific structure, for example, JSON. Then verify the response matches what you expected. If it doesn't, reject it. Number two, sanitize the output. Strip out anything risky from the generated output. If you're expecting text, make sure it doesn't contain any code or tags that could get executed. Number three, filter for unsafe content. Scan the output for any personal data, dangerous commands, or inappropriate language. You can even use another AI model to help you review it. Number four, add human oversight. For sensitive actions…
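The first three steps described above can be sketched in code. This is a minimal illustration, not a production filter: the expected JSON keys, the tag-stripping regex, the blocklist patterns, and all function names here are hypothetical examples, and a real system would use a proper schema validator, an HTML sanitizer library, and a much broader unsafe-content check (possibly a second model, as the lesson suggests).

```python
import json
import re

# Assumed schema for this sketch: the model was asked to return
# a JSON object with exactly these keys.
EXPECTED_KEYS = {"summary", "sentiment"}

def validate_format(raw: str):
    """Step 1: parse the response as JSON and verify it matches the
    structure we asked for; return None (reject) otherwise."""
    try:
        data = json.loads(raw)
    except json.JSONDecodeError:
        return None
    if not isinstance(data, dict) or set(data) != EXPECTED_KEYS:
        return None
    return data

# Crude tag stripper for fields that should be plain text.
TAG_RE = re.compile(r"<[^>]+>")

def sanitize(text: str) -> str:
    """Step 2: strip tags that could be rendered or executed downstream."""
    return TAG_RE.sub("", text)

# Illustrative blocklist only; real filtering needs far more coverage
# (personal data, shell injection, prompt leakage, etc.).
UNSAFE_RE = re.compile(r"(?i)(rm\s+-rf|drop\s+table|<script)")

def is_unsafe(text: str) -> bool:
    """Step 3: flag dangerous commands or markup in the output."""
    return bool(UNSAFE_RE.search(text))

def handle_llm_output(raw: str) -> dict:
    """Treat the model's output like untrusted user input:
    validate, sanitize, then filter before using it."""
    data = validate_format(raw)
    if data is None:
        raise ValueError("Rejected: response did not match expected JSON schema")
    data = {key: sanitize(value) for key, value in data.items()}
    if any(is_unsafe(value) for value in data.values()):
        raise ValueError("Rejected: unsafe content detected")
    return data
```

For step four, human oversight, the equivalent pattern is to route any output that triggers a sensitive action into a review queue instead of executing it automatically.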
