From the course: Securing Generative AI: Strategies, Methodologies, Tools, and Best Practices
Unlock this course with a free trial
Join today to access over 24,800 courses taught by industry experts.
Learning objectives
From the course: Securing Generative AI: Strategies, Methodologies, Tools, and Best Practices
Learning objectives
- Welcome to Lesson 2: Understanding Prompt Injection and Insecure Output Handling. In this lesson, we will go over two critical security issues that are affecting large language models and their implementations. Those are prompt injection attacks and insecure output handling. You will learn what prompt injection attacks are and explore real life examples and understand how these vulnerabilities can be exploited by attackers. We will cover best practices for mitigating this risk, including ChatML for secure API calls, enforcing privilege controls on LLM access, and adhering to the OWASP Application Security Verification Standard, or ASVS, for protecting against insecure output handling as well. Let's get started.
Contents
-
-
-
-
(Locked)
Learning objectives1m 1s
-
Defining prompt injection attacks11m 41s
-
(Locked)
Exploring real-life prompt injection attacks3m 57s
-
(Locked)
Using ChatML for OpenAI API calls to indicate to the LLM the source of prompt input10m 4s
-
(Locked)
Enforcing privilege control on LLM access to back-end systems6m 10s
-
(Locked)
Best practices around API tokens for plugins, data access, and function-level permissions3m 2s
-
(Locked)
Understanding insecure output handling attacks3m 22s
-
(Locked)
Using the OWASP ASVS to protect against insecure output handling4m 43s
-
(Locked)
-
-
-
-
-