From the course: Securing Generative AI: Strategies, Methodologies, Tools, and Best Practices
Unlock this course with a free trial
Join today to access over 24,800 courses taught by industry experts.
Using ChatML for OpenAI API calls to indicate to the LLM the source of prompt input
From the course: Securing Generative AI: Strategies, Methodologies, Tools, and Best Practices
Using ChatML for OpenAI API calls to indicate to the LLM the source of prompt input
- [Narrator] Let's go over three different concepts in AI. And as a matter of fact, AI security. One is system prompts. What are system prompts and prompt templates, what is an attack called meta prompt extraction, and also toxicity attacks as well. So understanding these elements is definitely important for comprehending different vulnerabilities and potential exploits against LLM, or SLM-based applications. Right? So let's start with system prompts. What you're seeing in the screen is basically code that I use for other training, and specifically this one is for retrieval augmented generation for cybersecurity. So in this case, using AI. In this course we're actually learning about how to secure AI implementations. But let me actually go over at least a few examples of prompt templates or or system prompts, right? So I have a couple of them in here in the screen. But to define what our system prompts, you know, system prompts are instructions or context given to the AI model that…
Contents
-
-
-
-
(Locked)
Learning objectives1m 1s
-
Defining prompt injection attacks11m 41s
-
(Locked)
Exploring real-life prompt injection attacks3m 57s
-
(Locked)
Using ChatML for OpenAI API calls to indicate to the LLM the source of prompt input10m 4s
-
(Locked)
Enforcing privilege control on LLM access to back-end systems6m 10s
-
(Locked)
Best practices around API tokens for plugins, data access, and function-level permissions3m 2s
-
(Locked)
Understanding insecure output handling attacks3m 22s
-
(Locked)
Using the OWASP ASVS to protect against insecure output handling4m 43s
-
(Locked)
-
-
-
-
-