From the course: Securing Generative AI: Strategies, Methodologies, Tools, and Best Practices

Unlock this course with a free trial

Join today to access over 24,800 courses taught by industry experts.

Using ChatML for OpenAI API calls to indicate to the LLM the source of prompt input

Using ChatML for OpenAI API calls to indicate to the LLM the source of prompt input

From the course: Securing Generative AI: Strategies, Methodologies, Tools, and Best Practices

Using ChatML for OpenAI API calls to indicate to the LLM the source of prompt input

- [Narrator] Let's go over three different concepts in AI. And as a matter of fact, AI security. One is system prompts. What are system prompts and prompt templates, what is an attack called meta prompt extraction, and also toxicity attacks as well. So understanding these elements is definitely important for comprehending different vulnerabilities and potential exploits against LLM, or SLM-based applications. Right? So let's start with system prompts. What you're seeing in the screen is basically code that I use for other training, and specifically this one is for retrieval augmented generation for cybersecurity. So in this case, using AI. In this course we're actually learning about how to secure AI implementations. But let me actually go over at least a few examples of prompt templates or or system prompts, right? So I have a couple of them in here in the screen. But to define what our system prompts, you know, system prompts are instructions or context given to the AI model that…

Contents