From the course: Securing Generative AI: Strategies, Methodologies, Tools, and Best Practices


Learning objectives

- Welcome to Lesson 1: Introduction to AI Threats and LLM Security. In this lesson, you will gain a foundational understanding of the security landscape surrounding large language models, or LLMs, and AI-related systems. We will explore the significance of LLMs in today's AI landscape and discuss the many threats that these models face. You will be introduced to critical concepts such as Retrieval-Augmented Generation, or RAG; the OWASP Top 10 risks for LLM applications; the MITRE ATLAS framework; and the NIST taxonomy and terminology of attacks and mitigations. By the end of this lesson, you will have a comprehensive overview of the key security challenges and the frameworks relevant to protecting AI systems. This knowledge will serve as a foundation for the subsequent sections of this training, where you will take a deep dive into specific threats and mitigation strategies. Let's get started.
