From the course: Securing Generative AI: Strategies, Methodologies, Tools, and Best Practices
Learning objectives
- Welcome to Lesson 1: Introduction to AI Threats and LLM Security. In this lesson, you will gain a foundational understanding of the security landscape surrounding large language models, or LLMs, and related AI systems. We will explore the significance of LLMs in today's AI landscape and discuss the many threats these models face. You will be introduced to critical concepts such as retrieval-augmented generation, or RAG, the OWASP Top 10 risks for LLMs, the MITRE ATLAS framework, and the NIST taxonomy and terminology of attacks and mitigations. By the end of this lesson, you will have a comprehensive overview of the key security challenges and the frameworks relevant to protecting AI systems. This knowledge will serve as a foundation for the subsequent sections of this training, where you will take a deep dive into specific threats and mitigation strategies. Let's get started.
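Before the dedicated RAG lesson, it may help to see the retrieve-then-augment pattern in miniature. The sketch below is not from the course materials: the `DOCUMENTS` list, the keyword-overlap `retrieve` function, and the `generate` placeholder are illustrative assumptions standing in for a real embedding model, vector store, and LLM API.

```python
# Minimal retrieval-augmented generation (RAG) sketch.
# Illustrative only: a real system would use embeddings and a vector
# store; naive keyword overlap stands in for semantic retrieval here.

DOCUMENTS = [
    "OWASP publishes a Top 10 list of risks specific to LLM applications.",
    "MITRE ATLAS catalogs adversarial tactics against AI systems.",
    "NIST defines a taxonomy of attacks on and mitigations for AI.",
]

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Score each document by words shared with the query (toy retriever)."""
    q_words = set(query.lower().split())
    scored = sorted(
        docs,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]

def generate(prompt: str) -> str:
    """Placeholder for an LLM call (an API request in practice)."""
    return f"[model response conditioned on a {len(prompt)}-char prompt]"

def rag_answer(query: str) -> str:
    context = "\n".join(retrieve(query, DOCUMENTS))
    # Retrieved (potentially untrusted) text is injected into the prompt --
    # the surface exploited by indirect prompt injection attacks.
    prompt = f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
    return generate(prompt)

if __name__ == "__main__":
    print(rag_answer("What risks does OWASP list for LLMs?"))
```

The security-relevant design point is that retrieved text flows directly into the model's prompt, which is why indirect prompt injection features prominently in the OWASP Top 10 for LLM applications covered later in this lesson.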
Contents
- Learning objectives (1m 18s)
- Understanding the significance of LLMs in the AI landscape (7m 6s)
- Exploring the resources for this course: GitHub repositories and others (2m 54s)
- Introducing retrieval-augmented generation (RAG) (12m 24s)
- Understanding the OWASP Top 10 risks for LLMs (5m 46s)
- Exploring the MITRE ATLAS™ (Adversarial Threat Landscape for Artificial Intelligence Systems) framework (5m 38s)
- Understanding the NIST taxonomy and terminology of attacks and mitigations (7m 8s)