Securing Generative AI: Strategies, Methodologies, Tools, and Best Practices
With Omar Santos and Pearson
Duration: 3h 38m
Skill level: Intermediate
Released: 1/23/2025
Course details
This course offers a comprehensive exploration of the security measures essential to developing and deploying AI implementations, including large language models (LLMs) and retrieval-augmented generation (RAG). Discover key considerations and mitigations that reduce overall risk in organizational AI system development. Author and tech trainer Omar Santos covers the essentials of secure-by-design principles, focusing on security outcomes, radical transparency, and building organizational structures that prioritize security. Along the way, learn about AI threats, LLM security, prompt injection, insecure output handling, red teaming AI models, and more. By the end of this course, you’ll be prepared to wield your newly honed skills to protect RAG implementations, secure vector databases, select embedding models, and leverage powerful orchestration libraries like LangChain and LlamaIndex.
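To give a flavor of one topic the course covers, securing a RAG retrieval step, the sketch below shows an in-memory vector store that enforces per-document access labels before ranking results, so unauthorized content never reaches the LLM prompt. This is a minimal illustration, not material from the course: the bag-of-words `embed` function is a toy stand-in for a real embedding model, and all class and role names are hypothetical.

```python
import math

def embed(text):
    # Toy embedding: hashed bag-of-words vector, normalized to unit length.
    # A stand-in for a real embedding model; purely illustrative.
    vec = [0.0] * 64
    for token in text.lower().split():
        vec[hash(token) % 64] += 1.0
    norm = math.sqrt(sum(x * x for x in vec)) or 1.0
    return [x / norm for x in vec]

def cosine(a, b):
    # Dot product of two unit vectors equals their cosine similarity.
    return sum(x * y for x, y in zip(a, b))

class VectorStore:
    """In-memory vector store that tags each document with the roles
    allowed to retrieve it, enforcing authorization at query time."""

    def __init__(self):
        self.docs = []  # list of (embedding, text, allowed_roles)

    def add(self, text, allowed_roles):
        self.docs.append((embed(text), text, set(allowed_roles)))

    def retrieve(self, query, role, k=2):
        # Filter by role *before* ranking: a document the caller is not
        # authorized to see is never returned as LLM context.
        q = embed(query)
        candidates = [(cosine(q, e), t)
                      for e, t, roles in self.docs if role in roles]
        candidates.sort(reverse=True)
        return [t for _, t in candidates[:k]]

store = VectorStore()
store.add("Quarterly revenue figures are confidential.", {"finance"})
store.add("The office wifi password policy requires rotation.", {"finance", "staff"})
print(store.retrieve("wifi password policy", role="staff"))
```

The design choice worth noting is that the authorization check lives inside the retrieval layer rather than in the prompt: filtering documents before similarity ranking means a prompt-injection attack against the LLM cannot coax it into revealing context it was never given.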