Building Secure and Trustworthy LLMs Using NVIDIA Guardrails
With Nayan Saxena
Duration: 56m
Skill level: Intermediate
Released: 9/13/2024
Course details
Guardrails are essential safeguards for applications built on large language models (LLMs): they help prevent misuse, define conversational standards, and build public trust in AI technologies. In this course, instructor Nayan Saxena explores the importance of ethical AI deployment and shows how NVIDIA NeMo Guardrails enforces LLM safety and integrity. Learn how to construct conversational guidelines using Colang, leverage advanced functionalities to craft dynamic LLM interactions, augment LLM capabilities with custom actions, and improve response quality and contextual accuracy with retrieval-augmented generation (RAG). By seeing guardrails in action and analyzing real-world case studies, you'll also acquire skills and best practices for implementing secure, user-centric AI systems. This course is ideal for AI practitioners, developers, and ethical technology advocates seeking to deepen their knowledge of LLM safety, ethics, and application design for responsible AI.
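To give a flavor of the Colang guidelines the course covers, here is a minimal sketch of a single conversational rail loaded through the NeMo Guardrails Python API. The greeting phrases, flow name, and model configuration below are illustrative assumptions, not material taken from the course itself.

```python
# A minimal sketch of a NeMo Guardrails setup. The greeting phrases,
# flow, and model choice are illustrative assumptions.
from nemoguardrails import LLMRails, RailsConfig

# Colang 1.0 definitions: a user intent, a canned bot response, and a
# flow that wires them together as a conversational rail.
colang_content = """
define user express greeting
  "hello"
  "hi there"

define bot express greeting
  "Hello! How can I help you today?"

define flow greeting
  user express greeting
  bot express greeting
"""

# YAML config naming the underlying LLM (hypothetical model choice).
yaml_content = """
models:
  - type: main
    engine: openai
    model: gpt-3.5-turbo
"""

config = RailsConfig.from_content(
    colang_content=colang_content,
    yaml_content=yaml_content,
)
rails = LLMRails(config)

# A user message matching the greeting intent is handled by the rail's
# predefined response rather than by raw model output.
response = rails.generate(messages=[{"role": "user", "content": "hello"}])
print(response["content"])
```

Running this sketch requires the nemoguardrails package and credentials for whichever model engine the YAML config names.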