XAI and Interpretability in Cybersecurity
With Stephanie Itimi
Liked by 93 users
Duration: 2h 22m
Skill level: Advanced
Released: 11/22/2024
Course details
This course unlocks the power of explainable AI (XAI) for smarter cybersecurity decisions, equipping you to understand AI reasoning and to address its limitations for greater transparency and effectiveness. Instructor Stephanie Itimi delves into the mechanics of XAI, covering both model-agnostic techniques (LIME, SHAP) and model-specific ones (decision trees, feature interaction maps). She provides tools for hands-on analysis of AI models in cybersecurity scenarios, showing you how to apply XAI at both strategic and operational levels, so you can make confident, informed use of AI in your organization's cybersecurity decision-making.
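The model-agnostic techniques the course names (LIME, SHAP) share a core idea: perturb an input, watch how the black-box model's predictions change, and fit a simple local model whose coefficients serve as feature attributions. Below is a minimal sketch of a LIME-style local surrogate, assuming only NumPy; the `black_box` intrusion scorer, the feature meanings, and all parameter values are illustrative placeholders, not material from the course.

```python
import numpy as np

rng = np.random.default_rng(0)

def black_box(X):
    # Hypothetical anomaly scorer: flags a high packet rate (feature 0)
    # combined with unusual port entropy (feature 1). Stands in for any
    # trained classifier's predict function.
    return (2.0 * X[:, 0] + 0.5 * X[:, 1] > 1.5).astype(float)

def explain_locally(x, predict_fn, n_samples=500, kernel_width=0.75):
    """Fit a weighted linear surrogate around instance x (LIME-style)."""
    # 1. Perturb the instance with Gaussian noise.
    Z = x + rng.normal(scale=0.5, size=(n_samples, x.size))
    # 2. Query the black box on the perturbed points.
    y = predict_fn(Z)
    # 3. Weight samples by proximity to x (exponential kernel).
    d = np.linalg.norm(Z - x, axis=1)
    w = np.exp(-(d ** 2) / kernel_width ** 2)
    # 4. Weighted least squares via the normal equations: the surrogate's
    #    coefficients are the local feature attributions.
    A = np.hstack([Z, np.ones((n_samples, 1))])  # add intercept column
    W = np.diag(w)
    coef, *_ = np.linalg.lstsq(A.T @ W @ A, A.T @ W @ y, rcond=None)
    return coef[:-1]  # drop the intercept term

x = np.array([0.8, 0.4])          # instance near the decision boundary
attributions = explain_locally(x, black_box)
print(attributions)               # feature 0 (packet rate) should dominate
```

The attributions explain only the model's behavior near this one instance, which is exactly the "local" trade-off the course's comparison of model-agnostic and model-specific methods turns on.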
Skills you’ll gain
Earn a shareable certificate
Share what you’ve learned, and be a standout professional in your desired industry with a certificate showcasing the knowledge you gained from the course.
LinkedIn Learning
Certificate of Completion
- Showcase on your LinkedIn profile under the “Licenses & Certifications” section
- Download or print as a PDF to share with others
- Share as an image online to demonstrate your skill
Meet the instructor
Learner reviews
- Anne Pepita Francis
Cybersecurity Specialist | Governance Risk & Compliance | Vulnerability Management | Cloud Security (AWS & GCP & Azure) | CSPM | Data Protection & AI…
Contents
What’s included
- Practice while you learn: 1 exercise file
- Test your knowledge: 6 quizzes
- Learn on the go: access on tablet and phone