From the course: Building Trustworthy AI Systems: Transparency, Explainability, and Control with ISO/IEC TR 24028
Explainability (Clauses 9.3-9.3.7)
- [Instructor] Do you recall a time as a child when you acted out embarrassingly? Do you remember two of the first questions your parents or guardians asked? "Why did you do this?" and "What were you thinking?" The "what were you thinking" question asks you to explain your thought process. The "why did you do this" question tries to get at the motivation behind the decision. Those questions relate to explainability, which is the connection between an AI system's internal actions and the justification for those actions. In this video, I will share how risks to the trustworthiness of an AI system are managed more effectively when explainability addresses how the AI system functions. Descriptions of the AI system's function should be tailored to individual differences, such as the user's role, knowledge, and skill level. Understanding the underlying function of a system creates a better path to debugging, monitoring, documenting, auditing, and governance. ISO 24028 states three modes of AI…
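As a minimal sketch of what connecting a model's behavior to a justification can look like in practice (this example is not from the course and assumes a scikit-learn tabular classifier, the built-in breast-cancer dataset, and the permutation_importance utility), one common approach is to measure how much the model's performance depends on each input feature and then report the most influential features in plain language:

```python
# Illustrative sketch only: permutation feature importance as one simple
# explainability technique. Assumes scikit-learn and its bundled dataset.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Load a small tabular dataset and train a baseline classifier.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature and measure how much the test score drops;
# a large drop suggests the feature matters to the model's decisions.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

# Report the most influential features as a plain-language justification,
# which could then be tailored to the reader's role and skill level.
for idx in result.importances_mean.argsort()[::-1][:5]:
    print(f"{X.columns[idx]}: importance {result.importances_mean[idx]:.3f}")
```

The same importance scores could feed a brief narrative for a business user or a detailed table for an auditor, which is one way explanations can be tailored to role, knowledge, and skill level.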
Contents
- Transparency (Clauses 9-9.2), 2m 39s
- Explainability (Clauses 9.3-9.3.7), 3m 6s
- Controllability, bias, and privacy (Clauses 9.4-9.6), 5m 42s
- Reliability, resilience, and robustness (Clauses 9.7-9.9), 2m 37s
- Testing (Clauses 9.10-9.10.2.7), 6m 4s
- Evaluation (Clauses 9.10.3-9.10.5), 4m 52s
- Use and applicability (Clauses 9.11-9.11.4), 2m 17s