From the course: Leading Responsible AI in Organizations
Transparency and explainability to cultivate trust
- Over the past several years, I've asked hundreds of employees about their top concerns in AI. Two common themes continue to emerge: transparency and explainability. They've shared that, more and more, they are being asked to explain to customers how their organization's AI systems make decisions. Sometimes these automated decisions can make it harder for a customer to get, let's say, healthcare treatments, a mortgage, or even certain job opportunities. So I'd like to offer an approach to help leaders support their employees in responding to these inquiries.

But first, let's talk about why this is so important. Let's say your AI system is trained to automate decision-making, such as deciding whether or not to approve a credit card for a potential customer. Are you confident that your customer-facing employees understand why and how the algorithms make these decisions? Because I guarantee you, your customers want to know, and in my research, employees who are unable to answer these questions feel inadequately prepared to do their jobs. Without this information, they're unable to support requests from customers or from their leadership.

This is where you as a leader come in. You can help cultivate trust and comprehension with your employees so they know how to handle these situations when they come up. Here are three best practices leaders have shared with me about how they do this.

First, engage: create opportunities for employees to provide feedback. This could be in the form of a town hall, an all-hands meeting, surveys, or even small working groups. You can use these opportunities to ask questions about your employees' experiences with AI.

Second, involve your employees. Work with them to determine how best to address the feedback received during your listening sessions. This helps establish a longer-term plan for integrating transparency into your business practices.
And third, allow employees to lead as responsible AI champions. This could mean establishing formal channels where they can find the best solutions to enhance your overall responsible AI culture. Creating employee-led opportunities can help build a culture of transparency and explainability. Employees want to be engaged, and this engagement can lead to shared responsibility and awareness, improving your organization's ability to meet your customers' needs.

I bet if you create an opportunity to hear from your employees, you will learn that they are doing the best they can to respond to customer requests, and you can help them go a step further. Start by asking your customer-facing teams a basic question: "Would you know how to handle customer questions around our use of AI?" Employees want their organizations to succeed, and you can help make their experience better and lead to success for everyone.
Contents
- Leading responsible AI with ethics as core values (3m 58s)
- Ensuring data governance as responsible leadership (3m 20s)
- Transparency and explainability to cultivate trust (3m 28s)
- Regulatory compliance as a standard of integrity (3m 5s)
- Creating a responsible AI hub of excellence (2m 44s)
- Accountability and security as a fundamental practice (3m 17s)
- Inclusive collaboration in AI development (3m 18s)
- Employee stakeholder collaboration as a partnership model (3m 28s)
- Cultivating continuous responsible AI learning (2m 43s)
- Organizational responsiveness in AI ethics (2m 46s)
- External engagement as a responsible leadership approach (3m)
- Sustainability as an ethical obligation (3m 15s)