From the course: Introduction to Agentic AI Governance
The evolving risks of AI
- [Instructor] We are living in exciting times where so much happens in just a week that it often feels like a whole decade has passed. If you are feeling overwhelmed by how fast AI is moving, you are not alone. Even industry leaders, including me, feel the same way. To put this into context, consider this: I shared the foundations of AI governance on LinkedIn Learning just last year, and here I am today extending that journey from AI governance to agentic AI governance. With the pace at which AI is becoming deeply integrated into every aspect of our lives, we are undergoing a shift in the world order. This is an inflection point in the history of AI that will be defined by those who choose to build robust, reliable, and trustworthy AI. That's precisely what this course focuses on. It is grounded in the principle of highlighting the current risks and challenges that must be addressed as we advance toward agentic AI. And when discussing risk, it's important to recognize that not all risks are equal. To understand this, let's look at how the spectrum of risk has evolved from traditional AI systems to generative AI and now agentic AI, the new frontier that's catching everyone's attention these days. But before we get into what makes agentic AI so different, it's worth stepping back to look at some of the risks we have been facing since the early days of AI, starting with data. AI systems are built on data, and I cannot emphasize enough the importance of robust data governance measures. To start with, there is data security, which has long been a challenge. Many organizations still struggle with preventing data leakage and unauthorized access to sensitive information. Then there is data quality, which is often overlooked. Poor-quality data doesn't just impact model performance and accuracy; it can lead to biased and discriminatory outcomes. The risks don't stop there. We have seen adversarial attacks, where small manipulations can deceive AI models into making wrong decisions, and data poisoning, where corrupted training data leads to inaccurate or even dangerous results. So far, we have talked about traditional AI systems. However, the rise of generative AI systems has not only amplified those risks, but also introduced new, more complex concerns. Because of their ability to mimic human language and generate hyper-realistic content, they blur the line between fact and fiction, which can lead to the spread of misinformation, including deepfakes, in addition to raising concerns around hallucinations and intellectual property rights. What's most alarming is the speed and scale at which these harms can spread. And now, in 2025, the spotlight has shifted to agentic AI. While the potential is exciting, the risks are equally significant. These systems can act autonomously, which increases the risk of unintended consequences and raises entirely new ethical concerns. According to the failure mode analysis by Microsoft's AI Red Team, these novel failure modes are unique to agentic AI and arise from complex scenarios such as inter-agent communication within multi-agent systems. Given the evolving nature of technology and the regulatory landscape, no one person or authority has all the answers, but I believe the journey to finding the right answers begins with asking the right questions. In this course, I will walk you through the key risks, challenges, and potential unintended consequences of AI agents, which will help us frame the right questions for effective governance, an overview of which is coming up in the next video. I will see you there.