From the course: Navigating the EU AI Act
Risk management systems
- [Narrator] Wherever there's an engineering team within an organization working on cutting-edge technology, there's a risk and compliance team close behind working to understand, categorize, quantify, and eventually mitigate risk appropriately. The European Union's AI Act recognizes the value of risk management to both businesses and consumers, and has defined requirements to ensure teams continue to apply a lens of professional skepticism to novel tools and products. Let's take a look at Chapter III, Article 9 of the AI Act and review some of its requirements.

First off, the AI Act explicitly states that a risk management system must be established, implemented, documented, and maintained. It's important to note that while this requirement exists, there is no single right answer. I'd recommend reviewing the National Institute of Standards and Technology (NIST) AI Risk Management Framework and the International Organization for Standardization's ISO/IEC 23894 publication. Both documents provide initial guidance for identifying, classifying, and managing AI risk.

Next, the AI Act defines minimum criteria for what the risk management system should be capable of. On a recurring basis, the system should identify and analyze the risks associated with high-risk artificial intelligence. It should be capable of estimating and evaluating the impact and likelihood of risk scenarios (see the sketch at the end of this section for one way to record that), and it should be built to ingest information from post-market monitoring. Lastly, it needs to be capable of supporting the adoption of relevant and timely mitigation measures. Most of these requirements are standard processes built into the compliance function within an organization. If you'd like to see how this is currently set up for your team, I'd recommend reviewing your organization's enterprise risk management policy.

The AI Act also provides some tips for proactively addressing risk that it encourages organizations to adopt. It recommends eliminating or reducing risk through thoughtful design and deployment methods. The Cybersecurity and Infrastructure Security Agency (CISA) published a paper in April 2023 defining its Secure by Design principles; I recommend reviewing that paper to understand which practices help address this. Lastly, the developer of the AI system should provide necessary training to deployers with the intention of helping to reduce risky behaviors. As you would with any new tool or product, provide instructions to help people use it correctly.

There are a few caveats and additional rules within the original text of the AI Act, so I recommend reviewing Article 9 in detail if your AI system is considered high-risk. Remember, as mentioned earlier, these requirements are in addition to other regulatory and consumer protection requirements, never in place of them.
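To make the impact-and-likelihood evaluation above concrete, here is a minimal sketch of a risk-register entry in Python. The class name, the 1-to-5 scales, and the escalation threshold are illustrative assumptions on my part; neither the AI Act nor the NIST or ISO guidance prescribes a particular scoring scheme.

from dataclasses import dataclass, field
from datetime import date

@dataclass
class RiskScenario:
    # One entry in an illustrative AI risk register.
    # Field names and scales are assumptions, not mandated by the Act.
    description: str
    likelihood: int  # 1 (rare) to 5 (almost certain) -- example scale
    impact: int      # 1 (negligible) to 5 (severe)   -- example scale
    mitigations: list[str] = field(default_factory=list)
    last_reviewed: date = field(default_factory=date.today)

    @property
    def score(self) -> int:
        # Likelihood x impact is a common enterprise risk management convention.
        return self.likelihood * self.impact

    def needs_escalation(self, threshold: int = 15) -> bool:
        # The threshold is an arbitrary example; organizations set their own.
        return self.score >= threshold

# Example: a scenario surfaced by post-market monitoring.
drift = RiskScenario(
    description="Model accuracy degrades on post-launch data (drift)",
    likelihood=4,
    impact=4,
    mitigations=["Scheduled retraining", "Live accuracy dashboards"],
)
print(drift.score, drift.needs_escalation())  # 16 True

Whatever structure you choose, the point is the same: each risk scenario should carry an explicit likelihood and impact estimate, a review date, and linked mitigations, so the register can feed the recurring analysis and post-market monitoring loop the Act describes.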