From the course: Preparing for the EU AI Act: A Conversation with Jon Adams

A risk-based approach to regulation

- I think that's a really interesting component that I'm learning now as you're describing it to me, and correct me if I'm wrong: the main driving force here is how it's affecting people, and you keep using the word risk. So whether something is good or bad or needs to be regulated, most of the attention is on what risk it brings to the public. - Right, exactly. And one thing to think about is how it's constructed. It also assigns obligations to different parties within the value chain. So you can imagine that if you are building a high-risk AI system that can be used for a variety of different things, then as the person building the system, who knows what models are involved and how the system has been built, you have a variety of obligations related to transparency, risk management, data quality assurance, et cetera. A significant part of the AI Act is outlining what those requirements are. Conversely, if you are a customer who uses the product of that AI developer, your obligations are different. You have fewer obligations, and many of them are keyed to the follow-on effects of the provider obligations. If the AI developer has to do all these different things related to risk management and create notes explaining how to use the AI appropriately, the person or organization deploying the AI then has to basically follow those instructions, do what the instructions say. So it's really value chain driven in terms of that risk management.