The Hardest Question an AI Regulator Asked Me... (Principal-Agent problem)
I had a fascinating conversation with an AI regulator last year who asked:
"How do we tell our covered entities what to do without it seeming like just more compliance?"
It took me the better part of a year to figure out a solid response.
The regulator's question perfectly captures the "Principal-Agent problem" in AI regulation: a disconnect between the regulators' goals (managing societal risk) and the covered entities' perception of regulation (a compliance burden).
After meeting with regulators from a half-dozen countries, here's the landscape I see emerging...
THE GOOD NEWS:
G1. Regulators are genuinely eager to learn and get this right.
G2. Unlike with past technologies, they aren't playing catch-up to nearly the same degree; the gap between them and industry is much smaller.
G3. We don't have to start from scratch. Many existing laws, regulations, and standards can be extrapolated to apply to AI.
THE NOT-SO-GOOD NEWS:
NG1. Both regulators and their covered entities still have much to learn about effective, scalable controls for AI risk.
NG2. They face a "Goldilocks" dilemma: guidance can't be so broad that it's useless, nor so specific that it's often irrelevant.
NG3. Many still operate under the mirage that more controls automatically equate to greater risk mitigation.
RECOMMENDATIONS
So, how do we build on the positives and solve the challenges? Shift from a static checklist mindset to a dynamic system for genuine risk mitigation. Below are a few recommendations; each subsequent one gets us closer to reconciling the Principal-Agent problem.
R1. Sensibly incorporate 3rd parties. Any guidance must include an entity's 3rd parties. Why? Because critical components of any end-to-end AI system (the DASF identifies 12, from data sources to model hosting) are often managed by external vendors. You can't manage risk if you're ignoring a huge piece of the AI system.
R2. Prioritize risks over controls. This directly tackles the "more is better" mirage. The key is to first identify the specific risks that matter (frameworks like the DASF, the Databricks AI Security Framework, enumerate 62 AI risks) and then apply only the controls that demonstrably mitigate those risks (the #DASF maps those 62 risks to 64 controls).
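To make the idea concrete, here is a minimal sketch of risk-first control selection: start from the risks a given system actually faces, then take the union of controls mapped to those risks, rather than deploying a framework's full control list. The risk and control IDs below are invented placeholders, not actual DASF entries.

```python
# Hypothetical risk-to-control mapping; IDs are illustrative, not DASF's.
RISK_TO_CONTROLS = {
    "R-data-poisoning": {"C-dataset-signing", "C-ingest-validation"},
    "R-prompt-injection": {"C-input-filtering", "C-output-guardrails"},
    "R-model-theft": {"C-endpoint-authz", "C-rate-limiting"},
}

def controls_for(identified_risks):
    """Union of controls that demonstrably map to the risks we actually face."""
    selected = set()
    for risk in identified_risks:
        selected |= RISK_TO_CONTROLS.get(risk, set())
    return selected

# This system faces only two of the three risks, so we deploy
# four targeted controls instead of all six in the catalog.
print(sorted(controls_for(["R-data-poisoning", "R-prompt-injection"])))
```

The point of the sketch is the direction of the lookup: risks drive control selection, so controls with no identified risk behind them never get deployed in the first place.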
R3. Design for scalable operations. This builds directly on R2. If the "right" controls can't be practically deployed and managed at scale, they will be relegated to "compliance theater": activities that look good on paper but don't actually reduce risk, and can even increase it by perpetuating Shadow AI (see my LinkedIn post from last month on the AI Security Doom Loop). The focus must be on operational reality, not theoretical perfection.
R4. Monitor efficacy of guidance. Regulations shouldn't be a one-dimensional list of controls; they need a mechanism to ensure controls are actually working. This means explicitly assessing the extent to which controls mitigate risks (as HITRUST's PRISMA model does). Bonus points for integrating a quantitative risk measurement approach like #FAIR (Factor Analysis of Information Risk).
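A FAIR-style analysis, at its simplest, quantifies a risk as annualized loss exposure: simulated loss event frequency times loss magnitude. Here is a toy Monte Carlo sketch of that idea, useful for comparing a control's effect in dollars rather than as a checkbox. All frequencies and loss ranges are invented placeholders; a real FAIR analysis decomposes these factors much further.

```python
import random

def simulate_ale(freq_min, freq_max, loss_min, loss_max,
                 trials=10_000, seed=42):
    """Rough annualized loss exposure: per trial, sample a count of loss
    events and a loss magnitude for each event, then average over trials."""
    rng = random.Random(seed)  # fixed seed for reproducibility
    total = 0.0
    for _ in range(trials):
        events = rng.randint(freq_min, freq_max)  # loss events this year
        total += sum(rng.uniform(loss_min, loss_max) for _ in range(events))
    return total / trials

# Placeholder scenario: a control that reduces event frequency.
baseline = simulate_ale(1, 6, 50_000, 400_000)      # no control in place
with_control = simulate_ale(0, 2, 50_000, 400_000)  # control deployed
print(f"Estimated risk reduction: ${baseline - with_control:,.0f}/year")
```

Framing control efficacy this way lets a regulator (or covered entity) ask "how much loss exposure did this control remove?" instead of "is this control present?".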
R5. Embrace imperfection & prune. Let's be honest: 20-40% of any new guidance will likely miss the mark. Regulators should communicate this upfront and build a mechanism for rapid learning. The most powerful step? Actively removing controls that show little or no efficacy. This single act proves the goal isn't compliance for its own sake, but a shared interest in effective risk management. It's the ultimate answer to the regulator's tough question that opened this post.
This approach turns compliance from an adversarial burden into a helpful mitigator of actual risks.
Which of the above do you think I should emphasize most for your industry? What am I missing?
Omar Khawaja, don't you think that if regulators want safer AI, it makes more sense to mandate a few simple metrics (time to detect AI-caused incidents, time to rollback, guardrail coverage, and red-team frequency) and then publish the scores? I believe markets move faster when risk is quantified, not when paperwork grows. Over-regulation is killing the market, in my opinion. I would always prefer mandated public disclosure of AI-related incidents; post-mortems will drive continuous improvement, especially when we don't fully understand AI and new models are released weekly!