"this toolkit shows you how to identify, monitor and mitigate the ‘hidden’ behavioural and organisational risks associated with AI roll-outs. These are the unintended consequences that can arise from how well-intentioned people, teams and organisations interact with AI solutions. Who is this toolkit for? This toolkit is designed for individuals and teams responsible for implementing AI tools and services within organisations and those involved in AI governance. It is intended to be used once you have identified a clear business need for an AI tool and want to ensure that your tool is set up for success. If an AI solution has already been implemented within your organisation, you can use this toolkit to assess risks posed and design a holistic risk management approach. You can use the Mitigating Hidden AI Risks Toolkit to: • Assess the barriers your target users and organisation may experience to using your tool safely and responsibly • Pre-empt the behavioural and organisational risks that could emerge from scaling your AI tools • Develop robust risk management approaches and mitigation strategies to support users, teams and organisations to use your tool safely and responsibly • Design effective AI safety training programmes for your users • Monitor and evaluate the effectiveness of your risk mitigations to ensure you not only minimise risk, but maximise the positive impact of your tool for your organisation" A very practical guide to behavioural considerations in managing risk by Dr Moira Nicolson and others at the UK Cabinet Office, which builds on the MIT AI Risk Repository.
How to Manage AI Risks in Law Firms
Explore top LinkedIn content from expert professionals.
Summary
Managing AI risks in law firms involves understanding and addressing potential threats like data privacy breaches, shadow AI usage, compliance issues, and ethical challenges. By implementing robust governance, training, and clear policies, law firms can responsibly harness the power of AI while minimizing risks.
- Establish clear AI policies: Develop and communicate guidelines for approved tools, data usage, and output verification to ensure responsible AI use across your legal team.
- Provide ongoing training: Educate all team members, from junior to senior staff, on critical evaluation of AI outputs, focusing on verification, bias detection, and contextual understanding.
- Monitor and manage risks: Regularly assess AI systems for privacy and compliance risks, and adjust policies to align with evolving regulations and best practices.
-
A lot of companies think they’re “safe” from AI compliance risks simply because they haven’t formally adopted AI. But that’s a dangerous assumption—and it’s already backfiring for some organizations.
Here’s what’s really happening: employees are quietly using ChatGPT, Claude, Gemini, and other tools to summarize customer data, rewrite client emails, or draft policy documents. In some cases, they’re even uploading sensitive files or legal content to get a “better” response. The organization may not have visibility into any of it. This is what’s called Shadow AI—unauthorized or unsanctioned use of AI tools by employees.
Now, here’s what a #GRC professional needs to do about it:
1. Start with Discovery: Use internal surveys, browser activity logs (if available), or device-level monitoring to identify which teams are already using AI tools and for what purposes. No blame—just visibility. (A minimal log-scan sketch follows this post.)
2. Risk Categorization: Document the type of data being processed and match it to its sensitivity. Are they uploading PII? Legal content? Proprietary product info? If so, flag it.
3. Policy Design or Update: Draft an internal AI Use Policy. It doesn’t need to ban tools outright—but it should define:
• What tools are approved
• What types of data are prohibited
• What employees need to do to request new tools
4. Communicate and Train: Employees need to understand not just what they can’t do, but why. Use plain examples to show how uploading files to a public AI model could violate privacy law, leak IP, or introduce bias into decisions.
5. Monitor and Adjust: Once you’ve rolled out your first version of the policy, revisit it every 60–90 days. This field is moving fast—and so should your governance.
This can happen anywhere: in education, real estate, logistics, fintech, or nonprofits. You don’t need a team of AI engineers to start building good governance. You just need visibility, structure, and accountability. Let’s stop thinking of AI risk as something “only tech companies” deal with. Shadow AI is already in your workplace—you just haven’t looked yet.
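For the discovery step above, here is a minimal sketch of how a GRC team might surface Shadow AI usage from an exported proxy or browser-history log. The CSV column names, the domain list, and the file name are assumptions for illustration; adapt them to whatever your logging platform actually produces.

```python
import csv
from collections import Counter

# Illustrative list of consumer AI endpoints to look for; extend for your environment.
AI_DOMAINS = {
    "chat.openai.com", "chatgpt.com", "claude.ai",
    "gemini.google.com", "chat.deepseek.com", "copilot.microsoft.com",
}

def summarize_ai_usage(log_path: str) -> Counter:
    """Count visits to known AI tools per team from a proxy/browser log export.

    Assumes a CSV with 'user', 'team', and 'domain' columns -- adjust these
    to the fields your logging platform actually exports.
    """
    usage = Counter()
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            if row["domain"].lower() in AI_DOMAINS:
                usage[(row["team"], row["domain"])] += 1
    return usage

if __name__ == "__main__":
    # Hypothetical export file name; top 10 team/tool pairs by visit count.
    for (team, domain), hits in summarize_ai_usage("proxy_log.csv").most_common(10):
        print(f"{team:<20} {domain:<25} {hits} visits")
```

This only catches traffic to the handful of domains you list, so surveys and device-level tooling still matter; treat the output as a starting point for conversations, not a complete picture.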
-
⚠️ Privacy Risks in AI Management: Lessons from Italy’s DeepSeek Ban ⚠️
Italy’s recent ban on #DeepSeek over privacy concerns underscores the need for organizations to integrate stronger data protection measures into their AI Management System (#AIMS), AI Impact Assessment (#AIIA), and AI Risk Assessment (#AIRA). Ensuring compliance with #ISO42001, #ISO42005 (DIS), #ISO23894, and #ISO27701 (DIS) guidelines is now more material than ever.
1. Strengthening AI Management Systems (AIMS) with Privacy Controls
🔑 Key Considerations:
🔸 ISO 42001 Clause 6.1.2 (AI Risk Assessment): Organizations must integrate privacy risk evaluations into their AI management framework.
🔸 ISO 42001 Clause 6.1.4 (AI System Impact Assessment): Requires assessing AI system risks, including personal data exposure and third-party data handling.
🔸 ISO 27701 Clause 5.2 (Privacy Policy): Calls for explicit privacy commitments in AI policies to ensure alignment with global data protection laws.
🪛 Implementation Example: Establish an AI Data Protection Policy that incorporates ISO 27701 guidelines and explicitly defines how AI models handle user data.
2. Enhancing AI Impact Assessments (AIIA) to Address Privacy Risks
🔑 Key Considerations:
🔸 ISO 42005 Clause 4.7 (Sensitive Use & Impact Thresholds): Mandates defining thresholds for AI systems handling personal data.
🔸 ISO 42005 Clause 5.8 (Potential AI System Harms & Benefits): Identifies risks of data misuse, profiling, and unauthorized access.
🔸 ISO 27701 Clause A.1.2.6 (Privacy Impact Assessment): Requires documenting how AI systems process personally identifiable information (#PII).
🪛 Implementation Example: Conduct a Privacy Impact Assessment (#PIA) during AI system design to evaluate data collection, retention policies, and user consent mechanisms. (A structured PIA-record sketch follows this post.)
3. Integrating AI Risk Assessments (AIRA) to Mitigate Regulatory Exposure
🔑 Key Considerations:
🔸 ISO 23894 Clause 6.4.2 (Risk Identification): Calls for AI models to identify and mitigate privacy risks tied to automated decision-making.
🔸 ISO 23894 Clause 6.4.4 (Risk Evaluation): Evaluates the consequences of noncompliance with regulations like #GDPR.
🔸 ISO 27701 Clause A.1.3.7 (Access, Correction, & Erasure): Ensures AI systems respect user rights to modify or delete their data.
🪛 Implementation Example: Establish compliance audits that review AI data handling practices against evolving regulatory standards.
➡️ Final Thoughts: Governance Can’t Wait
The DeepSeek ban is a clear warning that privacy safeguards in AIMS, AIIA, and AIRA aren’t optional. They’re essential for regulatory compliance, stakeholder trust, and business resilience.
🔑 Key actions:
◻️ Adopt AI privacy and governance frameworks (ISO 42001 & 27701).
◻️ Conduct AI impact assessments to preempt regulatory concerns (ISO 42005).
◻️ Align risk assessments with global privacy laws (ISO 23894 & 27701).
Privacy-first AI shouldn’t be seen just as a cost of doing business; it’s actually your new competitive advantage.
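As one way to operationalise the PIA implementation example above, here is a minimal sketch of a structured PIA record. The field names and the threshold rule are illustrative mappings to the clauses cited in the post, not definitions taken from the ISO texts; map them to your own AIMS/AIIA templates.

```python
from dataclasses import dataclass, field

@dataclass
class AIPrivacyImpactAssessment:
    """Structured PIA record for an AI system; fields are illustrative."""
    system_name: str
    processes_pii: bool                      # ISO 27701 A.1.2.6: document PII processing
    data_categories: list[str] = field(default_factory=list)
    retention_days: int = 0
    user_consent_mechanism: str = "none"     # e.g. "explicit opt-in", "contractual"
    supports_erasure_requests: bool = False  # ISO 27701 A.1.3.7: access/correction/erasure
    identified_harms: list[str] = field(default_factory=list)  # ISO 42005 Clause 5.8

    def exceeds_sensitive_use_threshold(self) -> bool:
        # ISO 42005 Clause 4.7 asks you to define your own thresholds;
        # this particular rule is purely illustrative.
        return self.processes_pii and not self.supports_erasure_requests

# Hypothetical system name and values, for illustration only.
pia = AIPrivacyImpactAssessment(
    system_name="contract-review-assistant",
    processes_pii=True,
    data_categories=["client contact details", "matter descriptions"],
    retention_days=30,
    user_consent_mechanism="contractual",
)
if pia.exceeds_sensitive_use_threshold():
    print("Escalate: sensitive-use threshold exceeded, PIA review required.")
```

Keeping the record as structured data makes it straightforward to feed the compliance audits mentioned under AIRA, since every deployed system carries the same machine-readable fields.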
-
Are you verifying and critically evaluating the output of AI before accepting it?
A recent study by Carnegie Mellon University and Microsoft Research that focused on knowledge workers and how they interact with AI-generated content in the workplace found that using AI can lead to diminished critical engagement – but only for certain workers and certain kinds of tasks.
➡️ For routine or lower-stakes tasks, 62% of participants engaged in less critical thinking when using AI.
➡️ Those who had greater confidence in their expertise were 27% more likely to critically assess AI outputs instead of accepting them at face value.
“More likely to critically assess” means:
💡 𝐅𝐚𝐜𝐭-𝐜𝐡𝐞𝐜𝐤𝐢𝐧𝐠 𝐀𝐈 𝐨𝐮𝐭𝐩𝐮𝐭𝐬 by cross-referencing external sources.
💡 𝐀𝐧𝐚𝐥𝐲𝐳𝐢𝐧𝐠 𝐛𝐢𝐚𝐬𝐞𝐬 that may be present in AI-generated information.
💡 𝐄𝐝𝐢𝐭𝐢𝐧𝐠 𝐚𝐧𝐝 𝐫𝐞𝐟𝐢𝐧𝐢𝐧𝐠 𝐀𝐈-𝐠𝐞𝐧𝐞𝐫𝐚𝐭𝐞𝐝 𝐜𝐨𝐧𝐭𝐞𝐧𝐭 to better align with context and objectives.
💡 𝐔𝐬𝐢𝐧𝐠 𝐀𝐈 𝐚𝐬 𝐚 𝐛𝐫𝐚𝐢𝐧𝐬𝐭𝐨𝐫𝐦𝐢𝐧𝐠 𝐭𝐨𝐨𝐥 rather than a definitive answer generator.
Employing less critical thinking meant AI-generated content was copied and used without verification, or relied on for decision-making without questioning its logic. In these cases, users were assuming accuracy without contextual understanding.
𝑾𝒉𝒂𝒕 𝒅𝒐𝒆𝒔 𝒕𝒉𝒊𝒔 𝒎𝒆𝒂𝒏?
🚨 Knowledge workers who use AI when junior in their careers, and especially when engaged in lower-value work without understanding its context, are more likely to rely on it without verifying or questioning output.
📖 Those who are senior enough to understand the context and have confidence in their own knowledge will verify and check AI output before using or relying on it.
𝑾𝒉𝒂𝒕 𝒅𝒐𝒆𝒔 𝒊𝒕 𝒎𝒆𝒂𝒏 𝒇𝒐𝒓 𝒕𝒉𝒆 𝒍𝒆𝒈𝒂𝒍 𝒊𝒏𝒅𝒖𝒔𝒕𝒓𝒚?
Training and education are more important than ever before. Junior lawyers will be disproportionately affected by this shift in critical thinking. The fact is they will be using AI for work whether or not your workplace has a policy in place, or even whether it has licensed an AI solution.
To ensure responsible use of AI, and encourage independent thought in your lawyers:
✅ Provide regular education on why verification, analysis, and refinement of AI output is necessary (and write this into your policies on AI use).
✅ Don’t sleep on lawyer training that reinforces the importance of understanding context and asking good questions.
✅ Train senior lawyers to evaluate junior work more critically, recognizing that AI may have played a part in its creation.
✅ Encourage supervisors to share context with juniors when instructing them.
✅ Regardless of your seniority, if you are a lawyer or legal professional engaged in routine tasks, remind yourself to remain critically engaged if you're using AI.
This applies to small firms and legal departments just as it does to large ones. Link to study in comments.
#law #artificialintelligence #GenAI #lawyers
-
ISO 5338 has key AI risk management considerations useful to security and compliance leaders. It's a non-certifiable standard laying out best practices for the AI system lifecycle. And it’s related to ISO 42001, because control A6 from Annex A specifically mentions ISO 5338.
Here are some key things to think about at every stage:
INCEPTION
-> Why do I need a non-deterministic system?
-> What types of data will the system ingest?
-> What types of outputs will it create?
-> What is the sensitivity of this info?
-> Any regulatory requirements?
-> Any contractual ones?
-> Is this cost-effective?
DESIGN AND DEVELOPMENT
-> What type of model? Linear regressor? Neural net?
-> Does it need to talk to other systems (an agent)?
-> What are the consequences of bad outputs?
-> What is the source of the training data?
-> How / where will data be retained?
-> Will there be continuous training?
-> Do we need to moderate outputs?
-> Is the system browsing the internet?
VERIFICATION AND VALIDATION
-> Confirm the system meets business requirements.
-> Consider external review (per NIST AI RMF).
-> Do red-teaming and penetration testing.
-> Do unit, integration, and user acceptance testing.
DEPLOYMENT
-> Would deploying the system be within our risk appetite?
-> If not, who is signing off? What is the justification?
-> Train users and impacted parties.
-> Update the shared security model.
-> Publish documentation.
-> Add to the asset inventory. (A minimal inventory-record sketch follows this post.)
OPERATION AND MONITORING
-> Do we have a vulnerability disclosure program?
-> Do we have a whistleblower portal?
-> How are we tracking performance?
-> Model drift?
CONTINUOUS VALIDATION
-> Is the system still meeting our business requirements?
-> If there is an incident or vulnerability, what do we do?
-> What are our legal disclosure requirements?
-> Should we disclose even more?
-> Do regular audits.
RE-EVALUATION
-> Has the system exceeded our risk appetite?
-> If there was an incident, do a root cause analysis.
-> Do we need to change policies?
-> Revamp procedures?
RETIREMENT
-> Is there a business need to retain the model or data? Legal?
-> Delete everything we don’t need, including backups.
-> Audit the deletion.
Are you using ISO 5338 for AI risk management?
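Picking up the "add to the asset inventory" item in the deployment stage, here is a minimal sketch of an inventory record that carries a few of the checklist questions forward as fields. Stage names follow the post; the field names, blocker logic, and example values are assumptions for illustration, not anything prescribed by ISO 5338.

```python
from dataclasses import dataclass
from enum import Enum

class LifecycleStage(Enum):
    # Stages follow the ISO 5338-style lifecycle walked through above.
    INCEPTION = "inception"
    DESIGN = "design_and_development"
    VALIDATION = "verification_and_validation"
    DEPLOYMENT = "deployment"
    OPERATION = "operation_and_monitoring"
    RETIREMENT = "retirement"

@dataclass
class AISystemRecord:
    """One entry in an AI asset inventory; fields are illustrative."""
    name: str
    stage: LifecycleStage
    data_sensitivity: str        # e.g. "public", "confidential", "client-privileged"
    within_risk_appetite: bool
    risk_owner: str              # who signs off if the risk appetite is exceeded
    users_trained: bool = False
    documentation_published: bool = False

    def deployment_blockers(self) -> list[str]:
        """Return open items blocking deployment, per the checklist above."""
        blockers = []
        if not self.within_risk_appetite and not self.risk_owner:
            blockers.append("exceeds risk appetite with no sign-off")
        if not self.users_trained:
            blockers.append("users and impacted parties not trained")
        if not self.documentation_published:
            blockers.append("documentation not published")
        return blockers

# Hypothetical system, for illustration only.
record = AISystemRecord(
    name="intake-triage-model",
    stage=LifecycleStage.DEPLOYMENT,
    data_sensitivity="confidential",
    within_risk_appetite=True,
    risk_owner="Head of Risk",
)
print(record.deployment_blockers())  # two open items: training and documentation
```

An inventory like this also gives the later stages (operation, re-evaluation, retirement) a single place to record whether the system still sits within risk appetite.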
-
🚨 “Why Legal Teams Are Pumping the Brakes on AI Adoption – And What Consultants Can Do About It” 🚨
As a consultant working at the intersection of tech and law, I’ve seen firsthand the glaring gap between the promise of AI solutions (including generative AI) and the cautious reality of in-house legal teams. While AI could revolutionize contract review, compliance, and risk management, many legal departments remain skeptical—and their hesitations are far from irrational.
Here’s what’s holding them back:
1. "We Can’t Afford a Hallucination Lawsuit"
Legal teams live in a world where accuracy is non-negotiable. One AI-generated error (like the fake citations in the Mata v. Avianca case) could mean sanctions, reputational ruin, or regulatory blowback. Until AI tools consistently deliver flawless outputs, “trust but verify” will remain their mantra.
2. "Our Data Isn’t Just Sensitive – It’s Existential"
Confidentiality is the lifeblood of legal work. The fear of leaks (remember Samsung’s ChatGPT code breach?) or adversarial hacks makes teams wary of inputting case strategies or client data into AI systems—even “secure” ones.
3. "Bias + Autonomy = Liability Nightmares"
Legal ethics demand fairness, but AI’s hidden biases (e.g., flawed sentencing algorithms) and the “black box” nature of agentic AI clash with transparency requirements. As one GC mentioned recently: “How do I explain to a judge that an AI I can’t audit made the call?”
4. "Regulators Are Watching… and We’re in the Crosshairs"
With the EU AI Act classifying legal AI as high-risk and global frameworks evolving daily, legal teams fear adopting tools that could become non-compliant overnight.
Bridging the Trust Gap: A Consultant’s Playbook
To move the needle, consultants must:
✅ Start small: Pilot AI on low-stakes tasks (NDA drafting, doc review) to prove reliability without existential risk.
✅ Demystify the tech: Offer bias audits, explainability frameworks, and clear liability protocols.
✅ Partner, don’t push: Co-design solutions with legal teams—they know their pain points better than anyone.
The future isn’t about replacing lawyers with bots; it’s about augmenting human expertise with AI precision. But until we address these fears head-on, adoption will lag behind potential.
Thoughts? How are you navigating the AI-legal trust gap? 👇
#LegalTech #AIEthics #FutureOfLaw #LegalInnovation #cmclegalstrategies
-
Your employees uploaded confidential data to their personal ChatGPT instance. 🤖 Oops! 💼 Now it's immortalized in the AI's memory forever. 🧠
Generative AI is a time-saver, but it comes with risks. So, how do we harness AI without leaking secrets? Introduce an Acceptable Use of AI Policy.
Here’s what the policy should cover:
1️⃣ Approved Tools: List what tools employees are allowed to use. Even if you don’t provide a Teams account for the tools, you can still explicitly list which tools you permit employees to use individually.
2️⃣ Data Rules: Define what data can and cannot be entered into AI tools. For example: you might prohibit customer contact information from being input. (A minimal pre-submission check is sketched after this post.)
3️⃣ Output Handling: All AI tools are quick to remind you that they can be wrong! Provide direct instruction on how employees are expected to fact-check outputs.
Banning employees from using AI at work is a foolish decision. By creating a solid policy, you’ll enable and empower employees to find ways to use this time-saving tech without compromising your security.
Read my full article for more info about the risks presented by employee AI use and how to best mitigate them.
#AI #cybersecurity #fciso https://lnkd.in/gi9c2sqv
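To make the Data Rules section concrete, here is a minimal sketch of a pre-submission check that screens a prompt for prohibited data before it goes to an approved tool. The regex patterns and the client/matter-number format are illustrative assumptions; a real deployment would lean on a proper DLP or PII-detection service rather than a handful of regexes.

```python
import re

# Illustrative patterns only; extend or replace with a real DLP/PII service.
PROHIBITED_PATTERNS = {
    "email address": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "phone number": re.compile(r"\b(?:\+?\d{1,3}[ -]?)?(?:\(?\d{3}\)?[ -]?)\d{3}[ -]?\d{4}\b"),
    "client/matter number": re.compile(r"\b[A-Z]{2,4}-\d{4,6}\b"),  # hypothetical internal format
}

def check_prompt(text: str) -> list[str]:
    """Return the prohibited data categories found in a prompt before it is
    sent to an approved AI tool; an empty list means no obvious violations."""
    return [label for label, pattern in PROHIBITED_PATTERNS.items() if pattern.search(text)]

violations = check_prompt("Please summarise the dispute for jane.doe@client.com, matter AB-12345.")
if violations:
    print("Blocked: prompt contains", ", ".join(violations))
```

Even a crude gate like this turns the policy's data rules from a document people skim into feedback they see at the moment of use.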
-
😬 Lawyers at Morgan & Morgan got sanctioned in late February for citing non-existent cases generated by their in-house AI database. Of the nine cases cited in the motions, eight were non-existent.
⚒️ This isn't a lesson just about checking case citations—🚨 it's what happens when generative AI implementation goes bad.
For in-house counsel 💼, the deeper issue goes beyond fact-checking. It's about how organizations roll out AI tools without proper guardrails. The attorneys admitted it was their first time using the system and didn't verify anything before filing with the court.
What can in-house legal teams learn from this? 🧠
1️⃣ AI tools aren't second nature to use. Companies need clear intention behind the how, what, and why of generative AI usage.
2️⃣ If there's a risk someone will use it badly... expect it will happen. Work backwards with training to be proactive. ⚠️
3️⃣ Change management isn't just for the "innovation team." Legal needs to be involved to spot compliance and operational challenges.
4️⃣ Learning AI fundamentals shouldn't be optional. You wouldn't supervise someone in an area where you lack knowledge—why treat AI differently? 🤔
5️⃣ If AI saved you time creating content, you HAVE enough time to check the sources. Many models can even organize source information in a chart for easier review. ⏱️ (A small citation-extraction sketch follows this post.)
🔖 Bookmark this for later.
#InHouseCounsel #GenerativeAI #LegalTech #AIForLawyers #unboxingaiforlawyers #LegalInnovation #LawyersAndAI #AIAdoption #AIinLaw #generalcounsel
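On point 5, here is a minimal sketch of pulling citation strings out of an AI-generated draft so that each one gets checked by a human (or a citator) before filing. The reporter pattern is an illustrative assumption: it covers only a few common US reporter formats and does nothing to confirm that a case actually exists.

```python
import re

# Rough US reporter-style citation pattern, e.g. "123 F.3d 456" or "598 U.S. 471".
# Illustrative only; it will miss many formats and is no substitute for a citator.
CITATION_RE = re.compile(
    r"\b\d{1,4}\s+(?:U\.S\.|S\. Ct\.|F\.(?:2d|3d|4th)?|F\. Supp\.(?: 2d| 3d)?)\s+\d{1,4}\b"
)

def citations_to_verify(draft: str) -> list[str]:
    """Extract citation strings from an AI-generated draft so each one can be
    verified against a real source before anything is filed."""
    return sorted(set(CITATION_RE.findall(draft)))

draft = "See Smith v. Jones, 123 F.3d 456 (9th Cir. 1997); cf. 598 U.S. 471."
for cite in citations_to_verify(draft):
    print("VERIFY BEFORE FILING:", cite)
```

The value is less in the regex than in the workflow: every citation in an AI-assisted draft gets listed and someone signs off on each one before it reaches the court.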