Check out this massive global research study on generative AI use, covering over 48,000 people in 47 countries - excellent work by KPMG and the University of Melbourne! Key findings:

𝗖𝘂𝗿𝗿𝗲𝗻𝘁 𝗚𝗲𝗻 𝗔𝗜 𝗔𝗱𝗼𝗽𝘁𝗶𝗼𝗻
- 58% of employees intentionally use AI regularly at work (31% weekly or daily)
- General-purpose generative AI tools are the most common (73% of AI users)
- 70% use free public AI tools vs. 42% using employer-provided options
- Only 41% of organizations have any policy on generative AI use

𝗧𝗵𝗲 𝗛𝗶𝗱𝗱𝗲𝗻 𝗥𝗶𝘀𝗸 𝗟𝗮𝗻𝗱𝘀𝗰𝗮𝗽𝗲
- 50% of employees admit uploading sensitive company data to public AI tools
- 57% avoid revealing when they use AI, or present AI content as their own
- 66% rely on AI outputs without critical evaluation
- 56% report making mistakes due to AI use

𝗕𝗲𝗻𝗲𝗳𝗶𝘁𝘀 𝘃𝘀. 𝗖𝗼𝗻𝗰𝗲𝗿𝗻𝘀
- Most report performance benefits: efficiency, quality, innovation
- But AI has mixed impacts on workload, stress, and human collaboration
- Half use AI instead of collaborating with colleagues
- 40% sometimes feel they cannot complete work without AI help

𝗧𝗵𝗲 𝗚𝗼𝘃𝗲𝗿𝗻𝗮𝗻𝗰𝗲 𝗚𝗮𝗽
- Only half of organizations offer AI training or responsible-use policies
- 55% feel adequate safeguards exist for responsible AI use
- AI literacy is the strongest predictor of both use and critical engagement

𝗚𝗹𝗼𝗯𝗮𝗹 𝗜𝗻𝘀𝗶𝗴𝗵𝘁𝘀
- Countries like India, China, and Nigeria lead global AI adoption
- Emerging economies report higher rates of AI literacy (64% vs. 46%)

𝗖𝗿𝗶𝘁𝗶𝗰𝗮𝗹 𝗤𝘂𝗲𝘀𝘁𝗶𝗼𝗻𝘀 𝗳𝗼𝗿 𝗟𝗲𝗮𝗱𝗲𝗿𝘀
- Do you have clear policies on appropriate generative AI use?
- How are you supporting transparent disclosure of AI use?
- What safeguards exist to prevent sensitive data leakage to public AI tools?
- Are you providing adequate training on responsible AI use?
- How do you balance AI efficiency with maintaining human collaboration?

𝗔𝗰𝘁𝗶𝗼𝗻 𝗜𝘁𝗲𝗺𝘀 𝗳𝗼𝗿 𝗢𝗿𝗴𝗮𝗻𝗶𝘇𝗮𝘁𝗶𝗼𝗻𝘀
- Develop clear generative AI policies and governance frameworks
- Invest in AI literacy training focused on responsible use
- Create psychological safety for transparent disclosure of AI use
- Implement monitoring systems for sensitive data protection
- Proactively design workflows that preserve human connection and collaboration

𝗔𝗰𝘁𝗶𝗼𝗻 𝗜𝘁𝗲𝗺𝘀 𝗳𝗼𝗿 𝗜𝗻𝗱𝗶𝘃𝗶𝗱𝘂𝗮𝗹𝘀
- Critically evaluate all AI outputs before using them
- Be transparent about your AI tool usage
- Learn your organization's AI policies and follow them (if they exist!)
- Balance AI efficiency with maintaining your unique human skills

You can find the full report here: https://lnkd.in/emvjQnxa

All of this is a heavy focus for me within Advisory (AI literacy/fluency, AI policies, responsible and effective use, etc.). Let me know if you'd like to connect and discuss. 🙏

#GenerativeAI #WorkplaceTrends #AIGovernance #DigitalTransformation
How to Reduce Generative AI Risks in Organizations
Explore top LinkedIn content from expert professionals.
Summary
Reducing generative AI risks in organizations means putting safeguards in place to manage how AI tools are used, prevent sensitive data exposure, and ensure trustworthy outcomes. Generative AI refers to technology that creates new content, such as text or images, from patterns learned in existing data. That capability brings unique risks, including data leaks, misinformation, and loss of accountability.
- Set clear policies: Develop and communicate guidelines about what data can be shared with AI tools and which activities are permitted so employees know how to use AI safely.
- Invest in education: Teach your team about the strengths and limits of generative AI, including training for responsible use and understanding potential threats such as data leaks or inaccurate outputs.
- Strengthen data governance: Build systems to track, classify, and safeguard company data so you can confidently trace where information originated and ensure that AI-generated results are reliable and secure.
As organizations transition from pilots to enterprise-wide deployment of Generative and Agentic AI, it's crucial to recognize that GAI risks differ significantly from traditional software risks. It's worth going back to basics, and the 2024 Generative AI Profile from the National Institute of Standards and Technology (NIST) does a great job! 🌐

Here are the four highest-impact risks and the mitigation actions every organization should implement:

1. Systemic Risk: Algorithmic Monocultures & Ecosystem-Level Failures
When multiple industries depend on the same foundation models, a single unexpected model behavior can lead to correlated failures across the ecosystem.
⚡ Mitigation:
- Build model diversity and avoid single-model dependencies.
- Maintain fallback systems and contingency workflows.
- Apply stress tests that simulate sector-wide shocks.

2. Human-Originating Risks (Misuse, Over-Trust, Manipulation)
Many GAI incidents stem from human behavior, including misuse, over-reliance, indirect prompt injection, and flawed assumptions.
⚡ Mitigation:
- Implement continuous user education on limitations and safe use.
- Enforce access controls, privilege separation, and plugin vetting.
- Maintain audit trails and logging to identify misuse early.

3. Content Integrity Risks (Hallucinations, Synthetic Media, Provenance Failure)
GAI increases the scale and believability of fabricated content, from medical misinformation to deepfake-enabled harms.
⚡ Mitigation:
- Invest in content provenance, watermarking, and metadata tracking.
- Require pre-deployment testing for hallucination profiles across contexts.
- Use cross-model verification before high-stakes outputs are acted upon.

4. Security Risks (Prompt Injection, Data Leakage, Model Extraction)
NIST highlights increasingly sophisticated attack surfaces unique to LLMs: indirect prompt injection, data extraction, and plugin-initiated compromise.
⚡ Mitigation:
- Apply secure-by-design reviews for all LLM integration points.
- Red-team regularly using GAI-specific attack methods.
- Log inputs and outputs with incident-ready documentation so breaches can be traced (see the logging sketch after this post).

🔐 The bottom line: AI risk management is not a technical afterthought; it is now a core capability. Organizations that operationalize governance, provenance, testing, and incident disclosure (NIST's four focus pillars) will be the ones that deploy AI safely and at scale.

💬 If you'd like to explore Gen AI and Agentic AI risks, practical mitigation strategies, or how to operationalize the NIST AI RMF for your organization, feel free to comment or DM. Let's build safer AI systems together!

#AI #GenAI #AIGovernance #NIST #AIRMF #RiskManagement #AITrust #ResponsibleAI #AILeadership
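To make the audit-trail mitigation concrete, here is a minimal sketch of incident-ready input/output logging for LLM calls. It is an illustration under stated assumptions, not part of the NIST profile itself: the JSONL sink, the field names, and the `call_model` placeholder are all hypothetical, and hashes stand in for raw text so the log does not become a second copy of sensitive prompts.

```python
import hashlib
import json
import time
import uuid

def log_llm_interaction(log_path, model, user_id, prompt, response):
    """Append an incident-ready record of one LLM call to a JSONL audit log."""
    record = {
        "event_id": str(uuid.uuid4()),
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "model": model,
        "user_id": user_id,
        # Store hashes and lengths, not raw text, so the audit log itself
        # does not leak whatever sensitive data the prompt contained.
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "prompt_chars": len(prompt),
        "response_sha256": hashlib.sha256(response.encode()).hexdigest(),
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

def audited_call(model, user_id, prompt, call_model):
    """Wrap any model client (passed in as call_model) so every call is logged."""
    response = call_model(model, prompt)  # call_model is a stand-in for your client
    log_llm_interaction("llm_audit.jsonl", model, user_id, prompt, response)
    return response
```

Because every record carries a stable event ID and timestamp, a suspected breach can be traced back to specific calls without the log itself retaining sensitive content.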
-
AI can generate information that sounds accurate but is completely wrong. AI hallucinations can undermine trust in reporting, introduce compliance exposure, and create financial or operational losses. They can also surface sensitive data or misinform decisions that affect capital allocation, investor communication, and audit readiness.

AI hallucinations are not a signal to slow down innovation. They are a signal to strengthen your governance and controls. With a thoughtful risk management approach, leaders can understand uncertainty and build a more confident, resilient AI strategy.

Considerations for leaders to reduce AI hallucination risk:

1. Create a validation and review process for AI-generated financial outputs. Ensure that any AI-generated forecasts, variance analyses, reconciliations, or narrative summaries go through structured validation for source accuracy and logic.

2. Strengthen compliance and regulatory controls within AI workflows. Hallucinations can create errors that lead to noncompliance and regulatory exposure. Embed compliance checkpoints into AI-driven processes to avoid misstatements, inaccurate filings, or unintended disclosure.

3. Prioritize data governance, using high-quality, company-specific data to reduce the risk of fabricated or inaccurate outputs. This is critical for forecasting, scenario modeling, and automated reporting.

4. Use retrieval-augmented generation (RAG) and automated reasoning in workflows. Pairing these methods anchors AI-generated analysis in verified data sources rather than probability-based guesses (see the sketch after this post).

5. Enable filtering and moderation tools to block misleading or irrelevant results. Teams cannot work from flawed or unverified outputs. Filters help prevent misleading content from entering critical workflows or influencing decisions.

AI is gaining traction. Now is the time to formalize your AI risk mitigation approach. Start the discussion within your leadership team today: identify where AI is already influencing decision-making, assess your current controls, and define the safeguards you need next.

#RiskManagement #AI #Leaders
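As a companion to point 4, here is a minimal retrieval-augmented generation sketch. It is a toy under stated assumptions: an in-memory document store, naive keyword-overlap retrieval, and a prompt that forces the model to cite sources or abstain. A production system would use embeddings, a vector index, and curated financial data sources.

```python
from dataclasses import dataclass

@dataclass
class Document:
    source: str  # e.g. "Q3_actuals.xlsx" - hypothetical names for illustration
    text: str

def retrieve(query: str, documents: list[Document], top_k: int = 3) -> list[Document]:
    """Rank documents by naive keyword overlap with the query."""
    q_terms = set(query.lower().split())
    scored = [(len(q_terms & set(d.text.lower().split())), d) for d in documents]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [d for score, d in scored[:top_k] if score > 0]

def build_grounded_prompt(query: str, documents: list[Document]) -> str:
    """Anchor the model in verified sources and require citation or abstention."""
    context = "\n\n".join(f"[{d.source}]\n{d.text}" for d in retrieve(query, documents))
    return (
        "Answer using ONLY the sources below. Cite the source name for every "
        "figure, and reply 'insufficient data' if the sources do not cover the "
        f"question.\n\nSources:\n{context}\n\nQuestion: {query}"
    )
```

The abstention instruction matters as much as the retrieval: a grounded prompt that still lets the model guess when sources are missing reintroduces the hallucination risk.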
-
Your trade secrets just walked out the front door … and you might have held it open.

No employee—except the rare bad actor—means to leak sensitive company data. But it happens, especially when people use generative AI tools like ChatGPT to "polish a proposal," "summarize a contract," or "write code faster."

Here's the problem: unless you're using ChatGPT Team or Enterprise, it doesn't treat your data as confidential. According to OpenAI's own Terms of Use: "We do not use Content that you provide to or receive from our API to develop or improve our Services." But read the fine print: that protection does not apply unless you're on a business plan. For regular users, ChatGPT can use your prompts, including anything you type or upload, to train its large language models.

Translation:
That "confidential strategy doc" you asked ChatGPT to summarize?
That "internal pricing sheet" you wanted to reword for a client?
That "source code" you needed help debugging?
☠️ Poof. Trade secret status, gone. ☠️

If you don't take reasonable measures to maintain the secrecy of your trade secrets, they lose their protection as such.

So how do you protect your business?
1. Write an AI Acceptable Use Policy. Be explicit: what's allowed, what's off limits, and what's confidential.
2. Educate employees. Most folks don't realize that ChatGPT isn't a secure sandbox. Make sure they do.
3. Control tool access. Invest in an enterprise solution with confidentiality protections, and gate what leaves your perimeter (see the pre-send filter sketch after this post).
4. Audit and enforce. Treat ChatGPT the way you treat Dropbox or Google Drive: as a tool that can leak data if unmanaged.
5. Update your confidentiality and trade secret agreements. Include restrictions on AI disclosures.

AI isn't going anywhere. The companies that get ahead of its risk will be the ones still standing when the dust settles. If you don't have an AI policy and a plan to protect your data, you're not just behind—you're exposed.
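One way to operationalize step 3 is a pre-send check that blocks prompts containing obviously sensitive material before they ever reach a public AI tool. This is a minimal sketch: the regex patterns are illustrative stand-ins for your own data taxonomy, and `send_fn` represents whatever client call your tooling actually makes.

```python
import re

# Illustrative patterns only - tune these to your own classification scheme.
SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "api_key": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
    "confidential_marking": re.compile(r"(?i)\b(confidential|internal only|trade secret)\b"),
}

def flag_sensitive(prompt: str) -> list[str]:
    """Return the names of sensitive patterns found in a prompt."""
    return [name for name, pat in SENSITIVE_PATTERNS.items() if pat.search(prompt)]

def guarded_send(prompt: str, send_fn):
    """Refuse to forward a prompt that trips any sensitive-data pattern."""
    hits = flag_sensitive(prompt)
    if hits:
        raise PermissionError(f"Prompt blocked; flagged patterns: {hits}")
    return send_fn(prompt)  # send_fn is a placeholder for your AI client
```

Pattern matching will never catch everything (a strategy document rarely announces itself), so treat a filter like this as one layer alongside policy, training, and enterprise tooling.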
-
Many executive teams are treating AI governance as something new. New committees. New AI policies. New risk frameworks.

The reality: if your data governance is weak, your AI governance is performative. AI governance isn't a separate program. It is the direct expression of your data governance maturity. And the organizations pulling ahead understand that.

1/ You Cannot Govern What You Cannot Trace
AI amplifies the foundation it sits on. If your data is:
→ Fragmented
→ Poorly classified
→ Inconsistently defined
→ Lacking lineage visibility
your AI outputs will be:
→ Hard to explain
→ Difficult to audit
→ Risky to scale
If you cannot trace where data originated, how it was transformed, and who owns it, you cannot credibly govern AI built on top of it.

2/ Data Ownership Determines AI Accountability
AI governance often focuses on bias and oversight. But accountability starts earlier.
→ Who owns the data feeding the model?
→ Who defines quality thresholds?
→ Who approves usage rights?
If those answers are unclear, AI accountability will be too. Clear data ownership creates clear AI accountability.

3/ Governance Must Move From Documentation to Execution
Policy-heavy governance collapses under AI velocity. Leading organizations embed:
→ Automated classification
→ Real-time lineage tracking
→ System-enforced access controls
→ Policy execution within workflows
Governance must operate in the system (a minimal enforcement sketch follows this post).

4/ Unification Reduces Hidden Risk
When data definitions differ across business units, outputs become inconsistent. When systems are fragmented, risk visibility becomes partial. Unifying definitions, taxonomies, and metadata reduces hidden risk and accelerates deployment.

5/ AI-Specific Controls Only Work on a Strong DG Foundation
With mature DG, AI governance becomes achievable:
→ Human-in-the-loop review for regulated decisions
→ Bias and drift monitoring
→ Model performance tracking
→ Audit trails linking outputs to source data
Without strong DG, these controls are cosmetic.

6/ Trust Is Built on Data Discipline
AI adoption is fundamentally a trust issue. Employees won't rely on outputs they can't explain. Boards won't scale what they can't see. Data governance builds:
→ Accuracy
→ Transparency
→ Reproducibility
Trust is a structural outcome of disciplined governance.

7/ Governance Maturity Drives Risk-Adjusted Speed
Governance is often treated as a cost center. But governance maturity determines AI velocity. Organizations with strong DG can:
→ Deploy AI faster
→ Scale it safely
→ Withstand scrutiny
→ Respond quickly to issues
Their innovation is not just faster; it's safer.

Instead of asking: "Do we have AI governance?"
Ask: "Is our data governance mature enough to support AI at scale?"

Save this for future reference.
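As a concrete reading of point 3's "policy execution within workflows," here is a minimal sketch of a system-enforced access check: an AI use case must be explicitly allowed to read a dataset's classification level, or the call fails. The taxonomy, use-case names, and policy table are assumptions for illustration; a real deployment would wire this into a data catalog and IAM.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Dataset:
    name: str
    classification: str  # e.g. "public", "internal", "restricted"
    owner: str           # clear ownership -> clear accountability (point 2)

# Illustrative policy table: which classifications each AI use case may read.
ALLOWED = {
    "marketing_copilot": {"public"},
    "finance_forecaster": {"public", "internal"},
}

def authorize(use_case: str, dataset: Dataset) -> None:
    """Enforce the classification policy in code, not in a PDF."""
    if dataset.classification not in ALLOWED.get(use_case, set()):
        raise PermissionError(
            f"{use_case} may not read {dataset.name} "
            f"(classified {dataset.classification}; owner: {dataset.owner})"
        )

# Usage: authorize("marketing_copilot", Dataset("payroll", "restricted", "HR"))
# raises PermissionError before any data reaches the model.
```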
-
A lot of companies think they're "safe" from AI compliance risks simply because they haven't formally adopted AI. But that's a dangerous assumption—and it's already backfiring for some organizations.

Here's what's really happening: employees are quietly using ChatGPT, Claude, Gemini, and other tools to summarize customer data, rewrite client emails, or draft policy documents. In some cases, they're even uploading sensitive files or legal content to get a "better" response. The organization may not have visibility into any of it. This is what's called Shadow AI—unauthorized or unsanctioned use of AI tools by employees.

Here's what a #GRC professional needs to do about it:

1. Start with Discovery: Use internal surveys, browser activity logs (if available), or device-level monitoring to identify which teams are already using AI tools and for what purposes. No blame—just visibility. (A simple log-scan sketch follows this post.)

2. Risk Categorization: Document the type of data being processed and match it to its sensitivity. Are they uploading PII? Legal content? Proprietary product info? If so, flag it.

3. Policy Design or Update: Draft an internal AI Use Policy. It doesn't need to ban tools outright—but it should define:
• What tools are approved
• What types of data are prohibited
• What employees need to do to request new tools

4. Communicate and Train: Employees need to understand not just what they can't do, but why. Use plain examples to show how uploading files to a public AI model could violate privacy law, leak IP, or introduce bias into decisions.

5. Monitor and Adjust: Once you've rolled out your first version of the policy, revisit it every 60–90 days. This field is moving fast—and so should your governance.

This can happen anywhere: in education, real estate, logistics, fintech, or nonprofits. You don't need a team of AI engineers to start building good governance. You just need visibility, structure, and accountability.

Let's stop thinking of AI risk as something "only tech companies" deal with. Shadow AI is already in your workplace—you just haven't looked yet.
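For the discovery step, a first pass can be as simple as scanning existing web-gateway or proxy logs for known AI domains. A minimal sketch follows; the CSV column names and domain watchlist are assumptions you would adapt to your own gateway's export format.

```python
import csv
from collections import Counter

# Illustrative watchlist - extend with the AI tools relevant to your environment.
AI_DOMAINS = {"chat.openai.com", "chatgpt.com", "claude.ai", "gemini.google.com"}

def shadow_ai_report(proxy_log_csv: str) -> Counter:
    """Count visits to known AI tools per department from a proxy-log export.

    Assumes a CSV with 'department' and 'domain' columns (hypothetical schema).
    """
    usage = Counter()
    with open(proxy_log_csv, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            if row["domain"] in AI_DOMAINS:
                usage[(row["department"], row["domain"])] += 1
    return usage

# Usage:
# for (dept, domain), hits in shadow_ai_report("proxy.csv").most_common():
#     print(dept, domain, hits)
```

Remember the "no blame" framing: the output tells you where to start conversations and surveys, not whom to punish.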
-
🚨 Shadow AI is already inside many organizations - and it's rewriting their risk profiles in real time.

As organizations increasingly integrate generative AI into their workflows, a new governance challenge has emerged: Shadow AI. Employees across industries are bypassing sanctioned systems to experiment with external AI platforms - often with good intentions, but with serious consequences. From vibe coding to resume screening, these unsanctioned deployments are reshaping enterprise risk profiles in real time.

Yet for many organizations, Shadow AI remains a massive blind spot - one that quietly erodes compliance, security, and accountability, and is poised to trigger significant operational and regulatory fallout if left unaddressed.

Drawing on recent insights from KPMG and the Cloud Security Alliance, my latest "Legal in the Loop" Substack post delivers detailed, actionable strategies for AI governance professionals confronting the rapidly evolving challenges posed by Shadow AI, including the following tips:

🔹 Codify Acceptable Use - Then Operationalize It
🔹 Build an "Internal AI AppStore" - Curated, Secure, Role-Based
🔹 Stand Up an AI Labs Function - Safe Sandboxes for Innovation
🔹 Establish Cross-Functional Oversight
🔹 Deploy AI-Specific Security Controls - Beyond Traditional DLP
🔹 Educate and Empower - Not Just Train
🔹 Inventory and Continuously Monitor AI Tools
🔹 Beware the AI You Didn't Know You Adopted - Embedded AI in Existing Vendor Tools

🔗 Read the full post here: https://lnkd.in/ezZvkku9

#shadowai #legalintheloop #responsibleai #aigovernance #airisk
-
Generative AI is transforming the way organizations operate, but how can product managers and business leaders ensure its responsible use?

A new UC Berkeley playbook from Feb 4, 2025, "Responsible Use of Generative AI: A Playbook for Product Managers & Business Leaders", developed by researchers from the University of California, Berkeley (Berkeley AI Research Lab's Responsible AI Initiative), Stanford University, and the University of Oxford (Genevieve Smith, Natalia Luka, Merrick Osborne, Brian Lattimore, MBA, Jessica Newman, Brandie Nonnecke, PhD, and Prof Brent Mittelstadt, with support from Google), offers a practical framework to embed AI responsibility into day-to-day product development.

The Playbook is based on findings in the study "Responsible Generative AI Use by Product Managers: Recoupling Ethical Principles and Practices" (see: https://lnkd.in/g8Fua4sA) from January 2025, which analyzed 25 interviews and a survey of 300 PMs. The study identified 5 key challenges in responsible GenAI use:

1) Uncertainty Around Responsibility – 77% of PMs are unclear on what "responsibility" means in AI.
2) Diffusion of Responsibility – Many assume AI ethics or security teams handle risks, leading to inaction.
3) Lack of Incentives – Only 19% have clear incentives for responsible AI; speed-to-market takes priority.
4) Impact of Leadership Buy-In – Organizations with AI principles and leadership support are 4x more likely to have AI responsibility teams and 2.5x more likely to implement safeguards.
5) Micro-Level Ethical Actions – In the absence of mandates, PMs take small, low-risk steps to align AI with responsible practices.

The playbook presents 10 actionable "plays" for implementing responsible GenAI by mitigating 5 key risks: Data Privacy, Transparency, Inaccuracy & Hallucinations, Bias, and Security:

>> 5 Organizational Leadership Plays – Focusing on company-wide AI governance, policy, and accountability
>> 5 Product Manager Plays – Providing practical steps for AI-driven product development (see p. 25 of the Playbook)

For each play, the playbook provides structured guidance covering key areas to support responsible GenAI adoption:
- Objective: The core goal of the play.
- Business Benefits: How implementing this play helps mitigate risks, enhance trust, and align with organizational values.
- Implementation Steps: A step-by-step guide on how to put the play into action.
- Who Is Involved: Identifies key stakeholders responsible for execution.
- Case Study or Example: Real-world applications showing how organizations have successfully implemented the play.
- Additional Resources: References, best practices, and external frameworks to deepen understanding and inform decision-making.

Read the full playbook here: https://lnkd.in/gUgFKpzD
-
If your work touches AI Governance, you are likely thinking about integrating "unacceptable risks" into your risk management workflows, especially given the European Union's focus on this phrase.

This week, I read the latest insights from the Center for Long-Term Cybersecurity at the University of California, Berkeley on intolerable risks (link in comment). Here's how I translate them for an organization:

👉 Autonomy Risks – AI taking self-directed actions in finance, security, or infrastructure without human oversight.
𝐀𝐜𝐭𝐢𝐨𝐧: Implement autonomy constraints at the system level—AI should require multi-factor human validation before executing actions with financial, operational, or security impact (a minimal gating sketch follows this post).

👉 Manipulation & Deception – AI persuading, deceiving, or altering responses to evade detection.
𝐀𝐜𝐭𝐢𝐨𝐧: Deploy adversarial testing to identify whether models adjust behavior based on context (e.g., evaluation vs. deployment). Introduce truthfulness calibration by cross-referencing AI-generated content with trusted data sources.

👉 Toxicity & Bias – AI generating discriminatory, illegal, or high-risk content.
𝐀𝐜𝐭𝐢𝐨𝐧: Implement automated bias audits at the inference layer, with a requirement that flagged outputs are reviewed by a domain-specific, non-AI oversight team.

👉 CBRN & Cyber Risks – AI can lower barriers to bioweapon knowledge or automate cyberattacks.
𝐀𝐜𝐭𝐢𝐨𝐧: Conduct dual-use risk assessments during AI model development, flagging capabilities that exceed human expert benchmarks. Introduce real-time anomaly detection for AI-driven cyber threats.

👉 Socioeconomic Disruption – AI accelerating job displacement, financial instability, or systemic bias.
𝐀𝐜𝐭𝐢𝐨𝐧: Incorporate labor impact assessments into AI rollout plans—quantify automation risks and mandate compensatory workforce upskilling before scaling AI deployments.

#AIGovernance
(image credit: Forvis Mazars Group)
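For the autonomy-constraint action above, here is a minimal human-in-the-loop gating sketch: an AI-proposed action in a high-impact category cannot execute until enough distinct humans have signed off. The impact categories and the two-approver threshold are illustrative assumptions, not a prescribed standard.

```python
from dataclasses import dataclass, field

HIGH_IMPACT = {"financial", "operational", "security"}
REQUIRED_APPROVERS = 2  # multi-person validation for high-impact actions

@dataclass
class ProposedAction:
    description: str
    impact: str                                   # e.g. "financial", "low"
    approvals: set = field(default_factory=set)   # user IDs of human approvers

def execute(action: ProposedAction, run_fn):
    """Run an AI-proposed action only after sufficient human validation."""
    if action.impact in HIGH_IMPACT and len(action.approvals) < REQUIRED_APPROVERS:
        raise PermissionError(
            f"'{action.description}' needs {REQUIRED_APPROVERS} approvals; "
            f"has {len(action.approvals)}"
        )
    return run_fn()  # run_fn is a placeholder for the actual side effect
```

Usage: a wire transfer proposed by an agent stays blocked until two distinct people call `action.approvals.add(user_id)`; low-impact actions run immediately.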
-
As Chief Information Security Officers (CISOs), we're entrusted with safeguarding our organizations in the ever-evolving digital landscape. Today, a new frontier beckons: Generative AI. This powerful technology has incredible potential but presents unique challenges for risk management and governance.

Generative AI: A Double-Edged Sword
Generative AI can create content, from text to images, with astounding accuracy. While this fuels innovation, it also fuels cyber threats:
- Deepfakes: Convincing AI-generated deepfakes can deceive even the most discerning eye.
- Advanced Phishing: Cybercriminals use AI to craft sophisticated, personalized phishing attacks.
- AI-Generated Malware: New strains of malware are born from AI algorithms.
- Balancing Act: We must find the equilibrium between security and leveraging AI for legitimate purposes.
- Ethics and Privacy: Ethical considerations in AI governance are paramount.

Our Way Forward:
- Advanced Defense: Implement cutting-edge threat detection to combat AI-generated threats.
- Education: Invest in the education and training of our teams to tackle AI challenges effectively.
- Ethical Guidelines: Develop ethical guidelines to navigate AI use responsibly.
- Collaboration: Join hands with peers and AI ethics communities to share insights and strategies.
- Regulatory Adherence: Stay informed and compliant with evolving AI regulations and data privacy standards.

As CISOs, we rise to the occasion, adapting to the ever-changing digital landscape. Generative AI governance is our new frontier, and together, we'll navigate its challenges, ensuring a secure and ethical digital future.

#CISO #Cybersecurity #GenerativeAI #AIrisks #Ethics #Privacy #Deepfakes #Phishing #AIinBusiness