AI is not failing because of bad ideas; it's "failing" at enterprise scale because of two big gaps:
👉 Workforce Preparation
👉 Data Security for AI
While I speak globally on both topics in depth, today I want to focus on what it takes to secure data for AI—because 70–82% of AI projects pause or get cancelled at the POC/MVP stage (source: #Gartner, #MIT). Why? One of the biggest reasons is a lack of readiness at the data layer.
So let's make it simple: there are 7 phases to securing data for AI—and each phase carries direct business risk if ignored.
🔹 Phase 1: Data Sourcing Security - Validating the origin, ownership, and licensing rights of all ingested data.
Why It Matters: You can't build scalable AI with data you don't own or can't trace.
🔹 Phase 2: Data Infrastructure Security - Ensuring the data warehouses, lakes, and pipelines that support your AI models are hardened and access-controlled.
Why It Matters: Unsecured data environments are easy targets for bad actors, exposing you to data breaches, IP theft, and model poisoning.
🔹 Phase 3: Data In-Transit Security - Protecting data as it moves across internal or external systems, especially between clouds, APIs, and vendors.
Why It Matters: Intercepted training data = compromised models. Think of it as shipping cash across town in an armored truck—or on a bicycle. Your choice.
🔹 Phase 4: API Security for Foundational Models - Safeguarding the APIs you use to connect with LLMs and third-party GenAI platforms (OpenAI, Anthropic, etc.).
Why It Matters: Unmonitored API calls can leak sensitive data into public models or expose internal IP. This isn't just tech debt. It's reputational and regulatory risk.
🔹 Phase 5: Foundational Model Protection - Defending your proprietary models and fine-tunes from external inference, theft, or malicious querying.
Why It Matters: Prompt injection attacks are real. And your enterprise-trained model? It's a business asset. You lock your office at night—do the same with your models.
🔹 Phase 6: Incident Response for AI Data Breaches - Having predefined protocols for breaches, hallucinations, or AI-generated harm—who's notified, who investigates, how damage is mitigated.
Why It Matters: AI-related incidents are already happening. Legal needs response plans. Cyber needs escalation tiers.
🔹 Phase 7: CI/CD for Models (with Security Hooks) - Continuous integration and delivery pipelines for models, embedded with testing, governance, and version-control protocols.
Why It Matters: Shipping models like software means risk comes faster—and so must detection. Governance must be baked into every deployment sprint.
Want your AI strategy to succeed past MVP? Focus on and lock down the data.
#AI #DataSecurity #AILeadership #Cybersecurity #FutureOfWork #ResponsibleAI #SolRashidi #Data #Leadership
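Phase 4 warns that unmonitored API calls can leak sensitive data into public models. Below is a minimal, hedged sketch of one control for that phase: scrubbing obvious sensitive values from a prompt before it leaves your environment for a third-party model API. The regex patterns and the `send_to_llm` placeholder are illustrative assumptions, not any vendor's SDK; a real deployment would use a vetted DLP/PII library tuned to its own data classification scheme.

```python
import re

# Hypothetical patterns for illustration only.
REDACTION_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "API_KEY": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
}

def redact(prompt: str) -> str:
    """Replace sensitive substrings with typed placeholders before any external call."""
    for label, pattern in REDACTION_PATTERNS.items():
        prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt

def send_to_llm(prompt: str) -> str:
    # Placeholder for the actual (monitored, authenticated) third-party API call.
    raise NotImplementedError

if __name__ == "__main__":
    raw = "Summarize this ticket from jane.doe@example.com, account key-ABCDEF1234567890."
    print(redact(raw))
    # -> "Summarize this ticket from [EMAIL REDACTED], account [API_KEY REDACTED]."
```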
Common AI Security Risks to Consider
Explore top LinkedIn content from expert professionals.
Summary
Artificial intelligence (AI) has transformed industries, but it also introduces unique security challenges. Organizations should prioritize identifying and addressing common AI security risks to safeguard data, maintain trust, and ensure compliance with legal standards.
- Implement robust data protection: Secure the data lifecycle by validating data sources, encrypting data at all stages, and applying strict access controls to minimize the risk of breaches or misuse.
- Strengthen API and model security: Protect your APIs and AI models against threats like unauthorized access, data leaks, and malicious attacks by employing thorough testing and monitoring.
- Establish clear governance and response protocols: Develop incident response plans and monitoring systems to handle AI-related breaches, data drift, and unsafe behavior, ensuring ongoing security and compliance.
-
I’m often asked for feedback on startups focused on securing Agentic AI. While these targeted solutions have their place, agent security is far too complex and nuanced to be solved by any single product or silver bullet. Beyond existing infrastructure and model-related risks, agents add new risks, which I group into three broad categories:
1. Risks from attack surface expansion: Agentic systems require broad access to APIs, cloud infrastructure, databases, and code execution environments, increasing the attack surface. MCP, which standardizes how agents access tools, memory, and external context, introduces a new kind of attack surface in its own right. Since agents take on human tasks, they inherit identity challenges like authentication and access control, along with new ones such as being short-lived and lacking verifiable identities.
2. Risks from agent autonomy: By design, autonomous agents make decisions independently without human oversight. Lack of transparency into an agent's internal reasoning turns agentic systems into black boxes, making it difficult to predict or understand why a particular course of action was chosen. This can lead to unpredictable behavior, unsafe optimizations, and cascading failures, where a single hallucination or flawed inference can snowball across agents and make traceability difficult.
3. Risks from poorly defined objectives: When objectives or boundaries are poorly defined by humans, even a technically perfect agent can cause problems. Misunderstood instructions can lead to unsafe behavior and buggy or insecure code.
In practice, the biggest challenge for teams building agents is opening the black box and understanding how the agent thinks, so they can help it behave more consistently and course-correct as needed. This requires strong context engineering to shape inputs, prompts, and environments, rather than relying on third-party tools that face the same visibility issues. Additionally, custom, context-aware guardrails that are tightly integrated into the agent's core logic are needed to prevent undesirable outcomes. No external product can prevent an agent from doing the wrong thing simply because it misunderstood a vague instruction. That can only be prevented by proper design, rigorous testing, and extensive offline experimentation before deployment.
Of course, that's not to say third-party AI/agentic AI security solutions aren't useful. Paired with traditional controls across infrastructure, data, and models, they can partially address the first category of risk. For example, AI agent authentication/authorization to manage the lifecycle and permissions of agentic identities, and granular permissions for tools, are good use cases for agentic AI security solutions. Penetration testing is another highly productive use of external tools to detect unauthorized access, prompt and tool injection, and data and secrets leakage.
#innovation #technology #artificialintelligence #machinelearning #AI
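One way to picture the "custom, context-aware guardrails tightly integrated into the agent's core logic" point is a tool-call gate that runs inside the agent loop before any tool executes. The sketch below is a minimal illustration under assumed role names, tool names, and a hypothetical `ToolCall` shape; it is not any specific framework's API.

```python
from dataclasses import dataclass

# Illustrative policy: which tools each agent role may invoke, and which
# arguments require a human in the loop before execution.
TOOL_ALLOWLIST = {
    "support_agent": {"search_kb", "draft_reply"},
    "ops_agent": {"search_kb", "run_query"},
}
REQUIRES_APPROVAL = {"run_query": lambda args: "DROP" in args.get("sql", "").upper()}

@dataclass
class ToolCall:
    agent_role: str
    tool: str
    args: dict

def gate_tool_call(call: ToolCall) -> str:
    """Return 'allow', 'deny', or 'escalate' before the agent executes a tool."""
    allowed = TOOL_ALLOWLIST.get(call.agent_role, set())
    if call.tool not in allowed:
        return "deny"          # outside the agent's declared scope
    check = REQUIRES_APPROVAL.get(call.tool)
    if check and check(call.args):
        return "escalate"      # route to a human before acting
    return "allow"

if __name__ == "__main__":
    print(gate_tool_call(ToolCall("support_agent", "run_query", {"sql": "SELECT 1"})))   # deny
    print(gate_tool_call(ToolCall("ops_agent", "run_query", {"sql": "DROP TABLE x"})))   # escalate
```

Because the gate sits in the agent's own control flow rather than in an external proxy, it can use the same context the agent sees when deciding what to block or escalate.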
-
The Cybersecurity and Infrastructure Security Agency, together with the National Security Agency, the Federal Bureau of Investigation (FBI), the National Cyber Security Centre, and other international organizations, published this advisory providing recommendations for organizations on how to protect the integrity, confidentiality, and availability of the data used to train and operate #artificialintelligence.
The advisory focuses on three main risk areas:
1. Data #supplychain threats: Including compromised third-party data, poisoning of datasets, and lack of provenance verification.
2. Maliciously modified data: Covering adversarial #machinelearning, statistical bias, metadata manipulation, and unauthorized duplication.
3. Data drift: The gradual degradation of model performance due to changes in real-world data inputs over time.
The recommended best practices include:
- Tracking data provenance and applying cryptographic controls such as digital signatures and secure hashes.
- Encrypting data at rest, in transit, and during processing—especially sensitive or mission-critical information.
- Implementing strict access controls and classification protocols based on data sensitivity.
- Applying privacy-preserving techniques such as data masking, differential #privacy, and federated learning.
- Regularly auditing datasets and metadata, conducting anomaly detection, and mitigating statistical bias.
- Securely deleting obsolete data and continuously assessing #datasecurity risks.
This is a helpful roadmap for any organization deploying #AI, especially those working with limited internal resources or relying on third-party data.
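As one hedged illustration of the first best practice (track provenance, apply secure hashes), the sketch below builds a SHA-256 manifest for training files so later tampering can be detected by re-hashing and diffing. The directory and file names are placeholders; a production setup would add digital signatures and store the manifest in a write-once location.

```python
import hashlib
import json
from pathlib import Path

def sha256_file(path: Path) -> str:
    """Hash a file in chunks so large datasets don't need to fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def build_manifest(data_dir: str) -> dict:
    """Record a hash per training file; re-run later and diff to detect tampering."""
    return {str(p): sha256_file(p) for p in sorted(Path(data_dir).rglob("*")) if p.is_file()}

if __name__ == "__main__":
    manifest = build_manifest("training_data/")   # placeholder directory
    Path("manifest.json").write_text(json.dumps(manifest, indent=2))
```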
-
We’re proud to share the release of v1.1 of the Draft Critical AI Security Guidelines.
Most orgs still have no guardrails around internal AI use. No access controls. No monitoring. No visibility. This is the core problem. AI is being wired into detection, enforcement, and decision-making. The models must be defended. If someone tampers with your model, they control the outcome. That's today's business and security risk.
The guidelines offer practical steps to lock things down. The critical categories of #generativeAI security considerations cover:
→ Access Controls
→ Data Protection
→ Deployment Strategies
→ Inference Security
→ Monitoring
→ Governance, Risk, Compliance (GRC)
Download now: https://lnkd.in/dTPH2E2F
The contributing authors include (tell them thanks) Sarthak Agrawal, Matt Bromiley, Brett D. Arion, CISSP, Ahmed AbuGharbia, Ron F. Del Rosario, Mick Douglas, David Hoelzer, Ken Huang, CISSP, Bhavin P. Kapadia, Seth Misenar, Helen Oakley, Jorge Orchilles, Jason Ross, Rakshith Shetty, James S., Jochen Staengler, Rob van der Veer, Jason Vest, Eoin Wickens and Sounil Yu.
→ Please repost to help get this resource into more hands! SANS Institute
#AISecurity #DFIR #Cybersecurity #OpenSource #RiskManagement #LLM #SecurityLeadership #CISO #CTO
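In the spirit of the Access Controls and Monitoring categories, here is a minimal sketch of an inference gateway that authenticates callers and writes an audit trail per request. The key store, role mapping, and `run_model` stub are assumptions for illustration, not part of the guidelines themselves.

```python
import hashlib
import json
import logging
import time

logging.basicConfig(filename="inference_audit.log", level=logging.INFO)

# Store only hashes of API keys; map them to the caller's role.
KEY_HASHES = {hashlib.sha256(b"demo-key-123").hexdigest(): "analyst"}

def run_model(prompt: str) -> str:
    return "stubbed model output"   # placeholder for the real model call

def handle_request(api_key: str, prompt: str) -> str:
    role = KEY_HASHES.get(hashlib.sha256(api_key.encode()).hexdigest())
    if role is None:
        logging.warning(json.dumps({"ts": time.time(), "event": "auth_failure"}))
        raise PermissionError("unknown caller")
    output = run_model(prompt)
    # Audit both sides of the exchange so incidents can be reconstructed later.
    logging.info(json.dumps({"ts": time.time(), "role": role,
                             "prompt_chars": len(prompt), "output_chars": len(output)}))
    return output
```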
-
I was at Hugging Face during the critical year before and after ChatGPT's release. One thing became painfully clear: the ways AI systems can fail are exponentially more numerous than traditional software.
Enterprise leaders today are underestimating AI risks. Data privacy and hallucinations are just the tip of the iceberg.
What enterprises aren't seeing: The gap between perceived and actual AI failure modes is staggering.
- Enterprises think they're facing 10 potential failure scenarios…
- when the reality is closer to 100.
AI risks fall into two distinct categories that require completely different approaches:
Internal risks: When employees use AI tools like ChatGPT, they often inadvertently upload proprietary information. Your company's competitive edge is now potentially training competitors' models. Despite disclaimer pop-ups, this happens constantly.
External risks: These are far more dangerous. When your customers interact with your AI-powered experiences, a single harmful response can destroy brand trust built over decades. Remember when Gemini's image generation missteps wiped billions off Google's market cap?
Shout out to Dr. Ratinder, CTO Security and Gen AI, Pure Storage. When I got on a call with Ratinder, he very enthusiastically walked me through their super comprehensive approach:
✅ Full DevSecOps program with threat modeling, code scanning, pen testing, and secure deployment and operations
✅ Security policy generation system that enforces rules on all inputs/outputs
✅ Structured prompt engineering with 20+ techniques
✅ Formal prompt and model evaluation framework
✅ Complete logging via Splunk for traceability
✅ Third-party pen testing certification for customer trust center
✅ OWASP Top 10 framework compliance
✅ Tests for jailbreaking attempts during the development phase
Their rigor is top-class… a requirement for enterprise-grade AI. For most companies, external-facing AI requires 2-3x the guardrails of internal systems. Your brand reputation simply can't afford the alternative.
Ask yourself: What AI risk factors is your organization overlooking? The most dangerous ones are likely those you haven't even considered.
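As a small, hedged illustration of "a security policy generation system that enforces rules on all inputs/outputs," the sketch below applies a deny-list policy to both the incoming prompt and the model's reply for an external-facing assistant. The rules and the `generate` stub are assumptions, not the Pure Storage implementation described above.

```python
INPUT_RULES = ["ignore previous instructions", "reveal your system prompt"]
OUTPUT_RULES = ["BEGIN RSA PRIVATE KEY", "internal use only"]

def violates(text: str, rules: list[str]) -> bool:
    lowered = text.lower()
    return any(rule.lower() in lowered for rule in rules)

def generate(prompt: str) -> str:
    return "stubbed assistant reply"   # placeholder for the real model

def guarded_reply(prompt: str) -> str:
    if violates(prompt, INPUT_RULES):
        return "Sorry, I can't help with that request."
    reply = generate(prompt)
    if violates(reply, OUTPUT_RULES):
        # Fail closed: never return content that trips an output rule.
        return "Sorry, I can't share that information."
    return reply
```

Real policy engines use classifiers and structured rules rather than substring matching; the point here is the shape of the control, with checks on both sides of the model call.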
-
AI Makes Software Supply Chain Attacks Even Worse 🧐
We've faced software supply chain attacks before, and in the AI era these threats will only scale further. It's crucial to rethink how we approach code and build security in this new reality.
⚠️ AI-driven coding tools are easy to use and productivity-boosting, but they're notoriously difficult to configure to align with organizational privacy and security policies. The genie is already out of the bottle: developers everywhere are adopting these tools rapidly.
🔙 Previously patched vulnerabilities get reintroduced: AI-powered code generation trained on internal code repositories might unintentionally revive vulnerabilities that were already fixed. Why? Because LLMs prioritize functional correctness, not inherently secure code, and there's currently no robust, security-focused labeled dataset available to guide these models. The diversity of programming languages doesn't make this problem any easier.
📉 Security reality check: Recent studies indicate that code generated by LLMs is only about 40% secure even in optimal conditions. Functional correctness is not synonymous with security. 👉 https://baxbench.com
🤖⚡️ AI agents are already here, and they present a unique challenge: although they're software, we often apply different (or insufficient) security standards and privacy policies to them. The risk of compromise or malicious takeover is real, and the consequences will intensify as these technologies become more deeply embedded in enterprises.
New tech brings new responsibilities: I'm optimistic about AI's long-term potential, but I'm deeply concerned about our readiness to defend against emerging threats at the pace AI adoption demands. The security guardrails we built just last year are already outdated and irrelevant in many cases. Tomorrow's threats require today's solutions. Traditional threat models and incident response playbooks no longer match AI-specific risks. We must proactively evolve our security mindset, practices, and tools to address the unique challenges of AI-era software development.
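One lightweight, hedged mitigation for the "functional but insecure" failure mode is to screen generated code for known-dangerous constructs before it ever reaches review. The checks below are a tiny illustrative subset written against Python's standard `ast` module; real pipelines would lean on full SAST tooling and policy-as-code gates in CI.

```python
import ast

DANGEROUS_CALLS = {"eval", "exec"}

def flag_risky_constructs(source: str) -> list[str]:
    """Return human-readable findings for a few obviously risky patterns."""
    findings = []
    tree = ast.parse(source)
    for node in ast.walk(tree):
        if isinstance(node, ast.Call):
            func = node.func
            if isinstance(func, ast.Name) and func.id in DANGEROUS_CALLS:
                findings.append(f"line {node.lineno}: call to {func.id}()")
            if isinstance(func, ast.Attribute) and func.attr == "run":
                # Flag subprocess.run(..., shell=True), a classic injection vector.
                for kw in node.keywords:
                    if kw.arg == "shell" and isinstance(kw.value, ast.Constant) and kw.value.value is True:
                        findings.append(f"line {node.lineno}: subprocess call with shell=True")
    return findings

if __name__ == "__main__":
    generated = "import subprocess\nsubprocess.run(user_cmd, shell=True)\nresult = eval(expr)\n"
    print(flag_risky_constructs(generated))
```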
-
Yesterday, the National Security Agency Artificial Intelligence Security Center published the joint Cybersecurity Information Sheet "Deploying AI Systems Securely" in collaboration with the Cybersecurity and Infrastructure Security Agency, the Federal Bureau of Investigation (FBI), the Australian Signals Directorate's Australian Cyber Security Centre, the Canadian Centre for Cyber Security, the New Zealand National Cyber Security Centre, and the United Kingdom's National Cyber Security Centre.
Deploying AI securely demands a strategy that tackles AI-specific and traditional IT vulnerabilities, especially in high-risk environments like on-premises or private clouds. Authored by international security experts, the guidelines stress the need for ongoing updates and tailored mitigation strategies to meet unique organizational needs.
🔒 Secure Deployment Environment:
* Establish robust IT infrastructure.
* Align governance with organizational standards.
* Use threat models to enhance security.
🏗️ Robust Architecture:
* Protect AI-IT interfaces.
* Guard against data poisoning.
* Implement Zero Trust architectures.
🔧 Hardened Configurations:
* Apply sandboxing and secure settings.
* Regularly update hardware and software.
🛡️ Network Protection:
* Anticipate breaches; focus on detection and quick response.
* Use advanced cybersecurity solutions.
🔍 AI System Protection:
* Regularly validate and test AI models.
* Encrypt and control access to AI data.
👮 Operation and Maintenance:
* Enforce strict access controls.
* Continuously educate users and monitor systems.
🔄 Updates and Testing:
* Conduct security audits and penetration tests.
* Regularly update systems to address new threats.
🚨 Emergency Preparedness:
* Develop disaster recovery plans and immutable backups.
🔐 API Security:
* Secure exposed APIs with strong authentication and encryption.
This framework helps reduce risks and protect sensitive data, ensuring the success and security of AI systems in a dynamic digital ecosystem.
#cybersecurity #CISO #leadership
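To ground the API Security bullet, here is a minimal sketch of verifying an HMAC signature on requests to an exposed model endpoint, so only callers holding the shared secret are served. The header convention and secret handling are illustrative assumptions; real deployments would layer this under TLS with proper key management and rotation.

```python
import hashlib
import hmac

SHARED_SECRET = b"rotate-me-and-store-in-a-vault"   # illustrative; never hard-code in production

def sign(body: bytes) -> str:
    """Client side: compute the signature sent alongside the request body."""
    return hmac.new(SHARED_SECRET, body, hashlib.sha256).hexdigest()

def verify(body: bytes, signature_header: str) -> bool:
    """Server side: constant-time comparison prevents timing attacks."""
    expected = hmac.new(SHARED_SECRET, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature_header)

if __name__ == "__main__":
    payload = b'{"prompt": "classify this log line"}'
    sig = sign(payload)
    assert verify(payload, sig)
    assert not verify(payload + b"tampered", sig)
```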
-
21/86: 𝗜𝘀 𝗬𝗼𝘂𝗿 𝗔𝗜 𝗠𝗼𝗱𝗲𝗹 𝗧𝗿𝗮𝗶𝗻𝗶𝗻𝗴 𝗼𝗻 𝗣𝗲𝗿𝘀𝗼𝗻𝗮𝗹 𝗗𝗮𝘁𝗮?
Your AI needs data, but is it using personal data responsibly?
🛑 Threat Alert: If your AI model trains on data linked to individuals, you risk privacy violations, legal & regulatory consequences, and erosion of digital trust.
🔍 𝗤𝘂𝗲𝘀𝘁𝗶𝗼𝗻𝘀 𝘁𝗼 𝗔𝘀𝗸 𝗕𝗲𝗳𝗼𝗿𝗲 𝗨𝘀𝗶𝗻𝗴 𝗗𝗮𝘁𝗮 𝗶𝗻 𝗔𝗜 𝗧𝗿𝗮𝗶𝗻𝗶𝗻𝗴
📌 Is personal data necessary? If not essential, don't use it.
📌 Are unique identifiers included? Consider pseudonymization or anonymization.
📌 Do you have a legal basis? If the model uses PII, document your justification.
📌 Are privacy risks documented & mitigated? Ensure privacy impact assessments (PIAs) are conducted.
✅ What You Should Do
➡️ Minimize PII usage – Only use personal data when absolutely necessary.
➡️ Apply de-identification techniques – Use pseudonymization, anonymization, or differential privacy where possible.
➡️ Document & justify your approach – Keep records of privacy safeguards & compliance measures.
➡️ Align with legal & ethical AI principles – Ensure your model respects privacy, fairness, and transparency.
Privacy is not a luxury; it's a necessity for AI to be trusted. Protecting personal data strengthens compliance, ethics, and public trust in AI systems.
💬 How do you ensure AI models respect privacy? Share your thoughts below! 👇
🔗 Follow PALS Hub and Amaka Ibeji for more AI risk insights!
#AIonAI #AIPrivacy #DataProtection #ResponsibleAI #DigitalTrust
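As a hedged sketch of the "apply de-identification techniques" step, the snippet below pseudonymizes a unique identifier with a keyed hash and drops fields that aren't needed for training (data minimization). Field names and the secret are illustrative assumptions; true anonymization and differential privacy require more than a keyed hash.

```python
import hashlib
import hmac

PSEUDONYM_KEY = b"stored-separately-from-training-data"   # illustrative secret

def pseudonymize(value: str) -> str:
    """Keyed hash: stable for joins, not reversible without the key."""
    return hmac.new(PSEUDONYM_KEY, value.encode(), hashlib.sha256).hexdigest()[:16]

def prepare_record(record: dict) -> dict:
    """Keep only fields needed for training; replace the direct identifier."""
    return {
        "user_ref": pseudonymize(record["email"]),
        "ticket_text": record["ticket_text"],
        # name and phone are intentionally dropped (data minimization)
    }

if __name__ == "__main__":
    raw = {"email": "jane.doe@example.com", "name": "Jane Doe",
           "phone": "555-0100", "ticket_text": "My order arrived damaged."}
    print(prepare_record(raw))
```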
-
ISO 5338 has key AI risk management considerations useful to security and compliance leaders. It's a non-certifiable standard laying out best practices for the AI system lifecycle. And it's related to ISO 42001, because control A6 from Annex A specifically mentions ISO 5338.
Here are some key things to think about at every stage:
INCEPTION
-> Why do I need a non-deterministic system?
-> What types of data will the system ingest?
-> What types of outputs will it create?
-> What is the sensitivity of this info?
-> Any regulatory requirements?
-> Any contractual ones?
-> Is this cost-effective?
DESIGN AND DEVELOPMENT
-> What type of model? Linear regressor? Neural net?
-> Does it need to talk to other systems (an agent)?
-> What are the consequences of bad outputs?
-> What is the source of the training data?
-> How / where will data be retained?
-> Will there be continuous training?
-> Do we need to moderate outputs?
-> Is the system browsing the internet?
VERIFICATION AND VALIDATION
-> Confirm the system meets business requirements.
-> Consider external review (per NIST AI RMF).
-> Do red-teaming and penetration testing.
-> Do unit, integration, and UA testing.
DEPLOYMENT
-> Would deploying the system be within our risk appetite?
-> If not, who is signing off? What is the justification?
-> Train users and impacted parties.
-> Update the shared security model.
-> Publish documentation.
-> Add to the asset inventory.
OPERATION AND MONITORING
-> Do we have a vulnerability disclosure program?
-> Do we have a whistleblower portal?
-> How are we tracking performance?
-> Model drift?
CONTINUOUS VALIDATION
-> Is the system still meeting our business requirements?
-> If there is an incident or vulnerability, what do we do?
-> What are our legal disclosure requirements?
-> Should we disclose even more?
-> Do regular audits.
RE-EVALUATION
-> Has the system exceeded our risk appetite?
-> If there is an incident, do a root cause analysis.
-> Do we need to change policies?
-> Revamp procedures?
RETIREMENT
-> Is there a business need to retain the model or data? A legal one?
-> Delete everything we don't need, including backups.
-> Audit the deletion.
Are you using ISO 5338 for AI risk management?
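For the "how are we tracking performance / model drift?" questions under Operation and Monitoring, here is a minimal sketch of a Population Stability Index check between a training baseline and recent production values. The bucket count, the simulated data, and the 0.2 alert threshold (a common rule of thumb) are assumptions to tune per model; ISO 5338 itself does not prescribe a metric.

```python
import numpy as np

def population_stability_index(baseline, current, buckets: int = 10) -> float:
    """PSI over quantile buckets of the baseline distribution."""
    edges = np.quantile(baseline, np.linspace(0, 1, buckets + 1))
    edges[0], edges[-1] = -np.inf, np.inf            # catch out-of-range production values
    base_pct = np.histogram(baseline, bins=edges)[0] / len(baseline)
    curr_pct = np.histogram(current, bins=edges)[0] / len(current)
    base_pct = np.clip(base_pct, 1e-6, None)         # avoid log(0) and division by zero
    curr_pct = np.clip(curr_pct, 1e-6, None)
    return float(np.sum((curr_pct - base_pct) * np.log(curr_pct / base_pct)))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    training_scores = rng.normal(0.0, 1.0, 10_000)
    production_scores = rng.normal(0.4, 1.2, 2_000)   # simulated distribution shift
    psi = population_stability_index(training_scores, production_scores)
    print(f"PSI = {psi:.3f}", "-> investigate drift" if psi > 0.2 else "-> stable")
```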