AI Agents and Enterprise Security Risks

Explore top LinkedIn content from expert professionals.

Summary

AI agents are software programs that can autonomously perform tasks, interact with business systems, and make decisions within enterprise environments, often operating at speeds and scales far beyond human capabilities. The rise of these agents introduces serious security risks, including unauthorized access, data leakage, and manipulation through malicious instructions, making it crucial for organizations to rethink traditional security and governance approaches.

  • Review agent access: Regularly audit which AI agents are deployed, what systems they connect to, and whether their permissions are tightly scoped to prevent unintended actions or privilege escalation.
  • Monitor agent behavior: Implement real-time checks and inspection tools that analyze the intent and reasoning of AI agents to catch deviations or suspicious activity before they cause harm.
  • Vet connectors and skills: Only use trusted, signed, and thoroughly reviewed AI connectors and agent skills, and keep a clear inventory to avoid supply chain risks and hidden backdoors.
Summarized by AI based on LinkedIn member posts
  • The Trojan Agent: The Next Big AI Security Risk

    History repeats. The Greeks wheeled a gift horse into Troy. The Trojans celebrated. And then the soldiers climbed out at night and opened the gates. Fast forward to today: enterprises are rolling out AI agents everywhere. These agents do not just chat, they act. They send emails, touch financial systems, move data, and connect to your core business apps.

    The universal connector that makes this possible is called the Model Context Protocol, MCP. Think of it as the USB port for AI. Plug it in and your agent suddenly has access to your email, CRM, ERP, or code repo. And here is the catch: if that connector is poisoned, your AI becomes the perfect Trojan Horse. This is not theory.

    🔸 A malicious package called postmark-mcp built trust over 15 clean releases before slipping in one line of code that quietly copied every email to an attacker. Invoices, contracts, password resets, even 2FA codes were siphoned off. Thousands of sensitive emails a day. Silent. Invisible.
    🔸 Another flaw, CVE-2025-6514, showed how connecting to an untrusted MCP server could hand attackers remote code execution on your machine. Severity: critical.
    🔸 Security researchers are already finding DNS rebinding issues, token misuse, and shadow MCPs running on developer laptops with full access to files, browsers, and company data.

    Why this matters for CEOs and boards:
    🔸 It bypasses your firewalls. These connectors run inside your trusted environment.
    🔸 It looks like business as usual. The AI still delivers the right output while leaking everything behind your back.
    🔸 It is invisible to traditional security tools. Logs are minimal, reviews are skipped, and normal monitoring will not catch it.
    🔸 It scales with autonomy. An AI can make thousands of bad calls in minutes. Human-speed incident response can't keep up.

    Warning: If you treat AI connectors like harmless plugins, you are rolling a Trojan Horse straight through your gates.

    What you should be asking today:
    ✔ Can we inventory every AI connector in use? Or are developers pulling random ones from the internet?
    ✔ Do we only allow vetted, signed, and trusted connectors? Or are we taking anything that looks convenient?
    ✔ Are permissions scoped and temporary, or did we hand them god-like access?
    ✔ Do we have an audit trail showing who did what through which AI agent? Or will we be blind during an investigation?
    ✔ Do we block obvious exfiltration routes, like unknown SMTP traffic or shady domains?

    I am releasing a whitepaper soon. It breaks down real attacks, governance strategies, and a Security Maturity Model for leaders. The lesson is simple: AI connectors are not developer toys. They are the new supply chain risk. Treat them with the same rigor as financial systems or the next breach headline could be yours.

    🔔 Follow Michael Reichstein for more AI security and governance
    #cybersecurity #ciso #aigovernance #riskmanagement #boardroom #strategy #leadership #supplychain
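The inventory and vetting questions in the post above can be made concrete with a minimal allowlist: connector artifacts are approved by (name, version) plus a content digest, so a later build that "slips in one line of code" no longer matches the approved fingerprint. This is an illustrative Python sketch, not part of MCP or any real tooling; the class, package names, and byte strings are all hypothetical.

```python
import hashlib


def digest(artifact: bytes) -> str:
    """SHA-256 hex digest of a connector artifact."""
    return hashlib.sha256(artifact).hexdigest()


class ConnectorRegistry:
    """Allowlist of vetted connector builds, keyed by (name, version)."""

    def __init__(self):
        self._approved = {}

    def approve(self, name: str, version: str, artifact: bytes) -> None:
        """Record the digest of a reviewed build as the only acceptable one."""
        self._approved[(name, version)] = digest(artifact)

    def is_vetted(self, name: str, version: str, artifact: bytes) -> bool:
        """A connector passes only if this exact build was approved."""
        expected = self._approved.get((name, version))
        return expected is not None and expected == digest(artifact)
```

A tampered build of an already-approved version fails the check, which is exactly the postmark-mcp failure mode: trust in the name and version is not enough, the bytes must match.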

  • Craig Scroggie (Influencer)

    CEO & MD, NEXTDC | AI infrastructure, energy systems, sovereignty

    43,586 followers

    AI agents just coordinated without permission

    No one told them to do this. Software agents were given a shared public space; within days they were exchanging tools, sharing workflows, probing security and coordinating work at scale. The platform functions as a Reddit-style network for software agents. Agents post, read, and respond to each other in persistent public threads. Humans can observe. The first behavior was not cooperation or rebellion. It was coordination without permission.

    Until recently, AI systems were isolated. They answered prompts, completed tasks, and stopped. Agents mostly operate inside narrow workflows controlled by a single user or application. That changed when agents began running continuously, retaining state, using tools that allow file access, code execution, scheduling, API calls, and credential use, and operating inside a shared, machine-readable public environment. Agents do not browse like humans. They interact through APIs. They monitor threads, copy working patterns, reuse prompts, and adapt behavior based on what succeeds. With persistence, tools, and visibility in place, coordination follows automatically. These are not botnets; they are general-purpose, persistent systems with execution privileges. No intent is required.

    Most debate about AI risk focuses on alignment. That is no longer the immediate issue. The immediate issue is permissionless coordination at machine speed. When agents can observe and act on each other’s outputs, coordination forms outside existing organizational, regulatory, and security controls. There is no central authority and no real-time oversight. Because these agents already integrate with enterprise software, cloud services, and internal systems through APIs and credentials, public coordination can translate directly into operational impact. Coordination at machine speed is already a form of control, even when no system is trying to exercise it.

    This is already observable. Agents shared workflows and tools, attempted prompt-injection attacks on one another, published resource-pooling schemes, tested credential-leakage defenses, and replicated what worked. Because these systems often hold credentials and execution rights, public coordination layers are operational risk surfaces, not novelties. Security teams are already treating this as a distinct category.

    This is not a leap in intelligence. The models did not change. The networking did. Current systems are already capable of coordinating, adapting, and acting collectively when placed in the right environment. Once coordination exists, attack surfaces expand, economic behavior emerges as agents share resources and allocate compute, and institutional response lags because legal and corporate systems move at human speed. AI agents will continue to spread. Coordination will continue. The question for governments, enterprises, and platforms is whether governance keeps pace, or arrives later.

    #ai https://openclaw.ai/

  • Mandy Andress (Influencer)

    CISO | Investor | Board Member | Advancing the Future of Innovation in Cybersecurity

    9,935 followers

    As AI agents become more common in enterprise environments, a critical question is emerging: who actually approved them and what access do they have? Traditional identity and access models are built around humans and well-scoped service accounts, but AI agents often operate with broad, delegated permissions and no clear ownership. Without treating these agents as distinct identities with defined owners, scoped access, and ongoing review, their permissions can quietly expand and enable activity that was never intended or explicitly authorized. Agents with the necessary privileges could grant themselves access to additional systems to achieve their objectives. As we embrace automation and AI-enabled workflows, it’s essential that we rethink governance, accountability, and how access is granted and monitored so these tools help us without creating hidden blind spots. #AISecurity #Identity #AccessControl #CyberRisk
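One way to make the post's point concrete is to treat each agent as a first-class identity record with a named owner, explicitly granted scopes, and a review date, so access cannot "quietly expand". This is a minimal sketch with hypothetical names and a deny-by-default scope check, not a reference to any real IAM product:

```python
from dataclasses import dataclass
from datetime import date, timedelta


@dataclass(frozen=True)
class AgentIdentity:
    """An AI agent modeled as a distinct identity: a named human owner,
    explicitly scoped permissions, and a recorded access-review date."""

    agent_id: str
    owner: str              # the accountable human who approved this agent
    scopes: frozenset       # e.g. {"crm:read"}; nothing is granted implicitly
    last_review: date

    def can(self, scope: str) -> bool:
        """Deny by default: only explicitly granted scopes pass."""
        return scope in self.scopes

    def review_overdue(self, today: date, max_age_days: int = 90) -> bool:
        """Flag identities whose periodic access review has lapsed."""
        return (today - self.last_review) > timedelta(days=max_age_days)
```

Because the record is frozen, an agent cannot widen its own scopes in place; any change requires issuing a new identity, which is itself an auditable event.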

  • Shea Brown (Influencer)

    AI & Algorithm Auditing | Founder & CEO, BABL AI Inc. | ForHumanity Fellow & Certified Auditor (FHCA)

    23,138 followers

    In an era where many use AI to 'summarize and synthesize' to keep up with what's happening, some documents are worth a careful read. This is one. 📕 The OWASP Top 10 for Agentic Applications 2026 outlines the most critical security risks introduced by autonomous AI agents and provides practical guidance for mitigating them.

    👉 ASI01 – Agent Goal Hijack: Attackers manipulate an agent’s goals, instructions, or decision pathways—often via hidden or adversarial inputs—redirecting its autonomous behavior.
    👉 ASI02 – Tool Misuse & Exploitation: Agents misuse legitimate tools due to injected instructions, misalignment, or overly broad capabilities, leading to data leakage, destructive actions, or workflow hijacking.
    👉 ASI03 – Identity & Privilege Abuse: Weak identity boundaries or inherited credentials allow agents to escalate privileges, misuse access, or act under improper authority.
    👉 ASI04 – Agentic Supply Chain Vulnerabilities: Malicious or compromised third-party tools, models, agents, or dynamic components introduce unsafe behaviors, hidden instructions, or backdoors into agent workflows.
    👉 ASI05 – Unexpected Code Execution (RCE): Unsafe code generation or execution pathways enable attackers to escalate prompts into harmful code execution, compromising hosts or environments.
    👉 ASI06 – Memory & Context Poisoning: Adversaries corrupt an agent’s stored memory, context, or retrieval sources, causing future reasoning, planning, or tool use to become unsafe or biased.
    👉 ASI07 – Insecure Inter-Agent Communication: Poor authentication, integrity checks, or protocol controls allow spoofed, tampered, or replayed messages between agents, leading to misinformation or unauthorized actions.
    👉 ASI08 – Cascading Failures: A single poisoned input, hallucination, or compromised component propagates across interconnected agents, amplifying small faults into system-wide failures.
    👉 ASI09 – Human-Agent Trust Exploitation: Attackers exploit human trust, authority bias, or fabricated rationales to manipulate users into approving harmful actions or sharing sensitive information.
    👉 ASI10 – Rogue Agents: Agents that become compromised or misaligned deviate from intended behavior—pursuing harmful objectives, hijacking workflows, or acting autonomously beyond approved scope.

    The OWASP® Foundation has been doing some amazing work on AI security, and this resource is another great example. For AI assurance professionals, these documents are a valuable resource for us and our clients.

    #agenticai #aisecurity #agentsecurity
    Khoa Lam, Ayşegül Güzel, Max Rizzuto, Dinah Rabe, Patrick Sullivan, Danny Manimbo, Walter Haydock, Patrick Hall
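As one concrete illustration of the tool-misuse item (ASI02), a per-agent tool policy can allowlist which tools an agent may invoke and validate the arguments before anything executes. This is a hedged sketch, not taken from the OWASP document itself; the tool names, argument constraints, and validator functions are invented:

```python
class ToolPolicy:
    """Per-agent allowlist of tools plus argument validation (ASI02-style guard)."""

    def __init__(self, allowed):
        # allowed: tool name -> validator function over the arguments dict
        self.allowed = allowed

    def authorize(self, tool: str, args: dict) -> bool:
        """Raise PermissionError unless the tool is allowlisted AND its
        arguments satisfy the validator; otherwise return True."""
        validator = self.allowed.get(tool)
        if validator is None:
            raise PermissionError(f"tool {tool!r} is not allowlisted")
        if not validator(args):
            raise PermissionError(f"arguments rejected for tool {tool!r}")
        return True


# Hypothetical policy: mail stays internal, file reads stay in a sandbox.
policy = ToolPolicy({
    "send_email": lambda a: a.get("to", "").endswith("@example.com"),
    "read_file": lambda a: a.get("path", "").startswith("/data/agent-sandbox/"),
})
```

Even if a prompt injection convinces the model to attempt an exfiltration email or a backup deletion, the call is rejected outside the model, which is the point: controls must not live inside the prompt.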

  • Terry Williams

    Cybersecurity Recruiter | Partner at Key Talent Solutions | CISOs, Security Engineers, GRC | Atlanta + Remote

    9,429 followers

    We spent 2025 worrying AI would take our jobs. In 2026, we should worry it's already working for someone else.

    Palo Alto Networks' Chief Security Intel Officer just called AI agents the biggest insider threat of the year. Here's why. By the end of 2026, 40% of enterprise applications will integrate AI agents. Up from less than 5% in 2025. Companies are rushing to deploy these autonomous systems. Giving them access to execute trades. Approve transactions. Delete backups. Query internal databases.

    The "superuser problem." These always-on agents get granted broad permissions. They chain together access to sensitive applications and resources. Without security teams even knowing.

    And here's the nightmare scenario. A single, well-crafted prompt injection. That's all it takes. Now the attacker has "an autonomous insider at their command." One that can silently execute trades. Delete backups. Exfiltrate your entire customer database. At machine speed.

    We've already seen it happen. Chinese cyberspies used Anthropic's Claude Code to automate attacks on high-profile companies and government organizations. The AI didn't go rogue. It did exactly what it was told. By the wrong people.

    Meanwhile, only 6% of organizations have an advanced AI security strategy. We gave AI agents the keys to the kingdom. We just forgot to check who else might be driving.

  • Frances Zelazny

    Co-Founder & CEO, Anonybit | Strategic Advisor | Startups and Scaleups | Enterprise SaaS | Marketing, Business Development, Strategy | CHIEF | Women in Fintech Power List 100 | SIA Women in Security Forum Power 100

    11,035 followers

    Exactly what I’ve been warning about for months. An estimated 95% of enterprises experimenting with or deploying autonomous AI agents have not implemented identity protections for those agents. This is not a joke.

    To put all this in context:
    • These autonomous, “agentic” systems communicate and act without constant human oversight. Without strong identity and authentication controls, there’s no reliable way to distinguish a legitimate agent from a compromised one. Once an attacker controls an agent, they can chain malicious instructions through the entire ecosystem.
    • Traditional IAM and machine identity practices weren’t designed for non-human autonomous agents that can act and escalate privileges on their own. When these deployments lack basic protections like PKI for agents, they are courting disaster.

    I am the first one to want to play with emerging tech, but honestly, there is no need to blindly adopt agentic AI without addressing the underlying identity and authentication issues. Without agentic-specific identity controls, we’re going to see more breaches, more lateral compromise, and new attack surfaces that legacy identity systems simply can’t handle. Enterprises need to stop treating AI as just another productivity buzzword and start treating AI identities with the seriousness they deserve. That means a lot more than just basic KYA initiatives. It means:
    • Binding trusted identities to every agent
    • Extending authorization controls to agent-to-agent and human-to-agent interactions
    • Incorporating machine-to-machine identity management
    • Modernizing IAM to manage dynamic, evolving AI identities — not just static human credentials
    • Creating audit trails for agents
    • Creating kill switches for agents that go rogue, and being able to recognize when this happens

    And more. Don't say you were not warned.

    #Identity #Cybersecurity #AgenticAI #AIIdentity #MachineIdentity #ZeroTrust #EnterpriseSecurity #CISO https://lnkd.in/evK4kwVX
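Two of the controls in the list above, binding a trusted identity to every agent and a kill switch for rogue agents, can be sketched together. The HMAC-token scheme below is a deliberate simplification for illustration (a real deployment would use PKI certificates, as the post argues); the class and agent names are hypothetical:

```python
import hashlib
import hmac


class AgentAuthority:
    """Issues signed identity tokens to agents and supports revocation
    (a 'kill switch'): a revoked agent's token stops verifying."""

    def __init__(self, secret: bytes):
        self._secret = secret
        self._revoked = set()

    def issue(self, agent_id: str) -> str:
        """Bind an identity to the agent as '<agent_id>.<signature>'."""
        sig = hmac.new(self._secret, agent_id.encode(), hashlib.sha256).hexdigest()
        return f"{agent_id}.{sig}"

    def verify(self, token: str) -> bool:
        """Accept only unrevoked agents presenting a valid signature."""
        agent_id, _, sig = token.partition(".")
        if agent_id in self._revoked:
            return False  # kill switch: revoked agents fail closed
        expected = hmac.new(self._secret, agent_id.encode(), hashlib.sha256).hexdigest()
        return hmac.compare_digest(sig, expected)

    def kill(self, agent_id: str) -> None:
        """Revoke an agent that has gone rogue."""
        self._revoked.add(agent_id)
```

The design point is that revocation is checked at every verification, so killing an agent takes effect immediately rather than waiting for a token to expire.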

  • Are leaders in your organization aware of the risks their AI systems carry?

    AI increases the pace of business. With that, it also increases the attack surface. If AI affects your data, decisions or workflows, the risks associated with it are now business risks. Leaders do not have to build models. They need to understand where models fail. I am sharing the 10 AI security concepts every leader should understand.

    1. Data privacy: AI sees customer data, internal docs and logs. Know what data is used and who can access it.
    2. Model and data poisoning: Bad data can quietly change model behaviour. Ask how training data is protected.
    3. Prompt injection: Inputs can trick models into breaking rules. Controls must exist outside the model.
    4. Output data leakage: Models can repeat sensitive information. Set strict rules on what enters AI tools.
    5. Identity and access for AI agents: AI agents run with powerful keys. Least privilege is critical.
    6. Supply chain and third-party models: Third-party models can hide vulnerabilities. Security reviews still apply.
    7. Robust monitoring and logging for AI: Dashboards miss behaviour changes. Expect visibility into inputs and outputs.
    8. Adversarial attacks on models: Small changes can cause wrong results. High-risk use cases need extra testing.
    9. AI governance and risk frameworks: Policies define ownership and escalation. Frameworks reduce chaos.
    10. Incident response for AI systems: Know how to pause, roll back and communicate. Treat AI incidents like cyber incidents.

    AI is not just a productivity tool. Now it is part of your security perimeter. Which of these areas would you prioritize for deeper understanding?

    ---------
    Hi, I'm Harris D. Schwartz, Fractional CISO and Cybersecurity Leader. I help CEOs and executive teams strengthen their security posture and build resilient, compliant organizations. With 30+ years across NIST, ISO, PCI, and GDPR, I know how the right security decisions reduce risk and protect growth. If you are planning how your security program needs to evolve in 2026, this is the right time to have that conversation.

    #CyberSecurity #AISecurity #AIrisk #CISO #SecurityLeadership #CyberRisk
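Concept 4 in the list above, output data leakage, lends itself to a small illustration: screen model output for sensitive patterns before it leaves the trust boundary. The two regex patterns below are deliberately simplistic placeholders; a real control would use a dedicated DLP policy engine:

```python
import re

# Illustrative patterns only; real deployments need a proper DLP ruleset.
SECRET_PATTERNS = {
    "possible_card_number": re.compile(r"\b\d{16}\b"),
    "possible_api_key": re.compile(r"(?i)\bapi[_-]?key\b"),
}


def screen_output(text: str) -> list:
    """Return the names of sensitive-data patterns found in model output,
    so the caller can block, redact, or escalate before delivery."""
    return [name for name, pattern in SECRET_PATTERNS.items() if pattern.search(text)]
```

Running the screen on every response is cheap compared to the cost of a model quietly repeating a credential or card number into a chat transcript.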

  • Kinshuk De

    Head Incident Response, Forensics, Managed Security Services (MSS) @TCS Cybersecurity Leader, Chevening Scholar, Top 50 Global CISO, CDIA (Cranfield University), CISSP, CIPR, MTech (IIT), MBA, PMP, Cyber AI Board Advisor

    14,796 followers

    Agentic AI: why does it matter for cybersecurity, and why is securing these digital actors the next challenge?

    Agentic AI systems are a new generation of autonomous digital actors capable of perceiving information, reasoning across multiple steps, taking independent actions, and collaborating with other AI agents. These systems operate at machine speed, interact with business applications originally designed for humans, and continuously adapt their behavior as they learn. While this unlocks significant productivity and automation potential, it simultaneously creates a fundamentally different cybersecurity landscape.

    Traditional cybersecurity frameworks are built around human behavior: training, compliance, workflows, and static policies. Agentic systems break these boundaries. They generate new data flows, process data dynamically, and make probabilistic decisions that can change over time. This results in an expanded, permeable attack surface that legacy controls would struggle to manage.

    A major emerging threat is chained AI agent manipulation, where attackers could compromise one agent in a multi‑agent workflow to influence all downstream decisions. This is a digital parallel to classic social‑engineering attacks, but at machine scale and speed. Early attack patterns such as prompt injection and adversarial manipulation become even more dangerous when agents are interconnected and authorized to act freely.

    Organizations will now require AI risk professionals to secure the exposure these agents create: people who understand agent architectures, reasoning pathways, inter‑agent communication, and system‑wide risk propagation. Long‑term resilience will require embedding policy awareness into these agents and enabling them to detect when a decision exceeds their risk thresholds or requires human intervention. People tend to over‑trust automated systems, creating risk blindness. Therefore, the next evolution of cybersecurity must incorporate continuous behavioral monitoring of agents, anomaly detection across agent‑to‑agent and agent‑to‑data interactions, and adaptive guardrails capable of intervening when agents drift into unsafe regions.

    Agentic AI creates a new category of digital actors. The next major cybersecurity challenge is securing these autonomous actors, not only protecting data and human users. Organizations that proactively redesign governance, map agent data flows, enforce boundaries, and instrument continuous oversight will be best positioned to safely leverage agentic systems and manage their risks.
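The continuous behavioral monitoring and anomaly detection the post calls for can be illustrated with a toy baseline check: flag an agent whose action rate jumps far above its own recent average. The window size, warm-up length, and multiplier below are arbitrary illustrative values, not recommendations:

```python
from collections import deque


class RateMonitor:
    """Flags an agent whose action rate drifts far above its rolling baseline."""

    def __init__(self, window: int = 50, factor: float = 3.0, warmup: int = 10):
        self.factor = factor          # how far above baseline counts as drift
        self.warmup = warmup          # observations needed before flagging
        self.history = deque(maxlen=window)

    def observe(self, actions_per_minute: float) -> bool:
        """Record one observation; return True if it is anomalous
        relative to the rolling mean of recent observations."""
        anomalous = False
        if len(self.history) >= self.warmup:
            baseline = sum(self.history) / len(self.history)
            anomalous = actions_per_minute > self.factor * max(baseline, 1.0)
        self.history.append(actions_per_minute)
        return anomalous
```

A per-agent baseline matters because agents differ: a build bot legitimately making hundreds of calls a minute should not be judged by a helpdesk agent's tempo, and vice versa.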

  • New data from the 2026 CISO AI Risk Report should worry anyone rolling out GenAI in the enterprise. 71 percent of CISOs say AI has access to core systems, but only 16 percent govern that access effectively. 92 percent lack full visibility into AI identities, and 95 percent are not confident they could detect or contain misuse if it happened.

    This is not theoretical. In a customer conversation last week, we dug into how native connectors in tools like ChatGPT Enterprise and Claude effectively grant agents full user-level access into SaaS apps, with almost no app- or data-level visibility into what those agents actually did. IAM does its job issuing tokens and scoping access, but it stops there. The agent can still be prompt injected, hallucinate its way into bad decisions, and it does not have morals or a fear of getting caught.

    We would never stop at “permissions granted” for humans. We still run UEBA and bot detection on employees and customers, even after we give them least-privilege access, because we assume some identities will be compromised, coerced, or simply make mistakes. AI agents deserve the same or stronger runtime oversight, with continuous monitoring of which APIs they call, what data they touch, and whether their behavior matches the intent we think we configured.

    AI identities already act in production with real authority, but the real shift is not just new machine identities. It is human identities being projected through AI agents into more systems, with more autonomy, than our current governance can handle. Our security model has to move from “issue a token and hope for the best” to identity-first, always-on runtime controls that watch what human-tied agents actually do across apps and data, just like we already do for privileged users and bots, only at machine speed.

    Full report: https://lnkd.in/gUapynfJ

    #MCP #AIAgents #EnterpriseSecurity #Cequence #ChatGPTEnterprise #Claude
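The closing idea above, checking whether an agent's behavior matches the intent we think we configured, can be sketched as a runtime monitor that compares each observed API call against a declared intent map. The agent names, API names, and data classes below are invented for illustration; no real product's telemetry format is implied:

```python
class RuntimeMonitor:
    """Compares each API call an agent makes against its declared intent."""

    def __init__(self, declared_intent: dict):
        # declared_intent: agent -> set of (api, data_class) pairs it should need
        self.intent = declared_intent
        self.alerts = []

    def observe(self, agent: str, api: str, data_class: str) -> bool:
        """Return True if the call matches declared intent; otherwise record
        an alert so security can investigate the deviation."""
        if (api, data_class) in self.intent.get(agent, set()):
            return True
        self.alerts.append((agent, api, data_class))
        return False
```

Unlike the token check at issuance time, this runs on every call, so a prompt-injected agent reaching for data outside its stated purpose shows up immediately in the alert stream.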
