Today we are excited to launch Microsoft Cyber Pulse: a look inside Microsoft's first-party telemetry and research, focused on AI security. It shows that 80% of Fortune 500 companies are already using AI agents. And while adoption is accelerating, many organizations lack the security, governance, and observability needed to manage them. That visibility gap is now a business risk. Security, governance, and observability must be built in from the start, and it is critical that AI agents are protected like any user. Learn more in our new report: https://lnkd.in/d-_tXBw5
Huge insights here, Vasu. The visibility gap you highlighted is already showing up in co‑sell conversations across the ecosystem - partners are moving fast on AI agents, but security, governance, and observability often lag behind. This report gives the field and the partner community a clear framework for how to close that gap and build AI solutions that are secure, scalable, and co‑sell‑ready from day one.
Adoption is moving fast, but a lot of orgs are still catching up on governance and visibility. Treating AI agents like users from the start feels like a really practical way to avoid creating risk later.
This framing is important and timely. What the data makes visible is not only a tooling or security gap but a coherence gap between agent autonomy and human accountability as adoption accelerates. Observability, governance, and Zero Trust address exposure, but the real invariant is whether authority, escalation paths, and decision ownership remain intact when agents act continuously. Without that decision coherence layer, visibility increases while responsibility still diffuses precisely when pressure rises.
80% of Fortune 500 companies using AI agents, many without governance, is the stat that should keep every CISO up at night. The visibility gap you're naming has three layers: security controls access, observability shows what happened, and governance proves why. Most orgs will solve the first two and skip the third. Decision provenance on every agent action (what it accessed, what it chose, and why) is the layer that turns observability into accountability. That's the gap between monitoring agents and actually governing them.
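One way to make the "decision provenance" idea concrete is a hash-chained record per agent action. This is a minimal sketch under my own assumptions; the `log_agent_action` helper and its fields are illustrative, not part of the report.

```python
import hashlib
import json
from datetime import datetime, timezone

def log_agent_action(agent_id, resources_accessed, action, rationale, prev_hash=""):
    """Build a tamper-evident provenance record for one agent action:
    what it accessed, what it chose, and why, chained to the prior record."""
    record = {
        "agent_id": agent_id,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "accessed": resources_accessed,   # what it accessed
        "action": action,                 # what it chose
        "rationale": rationale,           # why
        "prev_hash": prev_hash,           # links records into an audit chain
    }
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    return record

# Two chained records form an auditable trail: altering the first
# invalidates the hash stored in the second.
r1 = log_agent_action("invoice-bot", ["erp:/invoices/884"], "approve", "matches PO")
r2 = log_agent_action("invoice-bot", ["erp:/invoices/885"], "escalate",
                      "amount over threshold", prev_hash=r1["hash"])
```

The chaining is what upgrades a plain activity log into evidence: the trail can be replayed and checked, which is the "accountability" half of the observability/governance distinction.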
Satya Nadella This is the Vendor Upsell Loop in its purest form. Microsoft is effectively saying: "We sold you the disease (Copilot/Agents), and now we are excited to sell you the cure (Cyber Pulse)." But the specific phrase "critical that AI agents are protected like any user" is a dangerous architectural category error. If you give an agent a user ID, you are engaging in Identity Laundering: granting a probabilistic script (which might be prompt-injected) the same rights as a vetted employee. Agents should not be users. They should be ephemeral, capability-constrained service objects governed by cryptographic constraints, not user accounts. An agent shouldn't "log in"; it should prove it is running signed code in a trusted enclave before it can touch a single byte of data. Anything less is just Identity Laundering for software.
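To make the "ephemeral, capability-constrained service object" idea tangible, here is a minimal sketch: a short-lived capability token bound to a code hash and a scope, rather than a long-lived user account. The HMAC scheme, key handling, and names are all illustrative; a production design would use remote attestation and asymmetric signatures, not a shared demo key.

```python
import hashlib
import hmac
import json
import time

SIGNING_KEY = b"demo-key"  # illustrative only; real systems derive keys via attestation

def issue_capability(agent_code_hash, scope, ttl_seconds=300):
    """Issue a short-lived capability bound to a specific code hash and
    scope, instead of granting the agent a persistent user identity."""
    claims = {"code": agent_code_hash, "scope": scope,
              "exp": time.time() + ttl_seconds}
    payload = json.dumps(claims, sort_keys=True)
    sig = hmac.new(SIGNING_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return {"payload": payload, "sig": sig}

def verify_capability(token, required_scope, running_code_hash):
    """Grant access only if the signature verifies, the token has not
    expired, and both the requested scope and running code match claims."""
    expected = hmac.new(SIGNING_KEY, token["payload"].encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, token["sig"]):
        return False
    claims = json.loads(token["payload"])
    return (claims["exp"] > time.time()
            and required_scope in claims["scope"]
            and claims["code"] == running_code_hash)

tok = issue_capability("sha256:abc123", ["read:invoices"])
```

The design choice being argued for: the credential expires on its own, names exactly what it permits, and is bound to the code that may use it, so a prompt-injected agent cannot quietly inherit an employee's standing permissions.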
The visibility gap you describe shows up again at the decision layer. We can see agents, we can secure their infrastructure, but we still struggle to prove that critical decisions were made within acceptable bounds and under human accountability. In other words, observability tells us what agents did. Governance has to answer who was responsible when models disagreed and why a particular action went through. That’s where many organizations still lack an architectural answer.
Great read. Thanks for sharing. In AI security, the principle remains clear: you can't protect what you can't see, and you can't govern what you don't understand. Without visibility into data flows, model behavior, and system dependencies, even the most advanced defenses fail to address hidden risks. Real security begins with understanding: mapping every component, dataset, and decision path across the AI lifecycle.
This is the right mindset: managing AI like employees. AI is a business risk just like any other, and it needs to be governed accordingly. Blind trust in autonomous systems is not a strategy, especially for those of us in security who know how often technology behaves in unexpected ways. Governance cannot be reactive. It requires human-in-the-loop oversight, continuous monitoring, clear ownership, and visibility into where agents operate and what they can access. Just as with cyber risk, losses are not hypothetical; they are inevitable. The differentiator will be who builds resilience early by understanding potential loss scenarios, quantifying exposure, and aligning AI activity with risk appetite before scale outpaces control. This is exactly the philosophy behind our AI governance suite, built to give leadership that visibility and control from day one. https://hubs.li/Q03SN0-n0
Thanks for sharing the report, Vasu. As AI becomes more embedded, security has to evolve with it. It's important that we build systems grounded in trust.
AI has become an essential daily assistant, yet complex engineering work demands even more from it. In high-intensity workflows we hit a critical limit: "context drift." When large volumes of new data begin to displace foundational constraints, manual intervention is needed to maintain precision. The methodology matters, though. By adopting a "Mentor-Mentee" framework, guiding the model through 3 to 4 iterative critique-and-revise cycles while restating the constraints each time, we can work past these memory plateaus. This collaborative approach doesn't just solve the problem; it unlocks significant time savings and delivers high-fidelity results that were previously out of reach.
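The iterative cycle described above can be sketched as a loop that re-asserts the foundational constraints on every round so later context cannot displace them. Everything here is my own illustration: `mentor_cycle`, the prompt format, and the `echo_model` stand-in (a real LLM API call would replace it) are all hypothetical.

```python
def mentor_cycle(task, constraints, ask_model, rounds=4):
    """Run 3-4 'Mentor-Mentee' rounds: each prompt restates the
    constraints (countering context drift) and asks the model to
    critique and revise its previous draft."""
    draft = ""
    for _ in range(rounds):
        prompt = (
            f"Constraints (must hold): {constraints}\n"
            f"Task: {task}\n"
            f"Previous draft: {draft or '(none)'}\n"
            "Critique the draft against the constraints, then revise it."
        )
        draft = ask_model(prompt)
    return draft

# Stand-in model so the sketch runs offline; it just records each prompt
# and returns a versioned placeholder draft.
calls = []
def echo_model(prompt):
    calls.append(prompt)
    return f"draft-v{len(calls)}"

result = mentor_cycle("summarize the spec", "keep all units in SI", echo_model,
                      rounds=3)
```

The key point is structural: the constraints travel with every round instead of living only at the top of a long transcript, which is what the "mentor" role is doing in the workflow described above.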