Great presentations by Lakera‘s David Haber at Check Point Software#sko2026 on how Check Point is helping secure organisations shift to AI. It’s about the fundamentals but also ensuring you have guardrails and threat prevention that are built for the new AI era. This includes discovery, governance and guardrails across the ecosystem in an open garden approach
Raymond SchippersDavid Haber Great to see the focus on fundamentals. Guardrails, discovery and governance are the right starting point.
What’s emerging now is the next boundary: when AI systems begin acting inside live environments, not just generating outputs.
At that moment the enterprise question becomes much sharper:
Who held authority when the action executed?
What escalation occurred if the system was uncertain?
And what evidence survives afterwards?
As organisations move from AI assisting to AI operating, governance has to move from policy and monitoring to control at the moment of action.
That’s where the next wave of security architecture will be defined.
Strike48 launched two months ago with a simple premise: AI agents in security operations are only as useful as what they can see. If your agents can't access all your log data, they can't do real work.
We're closing that log visibility gap so teams can stop paying to store the same logs in multiple places and stop adding different AI tools to each data silo.
If you want to talk about what agentic log intelligence looks like in practice, let’s chat.
#RSAC
Today, we're excited to announce our partnership with TrueFoundry 🤝
Akto Argus 🐬 now connects natively into TrueFoundry's AI Gateway to bring runtime security directly to your AI Agents.
What does that mean for your stack? You can now understand the security risks of your AI Agents and enforce guardrails with Akto + Trufoundry.
Awesome to work with Anuraag Gutgutia and the TF team for the partnership!
Akash Vineet Ankush Jain
Agentic AI is revolutionizing decision intelligence in security ops by providing real-world context and intelligent automation, allowing teams to focus on genuine risk reduction. Sysdig and Omdia (by Informa TechTarget) explored how this shift is transforming SecOps in a recorded discussion. Watch the replay here 👉 https://okt.to/NhtmMd
The rise in AI bots has created new challenges for accurately assessing humanitarian data use on websites like HDX.
Managing bot identification and platform safety requires a reallocation of resources towards infrastructure and specialized skillsets, ensuring our platforms remain human-first but AI-ready.
Learn more: https://lnkd.in/eAHD7iwn
The rise of personal AI agents like Clawdbot and Moltbot is exciting, but it also introduces a new layer of security risk that many organizations are not yet fully considering.
As AI agents become more embedded in our workflows, AI security and governance must evolve just as quickly.
Personal AI agents like OpenClaw (formally known as Clawdbot and Moltbot) introduce a new attack surface — and we've got the code to prove it.
Our open source Skill Scanner uncovers hidden risks in agent skill files.
👉 https://cs.co/6045B6EKS1
A few weeks ago I shared a post about the AI supply chain risk and how Cisco AI Defence can help protect your data. https://lnkd.in/eJByxt_e
This is a perfect example of what can happen when organisations expose AI systems without the right guardrails in place.
Take Skill Scanner, for example. It’s still only a best-effort security scanner for AI agent skills that aims to detect:
• Prompt injection
• Data exfiltration risks
• Malicious code patterns
It combines pattern-based detection (YAML + YARA), LLM-as-a-judge techniques, and behavioural dataflow analysis to maximise detection coverage while minimising false positives.
Tools like this are useful, but they also highlight an important point.. security cannot be an afterthought in the AI era.
Security needs to be defined, intentional, and holistic across everything you do, from the models you use to the data you expose to the integrations across your AI supply chain.
Don’t let the excitement of innovation become your weakest security control.
Personal AI agents like OpenClaw (formally known as Clawdbot and Moltbot) introduce a new attack surface — and we've got the code to prove it.
Our open source Skill Scanner uncovers hidden risks in agent skill files.
👉 https://cs.co/6045B6EKS1
🚨 Introducing Black Duck Signal 🚨
Agentic AppSec that automatically finds and fixes security defects — without false positives or AI hallucinations.
✅ Works with Claude Code, Google Gemini, and GitHub Copilot
✅ Supports any language, from legacy COBOL to modern Rust
✅ Powered by ContextAI — trained on petabytes of human‑validated security intelligence
No noise.
All Signal.
#appsec#blackduck#cybersecurity#applicationsecurityhttps://lnkd.in/d6NtEnPv
Black Duck Signal, the first application security solution that keeps pace with AI-driven development.
What makes Signal fundamentally different?
ContextAI™ - our purpose-built security model containing 20+ years of battle-tested intelligence from thousands of real-world codebases. While generic AI tools struggle with hallucinations and false positives, Signal combines LLM reasoning with petabytes of human-vetted security ground truth.
The result: Analysis that eliminates the noise and delivers fixes that work.
📍Come by booth S-1027 at #RSAC2026 to get a demo
👉 Read the blog: https://bit.ly/4sxDf7I#BlackDuck#BlackDuckSignal
AI agents are now interacting with real workflows, customers, and data – yet most security controls still assume a world without autonomous systems.
At Glean, we’re working directly with security and IT leaders on how to close that gap. On Thursday, March 12, at 10AM PT, we’re hosting a Security Showcase: Deploy AI Agents with Confidence to share how we think about AI ambitions and security readiness together.
Sign up to feel confident about deploying AI agents at your organization: https://lnkd.in/e_jWVF3d
Raymond Schippers David Haber Great to see the focus on fundamentals. Guardrails, discovery and governance are the right starting point. What’s emerging now is the next boundary: when AI systems begin acting inside live environments, not just generating outputs. At that moment the enterprise question becomes much sharper: Who held authority when the action executed? What escalation occurred if the system was uncertain? And what evidence survives afterwards? As organisations move from AI assisting to AI operating, governance has to move from policy and monitoring to control at the moment of action. That’s where the next wave of security architecture will be defined.