One deepfake video call can result in a $25M loss, especially when your employees' training isn't built for today's threats.

Our SVP & Chief Evangelist, Chris Murphy, breaks down the widening gap between AI-powered social engineering and outdated security awareness training in his latest article for @informationsecuritybuzz.

The TL;DR?
🔹 AI has eliminated the typical red flags we're trained to spot.
🔹 High completion rates on security awareness training (SAT) do not equal a secure organization.
🔹 HR needs to collaborate with your security team.
🔹 Behavioral metrics > checkbox training.

Read the full article 👉 https://hubs.ly/Q0460bGg0

#Cybersecurity #SecurityAwareness #SocialEngineering #AI #Deepfakes #HumanRisk #SecurityTraining #CISO
AI-Powered Social Engineering Threats Outpace Outdated Security Training
More Relevant Posts
As if cybersecurity challenges weren't already enough, AI is adding an entirely new layer of risk. Threat actors are now leveraging AI to move faster, scale attacks, and become more sophisticated than ever before. What used to take days now takes minutes.

But here's the real issue: complacency. Too many business leaders, especially in law firms, still view cybersecurity as a back-office IT function rather than a critical business priority. The lack of urgency, combined with rapidly evolving threats, is creating a perfect storm.

Sensitive client data. Strict compliance requirements. Reputational risk. Law firms, in particular, sit at the center of high-value information, yet many remain underprepared for the realities of today's threat landscape.

AI isn't the future of cyber risk; it's the present. The organizations that fail to adapt quickly will be the ones learning the hardest lessons. Now is the time to shift from awareness to action.

#Cybersecurity #ArtificialIntelligence #LawFirms #RiskManagement #DataSecurity #Leadership
🎨 AI vs Human Trust: The New Battlefield

Attackers aren't just targeting systems anymore. They're targeting how humans decide who to trust.

The Psychology of Urgency. AI-generated attacks often include:
• Time pressure
• Confidentiality requests
• Financial urgency
• Authority-based demands

Urgency reduces critical thinking. Attackers know this.

💡 Full details at: https://lnkd.in/eh-D2VrW

🕵️♀️ Felix H Martinez. Helping you stay safe & global organizations stay ahead.

#AI #AIagents #DigitalTrust #Awareness #AIEthics #CyberSecurity #Infosec #CISO #Prompt #Socialengineering
You don't control technology once it's born. You influence its direction. That is the reality cybersecurity lives in: the tension between what we build and what it becomes.

Systems do not stay static. They evolve, adapt, and get used in ways we never fully intended. Controls help, but they do not equal control.

AI amplifies this gap. A small decision, such as a dataset choice, a model behavior, or a default configuration, can scale into systemic impact. What feels minor at build time can reshape outcomes at scale.

This is why cybersecurity is shifting. It is no longer just about defending systems. It is about guiding them. Because the real question is not "Can we control this?" It is "How do we responsibly influence what it becomes?"

#CyberSecurity #AI #Technology #Risk #Leadership #cyberlysafe #StaySafe #Infosec
57 days. Most teams think that’s enough time to “prepare.” It’s not. Because what’s coming in 2026 isn’t another trend. It’s pressure. From AI. From regulation. From the board. And the real question is: Will your decisions hold up when they’re challenged by peers who carry the same weight? At Grand IT Security, you don’t sit and listen. You pressure-test. 30 roundtables. Real scenarios. Closed-room discussions. Not for everyone. And that’s the point. #granditsecurity #cybersecurity #ciso #infosec #riskmanagement #ai #cyberleaders #networking #executive #stockholm
AI is already being used inside most firms — whether leadership has approved it or not. The real question isn’t “Are we using AI?” It’s “How are we controlling it?” Curious to hear how others are approaching this: 👉 Does your firm have a formal AI usage policy yet? 👉 Are there specific tools your organization has approved for handling internal or client information? In many cases, the biggest risk isn’t the technology — it’s the lack of clear guidelines. Interested in your perspective. #AICompliance #Cybersecurity #AIDataPrivacy #AIgovernance #RiskManagement #InformationSecurity
Most organizations have an AI policy document. Almost none have an AI governance strategy. The difference matters more than most leadership teams realize. A policy document tells you what is allowed. A governance strategy tells you who is accountable when something goes wrong, how risk is assessed before deployment, and what happens when a model makes a decision that harms someone. Right now, organizations worldwide are deploying AI at speed and writing policies to catch up. That sequence is the problem. Governance isn't a document you produce after the fact. It's a discipline you build before you need it. The organizations that understand this today will not be the ones managing a crisis in 2026. What's missing from your organization's AI governance approach right now? We'd like to hear what you're seeing on the ground. #AIGovernance #CyberSecurity #DataPrivacy #RiskManagement #Cybrix
We've spent decades hardening systems… just in time for the threat to move beyond them. Firewalls got smarter. Access got tighter. Monitoring got sharper. And yet, something doesn't sit right.

AI is no longer sitting in the background waiting for commands. It's shaping decisions. Guiding outcomes. Influencing judgment in ways that don't show up in traditional logs.

And here's where it gets uncomfortable: awareness is rising, but control is not keeping pace. That gap between what we see and what we actually govern is where risk lives now.

This isn't a knowledge problem. It's a control problem. Because in this environment, you're not just managing systems; you're managing influence. And standing still has consequences of its own.

So the question becomes: are you observing the signal, or reacting to the noise?

🎥 I put this into a short visual breakdown that connects the dots. Watch the video below 👇

Cognitive Security Institute #CyberBay2026 #AISecurity #Cybersecurity #CIOCs #PersistentKnowledgeDilemma #PKD
🎙️ Episode 4 of The Tech Q&A is live – The AI-First Approach to Cybersecurity.

AI is everywhere right now. But when it comes to cybersecurity, the real question is: what does an AI-first strategy actually look like in practice? In this episode, I'm joined by Derek Stephenson to explore how organisations are starting to integrate AI into their security strategy and where the real opportunities (and risks) sit. This isn't about hype. It's about practical implementation.

Here's what we covered:
🤖 What an AI-first cybersecurity strategy actually means for organisations
🔎 How AI is being used to improve threat detection and security operations
👥 Why human oversight is still critical, even with advanced AI tools
📈 How skills in cybersecurity teams are evolving as AI adoption increases
📜 The growing need for standards and frameworks around AI security
⚠️ The rise of shadow AI and how organisations should think about governing it
🔮 How AI could reshape the future of cyber attacks and defence

Derek shares some great insights on how security leaders should be thinking about AI today: not just as a tool, but as a strategic shift in how we approach cyber defence.

As always with Tech Q&A: 5 focused questions. Straight to the point. Conversations that move the needle.

How are you seeing AI impact cybersecurity in your organisation right now? 👇

#cybersecurity #AI #cyberdefence #securitystrategy #artificialintelligence #infosec https://lnkd.in/e-ZBkJxn
The AI-First Approach to Cybersecurity
https://www.youtube.com/
Fire Mountain Labs • 1K followers • 3w
Nice piece. If you put the threat in terms of intent, capability, and opportunity, the adversary, leveraging AI, now has vast capability to do harm. The intent/incentive to do harm (part of the human condition, unfortunately) is not going away. The shift towards a fully digital and remote work culture is creating new opportunities for the adversary. Brave new world, huh?