Agentic AI: The Autonomous Evolution of Cybercrime

Cybersecurity is entering a new era as agentic AI replaces AI-assisted tools with autonomous systems capable of executing multi-stage attacks at machine speed. Reinforcement Learning (RL) drives this evolution, enabling agents to discover vulnerabilities, deploy exploits, and pursue complex objectives without human oversight. Deep Q-Networks and other RL models form the technical backbone of this shift. Nation-state actors and organized crime syndicates have already begun integrating these capabilities, as threat intelligence from 2024–2025 confirms.

To counter autonomous threats, organizations must upgrade their defense strategies. AI-powered platforms, Zero Trust architectures, and deception technologies form the core of a resilient response. Enterprise leaders and policymakers must act decisively to prepare for an era of cyber conflict dominated by intelligent machines, where only machine-speed defense can match machine-speed attacks.


I. Introduction: The Dawn of Agentic Warfare

Cybersecurity is undergoing a pivotal shift from human-led operations to autonomous, goal-driven systems known as agentic AI: intelligent agents capable of perceiving, deciding, and executing multi-stage tasks without human input [1]. Where generative AI merely helps craft malware and phishing content, these agents automate the entire attack lifecycle, transforming cyber conflict into machine-speed engagements.

Researchers at Unit 42 revealed that AI agents can execute a complete ransomware kill-chain in just 25 minutes [2], compared to the average breach lifecycle of 277 days [3]. This compression eliminates human response windows and establishes a new battlefield where autonomous systems compete in microseconds [4].

Agentic AI redefines cybersecurity by enabling autonomous systems to conduct and defend against cyberattacks at unprecedented speed.

Nation-states and criminal networks have begun deploying agentic systems for espionage and scaled attacks. The U.S. Department of Defense, through initiatives like Project Thunderforge, views autonomous capabilities as battlefield necessities [20]. This convergence signals that agentic warfare is not speculative; it is already reshaping the threat landscape.


II. The Technology of Autonomous Attacks

The leap from AI-assisted to fully autonomous attacks is predicated on a specific set of technologies that enable agents to learn, adapt, and act independently. Understanding this technical foundation is critical to appreciating the nature of the threat and developing effective countermeasures. This section examines the core machine learning models and architectural concepts that define agentic warfare.

A. Reinforcement Learning: The Engine of Autonomy

Reinforcement Learning (RL) enables agents to learn optimal strategies through trial and error [6]. Unlike supervised learning, RL uses environmental feedback—rewards and penalties—to train agents toward high-level goals, such as compromising a system.

Models such as Deep Q-Networks (DQN) and Proximal Policy Optimization (PPO) underpin both offensive and defensive operations: DQN learns the value of discrete actions, such as which exploit to attempt next, while PPO offers stable policy updates during training. RL-powered agents can automate penetration testing end to end by integrating with frameworks like Metasploit, identifying optimal exploits, and accelerating the development of zero-day exploits.
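To make the learning loop concrete, here is a minimal sketch of tabular Q-learning over a toy attack-graph environment. Everything in it is an assumption made for illustration: the graph, the action names, and the reward values are invented, and no real exploitation framework is involved.

```python
import random
from collections import defaultdict

# Toy "attack graph": states are footholds, actions are candidate moves.
# Graph layout, action names, and rewards are invented for illustration only.
GRAPH = {
    "internet":     {"scan": "dmz_host", "phish": "workstation"},
    "dmz_host":     {"pivot": "internal_net", "rescan": "dmz_host"},
    "workstation":  {"escalate": "internal_net"},
    "internal_net": {"locate_db": "crown_jewels"},
    "crown_jewels": {},
}
GOAL = "crown_jewels"
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.2

def step(state, action):
    """Apply an action; unrecognized actions leave the agent in place."""
    nxt = GRAPH[state].get(action, state)
    return (nxt, 10.0, True) if nxt == GOAL else (nxt, -1.0, False)

def choose(q, state, epsilon):
    actions = list(GRAPH[state])
    if random.random() < epsilon:
        return random.choice(actions)                     # explore
    return max(actions, key=lambda a: q[(state, a)])      # exploit learned values

q = defaultdict(float)
for _ in range(500):                                      # training episodes
    state, done = "internet", False
    while not done:
        action = choose(q, state, EPSILON)
        nxt, reward, done = step(state, action)
        best_next = max((q[(nxt, a)] for a in GRAPH[nxt]), default=0.0)
        # Standard one-step Q-learning update.
        q[(state, action)] += ALPHA * (reward + GAMMA * best_next - q[(state, action)])
        state = nxt

# The greedy policy now traces a short path from foothold to objective.
state = "internet"
for _ in range(10):
    if state == GOAL:
        break
    action = choose(q, state, epsilon=0.0)
    print(f"{state} --{action}--> {GRAPH[state][action]}")
    state = GRAPH[state][action]
```

Research prototypes apply the same optimization dynamic to far larger state spaces, which is where integration with tooling such as Metasploit comes in.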

B. The Autonomous Attack Chain

Agentic AI enables the full automation of the cyber attack chain, with specialized agents collaborating to execute each phase of an operation. This modular approach, where different agents handle specific tasks like reconnaissance, initial access, and execution, allows for highly efficient and adaptive campaigns that can proceed with minimal human oversight. This represents a fundamental shift from a linear, human-driven process to a dynamic, self-directed one.

A real-world demonstration by Palo Alto Networks' Unit 42 vividly illustrates this concept. The team developed an agentic framework in which multiple AI agents, each with a specific function, worked in tandem to execute a full ransomware campaign in just 25 minutes [2]. This automated kill chain highlights the potential for adversaries to operate at a pace that overwhelms conventional defenses. The stages of such an attack are no longer discrete human actions but a fluid sequence of autonomous operations.
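The orchestration pattern itself is simple to sketch. The snippet below is not Unit 42's framework; it is a hypothetical illustration in which three stub agents, covering reconnaissance, initial access, and execution, pass a shared campaign context down a pipeline. The agent names and context fields are invented, and the stubs only record what a real agent would do.

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class CampaignContext:
    """Shared state handed from agent to agent; fields are hypothetical."""
    target: str
    findings: dict = field(default_factory=dict)

def recon_agent(ctx: CampaignContext) -> None:
    # A real agent would enumerate exposed services; this stub just records one.
    ctx.findings["open_services"] = ["https", "vpn-gateway"]

def access_agent(ctx: CampaignContext) -> None:
    # Chooses an entry point based on what reconnaissance reported.
    services = ctx.findings.get("open_services", [])
    ctx.findings["entry_point"] = services[0] if services else None

def execution_agent(ctx: CampaignContext) -> None:
    # Final stage placeholder: records the intended action instead of acting.
    ctx.findings["planned_action"] = f"simulate payload via {ctx.findings['entry_point']}"

PIPELINE: List[Callable[[CampaignContext], None]] = [
    recon_agent,
    access_agent,
    execution_agent,
]

def run_campaign(target: str) -> CampaignContext:
    ctx = CampaignContext(target=target)
    for agent in PIPELINE:              # the coordinator simply sequences agents
        agent(ctx)
    return ctx

if __name__ == "__main__":
    print(run_campaign("lab-environment").findings)
```

The value of the pattern is that stages become swappable and parallelizable, which is what lets a campaign adapt without a human operator sequencing each step.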

Autonomous attacks are powered by reinforcement learning and agent collaboration across the kill chain, enabling rapid, adaptive, machine-led campaigns that shrink the patch gap and outpace traditional defenses.


Table 1: The Agentic AI Attack Chain

C. Multi-Agent Systems and Emergent Behavior

Multi-Agent Systems (MAS) coordinate swarms of specialized agents that solve problems more efficiently than any single agent could [8]. While this approach increases effectiveness, it also introduces unpredictable emergent behavior: actions that were never explicitly programmed, such as exploiting unintended system bugs.

These behaviors raise concerns about control and accountability. Offensive swarms intended for espionage could inadvertently cause infrastructure collapse. Defenders must grapple with managing systems whose actions defy human predictability, escalating risks in machine-speed conflicts.

Multi-Agent Systems offer powerful collaboration but introduce unpredictability through emergent behavior, challenging control and accountability.

III. The Adversaries: Who Wields Agentic AI?

The development and deployment of autonomous attack agents are not uniform across the threat landscape. Different adversary groups, driven by distinct motivations ranging from geopolitical espionage to financial profit, are adopting these technologies at varying paces and for different purposes. Understanding who is using these tools and why is crucial for anticipating future attacks and calibrating defensive strategies accordingly.

A. Nation-State Actors: The Strategic Imperative

State-sponsored Advanced Persistent Threat (APT) groups are at the forefront of AI adoption, viewing it as a critical tool for enhancing espionage, disruption, and intelligence-gathering operations. While current, observable use by APTs from Iran, China, North Korea, and Russia has primarily focused on leveraging generative AI as a productivity tool for tasks like scripting and phishing, this is merely a precursor to the adoption of more autonomous systems. The strategic imperative to maintain an advantage in cyber warfare makes the development of offensive agentic AI a logical and inevitable next step for these sophisticated actors.

Nation-states are rapidly investing in agentic AI for espionage and strategic dominance, making autonomous capabilities a central fixture in the future of cyber conflict.

The interest of nation-states in this domain is underscored by their own defensive investments. The U.S. Department of Defense's Project Thunderforge, for example, is a major initiative to integrate AI-driven planning, simulation, and agent-based wargaming into operational workflows for commands like INDOPACOM and EUCOM. This effort to build autonomous defensive agents signals a clear recognition at the state level that the future of conflict will be fought by and against autonomous systems. It is a strategic certainty that if a nation is developing autonomous defenses, its adversaries are developing autonomous offenses.

For nation-states, agentic AI offers the ability to conduct highly scalable and deniable operations. Autonomous agents could be deployed to persistently probe adversary networks for zero-day vulnerabilities, conduct widespread intelligence gathering, or execute disruptive attacks on critical infrastructure with a speed and precision that human teams cannot match. The use of autonomous systems also complicates attribution, providing a layer of plausible deniability that is highly valuable in geopolitical conflicts.


B. Organized Crime: The Industrialization of Exploits

Organized cybercrime syndicates are rapidly adopting agentic AI to industrialize their operations, driven by one overriding factor: return on investment. Autonomous agents enable end-to-end automation across the attack lifecycle, turning cybercrime from a craft into a scalable, high-efficiency enterprise [10].

Ransomware-as-a-Service (RaaS) illustrates this shift. Criminal groups already use AI to generate custom ransomware that adapts tactics in real time [11]. Agentic systems take this further by automating target selection, intrusion, encryption, and ransom negotiation. These agents can identify high-value, financially capable targets and execute attacks autonomously, potentially even deploying AI-generated deepfakes to manipulate victims [11].

Cybercriminals use agentic AI to industrialize attacks for maximum scale and profitability.

Industrialization extends into credential theft and fraud. Bots trained to mimic human behavior now bypass CAPTCHAs and other defenses at a massive scale [12]. Automation reduces operational costs while amplifying attack volume and success rates, creating a compounding cycle where profits are reinvested into building increasingly advanced agentic tools [13].


IV. The Defender’s Dilemma: Countering Machine-Speed Threats

Autonomous attack agents compress cyberattack timelines to seconds, rendering human-led defenses obsolete [3]. To remain viable, defenders must adopt intelligent, automated responses at machine speed.


A. The Autonomous Defense Imperative

Machine-speed attacks demand machine-speed responses. Security platforms powered by AI now detect, investigate, and neutralize threats autonomously, eliminating the delays of human intervention. These systems rely on behavioral analysis to identify malicious actions such as unauthorized file encryption, abnormal network activity, and credential harvesting—patterns that remain detectable despite code obfuscation [14].
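A minimal sketch of the underlying idea, assuming an unsupervised anomaly detector from scikit-learn and synthetic host telemetry, is shown below. The feature set, data, and contamination rate are invented for the example; commercial platforms rely on far richer models and signals.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Synthetic host telemetry: rows are per-process windows, columns are simple
# behavioral features. The feature choice is illustrative only.
rng = np.random.default_rng(0)
baseline = np.column_stack([
    rng.poisson(3, 2000),        # files written per minute
    rng.poisson(2, 2000),        # outbound connections per minute
    rng.normal(4.5, 0.5, 2000),  # mean entropy of written files
])

# Fit on baseline behavior only; anything far from it scores as anomalous.
model = IsolationForest(contamination=0.01, random_state=0).fit(baseline)

# A ransomware-like burst: mass file writes with high-entropy (encrypted) output.
suspect = np.array([[400, 1, 7.9]])
print(model.predict(suspect))          # -1 means flagged as anomalous
print(model.score_samples(suspect))    # lower score means more anomalous
```

The point is that the encryption-like burst is flagged by its behavior, not by any signature of the code that produced it.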

Leading vendors have operationalized this concept. SentinelOne integrates EDR, XDR, and SIEM capabilities into a unified platform for autonomous protection across endpoints, cloud workloads, and identities. Darktrace’s “Enterprise Immune System” uses unsupervised machine learning to map baseline network behavior and respond instantly to anomalies. CrowdStrike’s Falcon platform analyzes trillions of events in real time, leveraging proprietary AI models to block hostile patterns. Each system aims to achieve autonomous response, neutralizing threats faster than adversaries can execute them.

Behavioral analysis enables autonomous detection and response, transforming defense into a machine-speed function essential for modern resilience.


Figure 1: Autonomous Threat Detection and Response

B. Fighting Swarms with Swarms: Defensive Multi-Agent Systems

Defensive Multi-Agent Reinforcement Learning (MARL) trains decentralized teams of agents, each responsible for a specific security task. Training takes place in cyber gyms: simulated environments where blue-team agents compete against red-team adversaries, developing adaptive strategies and learning to share intelligence.

This swarm-based approach strengthens resilience against coordinated attacks and lateral movement, outperforming centralized defenses and marking a shift toward intelligent collaborative security teams.
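A deliberately tiny red-versus-blue training loop, sketched below, shows the co-training dynamic in its simplest form. It is a stateless toy with invented hosts and rewards, nothing like a full cyber gym with network state and partial observability, but the adversarial learning loop has the same shape.

```python
import random
from collections import defaultdict

# Stateless red-vs-blue toy: each round red picks a host to attack and blue
# picks a host to watch. A match means detection (blue wins); a miss means
# compromise (red wins). Hosts and rewards are invented for illustration.
HOSTS = ["web", "db", "mail", "build"]
EPS, ALPHA = 0.1, 0.1

def act(q):
    if random.random() < EPS:
        return random.choice(HOSTS)              # explore
    return max(HOSTS, key=lambda h: q[h])        # exploit learned values

red_q, blue_q = defaultdict(float), defaultdict(float)
for _ in range(20000):
    attack, watch = act(red_q), act(blue_q)
    red_reward = -1.0 if attack == watch else 1.0
    # Bandit-style update: each side nudges the value of the action it took.
    red_q[attack] += ALPHA * (red_reward - red_q[attack])
    blue_q[watch] += ALPHA * (-red_reward - blue_q[watch])

# Because each side keeps adapting to the other, greedy play keeps shifting
# targets rather than settling on one host, the habit-breaking pressure that
# co-trained defenders are meant to provide.
print({h: round(blue_q[h], 2) for h in HOSTS})
```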

Multi-agent reinforcement strategies offer scalable, adaptive protection aligned with the pace and complexity of modern conflict.

C. New Defensive Paradigms: Zero Trust and Deception

Traditional perimeter-based defenses are ineffective against autonomous agents [14]. Zero Trust architectures verify every request and segment access, limiting lateral movement even after an initial breach.
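The core of the model is a per-request access decision that never assumes trust from network location. The sketch below is a hypothetical illustration; the roles, attributes, and segment mappings are invented.

```python
from dataclasses import dataclass

# Hypothetical per-request access decision: identity, device posture, and
# context are evaluated on every call, never inherited from network location.
@dataclass
class AccessRequest:
    user: str
    mfa_verified: bool
    device_compliant: bool
    resource: str
    segment: str            # network segment the resource lives in

ALLOWED_SEGMENTS = {        # who may reach which segment, regardless of source IP
    "analyst": {"siem"},
    "dba":     {"db"},
}

def authorize(req: AccessRequest, role: str) -> bool:
    """Deny by default; every condition must hold for this single request."""
    return (
        req.mfa_verified
        and req.device_compliant
        and req.segment in ALLOWED_SEGMENTS.get(role, set())
    )

print(authorize(AccessRequest("a.chen", True, True, "log-index", "siem"), "analyst"))   # True
print(authorize(AccessRequest("a.chen", True, False, "log-index", "siem"), "analyst"))  # False: device posture
```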

Deception technologies, such as honeypots, divert attackers to fake assets and expose threat tactics. These interactions generate high-quality threat intelligence that strengthens defensive AI models and raises the cost of attacks.
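A low-interaction honeypot can be as small as the sketch below: a TCP listener on a port nothing legitimate should touch, logging whoever connects and what they send first. The port and banner are arbitrary choices for the example; production deception platforms emulate full services and feed the resulting telemetry into detection and threat-intelligence pipelines.

```python
import socket
import datetime

# Minimal low-interaction honeypot: any traffic to this decoy is suspicious
# by design, so every connection attempt is worth logging.
HOST, PORT = "0.0.0.0", 2222            # port choice is arbitrary for the example

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((HOST, PORT))
    srv.listen()
    print(f"decoy listening on {HOST}:{PORT}")
    while True:
        conn, (ip, src_port) = srv.accept()
        with conn:
            stamp = datetime.datetime.now(datetime.timezone.utc).isoformat()
            conn.sendall(b"SSH-2.0-OpenSSH_8.9\r\n")   # banner keeps the peer talking
            probe = conn.recv(1024)                    # capture the first bytes sent
            print(f"{stamp} hit from {ip}:{src_port} first_bytes={probe[:40]!r}")
```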


V. Strategic Recommendations and Conclusion

The emergence of agentic AI as a weapon of cyber conflict demands a fundamental reassessment of security strategy, investment, and policy. The threats posed by autonomous systems are not merely technical; they are strategic risks that require a coordinated and forward-looking response from enterprise leaders, security practitioners, and governments alike.

A. Recommendations for Enterprise CISOs

For Chief Information Security Officers (CISOs) and their teams, preparing for the era of agentic warfare requires a decisive pivot away from legacy security models and a focus on building an autonomous, resilient defense.

  1. Invest in Autonomous, AI-Powered Defenses: Allocate resources to platforms capable of real-time behavioral analysis and autonomous threat response.
  2. Adopt Zero Trust Architectures: Implement identity-driven controls, multifactor authentication, and network segmentation to eliminate implicit trust [14].
  3. Establish Continuous AI Red Teaming: Proactively test internal AI systems against data poisoning and evasion, and secure training pipelines and model integrity; a toy evasion check is sketched after this list.
  4. Upskill Teams for Human-AI Collaboration: Train analysts to interpret model outputs and manage AI systems, and build hybrid SOCs that pair human strategy with autonomous agility [17].
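The toy evasion check below illustrates what such an exercise measures: train a detector on synthetic data, nudge malicious samples against the model's decision boundary, and report how far the detection rate falls. The data, features, and perturbation size are all assumptions made for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic "benign vs malicious" feature vectors; the feature space is invented.
rng = np.random.default_rng(1)
benign = rng.normal(0.0, 1.0, size=(1000, 8))
malicious = rng.normal(1.5, 1.0, size=(1000, 8))
X = np.vstack([benign, malicious])
y = np.array([0] * 1000 + [1] * 1000)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

mal_test = X_test[y_test == 1]
baseline_caught = clf.predict(mal_test).mean()

# Crude evasion attempt: shift each malicious sample against the model's
# weight vector, the direction that most reduces its "malicious" score.
direction = clf.coef_[0] / np.linalg.norm(clf.coef_[0])
evasive = mal_test - 1.5 * direction
evaded_caught = clf.predict(evasive).mean()

print(f"detection rate before perturbation: {baseline_caught:.2%}")
print(f"detection rate after perturbation:  {evaded_caught:.2%}")
```

Tracking how quickly that gap can be closed after retraining is one concrete way to turn red-team findings into a measurable control.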

Autonomous defense requires structural change. CISOs must deploy intelligent platforms, enforce Zero Trust, secure AI infrastructure, and prepare analysts to work alongside autonomous systems.

B. Policy and Regulatory Outlook

The systemic risks posed by autonomous cyber warfare cannot be managed by individual organizations alone. A robust framework of national and international policy is necessary to guide responsible development and mitigate the potential for catastrophic outcomes.

The European Union's AI Act provides a foundational blueprint for such regulation, establishing a risk-based approach that imposes stricter obligations on high-risk AI systems. While not specifically targeting autonomous weapons, its principles of transparency, accountability, and human oversight are directly applicable. For example, its requirements for high-risk systems to be assessed before deployment and for generative AI to disclose its nature are crucial steps. However, regulation must strike a delicate balance. Overly prescriptive rules could stifle the innovation needed to build next-generation defenses, while cybercriminals and hostile nation-states will operate outside these legal frameworks regardless [18].

Therefore, policy should focus on promoting international norms for the use of autonomous systems in conflict, fostering public-private partnerships for threat intelligence sharing, and investing in research on AI safety and control. Establishing clear lines of accountability for the actions of autonomous agents and developing protocols for de-escalation in machine-speed conflicts are among the most pressing challenges for policymakers today.

Autonomous cyberwarfare demands policy reform, global coordination, and a strategic shift in how organizations lead alongside intelligent machines.

C. Conclusion: Preparing for the Future of Autonomous Conflict

The evidence and trends presented in this report converge on an unavoidable conclusion: we are entering an era of autonomous cyber conflict. The current landscape of AI-assisted attacks is merely a prelude to a future dominated by swarms of intelligent agents engaging in offensive and defensive campaigns at speeds that defy human comprehension. This future raises profound strategic and ethical questions that transcend the technical realm of cybersecurity. How do we maintain meaningful human control over systems that can reason and act independently? How do we ensure accountability when the actor is an algorithm? How do leaders prevent catastrophic escalation in a conflict that unfolds in milliseconds?

Cybersecurity is no longer a technical function—it’s a strategic imperative defined by autonomy, speed, and adaptability. Organizations must move beyond AI adoption and lead in a world shaped by intelligent machines.

The age of agentic AI is not a distant hypothetical; its foundations are being laid today in research labs, by commercial enterprises, and on the digital battlefield. The organizations and nations that will thrive in this new era will be those that do more than simply adopt AI as another tool. They must fundamentally reimagine their strategies, architectures, and leadership models to operate in a world where intelligent machines are not just instruments, but teammates and adversaries. The arms race has already begun, and its outcome will be determined not by who possesses AI, but by who understands how to lead alongside it.


References

1.     Reinforcement Machine Learning in Cybersecurity: A Comprehensive Analysis - Medium, accessed July 10, 2025, https://medium.com/@leev574/reinforcement-machine-learning-in-cybersecurity-a-comprehensive-analysis-17c2505c8be0

2.     Unit 42 Develops Agentic AI Attack Framework - Palo Alto Networks, accessed July 10, 2025, https://www.paloaltonetworks.com/blog/2025/05/unit-42-develops-agentic-ai-attack-framework/

3.     Overview of the Most Common AI-Powered Cyber Threats in 2025 ..., accessed July 10, 2025, https://medium.com/waits-on-ai-cybersecurity/overview-of-the-most-common-ai-powered-cyber-threats-in-2025-part-1-3278b440c08b

4.     AI-Powered Cyber Threats in 2025: The Rise of Autonomous Attack Agents and the Collapse of Traditional Defenses | by Chetan Seripally - Medium, accessed July 10, 2025, https://medium.com/@seripallychetan/ai-powered-cyber-threats-in-2025-the-rise-of-autonomous-attack-agents-and-the-collapse-of-ce80a5f05afa

5.     (PDF) Automated Vulnerability Exploitation Using Deep ..., accessed July 10, 2025, https://www.researchgate.net/publication/384880235_Automated_Vulnerability_Exploitation_Using_Deep_Reinforcement_Learning

6.     Cyber security Enhancements with reinforcement learning: A zero-day vulnerability identification perspective - PubMed Central, accessed July 10, 2025, https://pmc.ncbi.nlm.nih.gov/articles/PMC12111634/

7.     Reinforcement Learning for Automated Cybersecurity Penetration Testing - arXiv, accessed July 10, 2025, https://arxiv.org/html/2507.02969v1

8.     Nation State Backed Groups Using AI for Malicious Purposes | Data ..., accessed July 10, 2025, https://www.dataprivacyandsecurityinsider.com/2025/02/nation-state-backed-groups-using-ai-for-malicious-purposes/

9.     Understanding Generative AI-Based Attacks with MITRE ATLAS | by Tahir | Medium, accessed July 10, 2025, https://medium.com/@tahirbalarabe2/understanding-generative-ai-based-attacks-with-mitre-atlas-a4e3be2a26e6

10.  Adversarial Machine Learning - CLTC UC Berkeley Center for Long-Term Cybersecurity, accessed July 10, 2025, https://cltc.berkeley.edu/aml/

11.  2025 Forecast: AI to supercharge attacks, quantum threats grow, SaaS security woes, accessed July 10, 2025, https://www.scworld.com/feature/cybersecurity-threats-continue-to-evolve-in-2025-driven-by-ai

12.  7 AI Cybersecurity Trends to Watch in 2025 - NextTech Today, accessed July 10, 2025, https://nexttechtoday.com/tech/cyber-security/7-ai-cybersecurity-trends-to-watch-in-2025/

13.  Everything You Need To Know About Cyber Threat Intelligence - Cyble, accessed July 10, 2025, https://cyble.com/knowledge-hub/cyber-threat-intelligence-2025/

14.  The rise of autonomous AI in cybersecurity | PwC, accessed July 10, 2025, https://www.pwc.com/gx/en/issues/cybersecurity/the-rise-of-autonomous-ai-in-cybersecurity.html

15.  Reinforcement Learning for Cybersecurity - Insights2TechInfo, accessed July 10, 2025, https://insights2techinfo.com/reinforcement-learning-for-cybersecurity/

16.  Employing Deep Reinforcement Learning to Cyber-Attack Simulation for Enhancing Cybersecurity - MDPI, accessed July 10, 2025, https://www.mdpi.com/2079-9292/13/3/555

17.  AI Malware: Types, Real Life Examples, and Defensive Measures - Perception Point, accessed July 10, 2025, https://perception-point.io/guides/ai-security/ai-malware-types-real-life-examples-defensive-measures/

18.  Should Governments Regulate AI-Powered Cybersecurity Tools? Balancing Innovation and Security - Web Asha Technologies, accessed July 10, 2025, https://www.webasha.com/blog/should-governments-regulate-ai-powered-cybersecurity-tools-balancing-innovation-and-security

19.  AI Emergent Risks Testing: Identifying Unexpected Behaviors Before Deployment - VerityAI, accessed July 10, 2025, https://verityai.co/blog/ai-emergent-risks-testing

20.  Thunderforge Project: Integrating Commercial AI-Powered Decision ..., accessed July 10, 2025, https://www.diu.mil/latest/dius-thunderforge-project-to-integrate-commercial-ai-powered-decision-making

21.  AI is Changing the Cyber Threat Landscape. Here's How to Stay ..., accessed July 10, 2025, https://secureflo.net/ai-is-changing-the-cyber-threat-landscape-heres-how-to-stay-secure-in-2025/

22.  The Dual Role of AI in Cybersecurity - AccessIT Group, accessed July 10, 2025, https://www.accessitgroup.com/the-dual-role-of-ai-in-cybersecurity/

23.  The Dual-Use Nature of AI in Cybersecurity | Women in Tech Network, accessed July 10, 2025, https://www.womentech.net/en-br/how-to/dual-use-nature-ai-in-cybersecurity

24.  The Dual Role of AI in Authentication & Cybersecurity | Userfront, accessed July 10, 2025, https://userfront.com/blog/ai-cybersecurity

25.  The Future of AI-Powered Cyber Attacks and How to Defend Against Them - Devsinc, accessed July 10, 2025, https://www.devsinc.com/articles/the-future-of-ai-powered-cyber-attacks-and-how-to-defend-against-them

26.  The Rise of AI-Enabled Crime: Exploring the evolution, risks, and ..., accessed July 10, 2025, https://www.trmlabs.com/resources/blog/the-rise-of-ai-enabled-crime-exploring-the-evolution-risks-and-responses-to-ai-powered-criminal-enterprises

27.  AI-driven cybercrime is growing, here's how to stop it | World ..., accessed July 10, 2025, https://www.weforum.org/stories/2025/01/how-ai-driven-fraud-challenges-the-global-economy-and-ways-to-combat-it/

28.  7 AI Cybersecurity Trends For The 2025 Cybercrime Landscape - Exploding Topics, accessed July 10, 2025, https://explodingtopics.com/blog/ai-cybersecurity

29.  The Growing Threat of AI-powered Cyberattacks in 2025 - Cyber Defense Magazine, accessed July 10, 2025, https://www.cyberdefensemagazine.com/the-growing-threat-of-ai-powered-cyberattacks-in-2025/

30.  The Future of AI Security: Generative-Discriminator AI (GAN) Networks will revolutionize Cybersecurity - AI Asia Pacific Institute, accessed July 10, 2025, https://aiasiapacific.org/2025/03/17/the-future-of-ai-security-generative-discriminator-ai-gan-networks-will-revolutionize-cybersecurity/

31.  Deepfakes and AI-Powered Phishing Scams - Kount, accessed July 10, 2025, https://kount.com/blog/phishing-has-new-face-its-powered-ai

32.  AI, quantum and the collapse of digital trust - Kyndryl, accessed July 10, 2025, https://www.kyndryl.com/in/en/about-us/news/2025/07/ai-and-geopolitics-in-cybersecurity

33.  Deepfake Attacks & AI-Generated Phishing: 2025 Statistics, accessed July 10, 2025, https://zerothreat.ai/blog/deepfake-and-ai-phishing-statistics

34.  Polymorphic AI Malware: A Real-World POC and Detection ..., accessed July 10, 2025, https://cardinalops.com/blog/polymorphic-ai-malware-detection/

35.  AI-Generated Malware: A Rising Cyber Threat - CyXcel, accessed July 10, 2025, https://www.cyxcel.com/knowledge-hub/ai-generated-malware-a-rising-cyber-threat/

36.  The rise of autonomous AI in cybersecurity | PwC, accessed July 10, 2025, https://www.pwc.com/gx/en/issues/cybersecurity/the-rise-of-autonomous-ai-in-cybersecurity.html

37.  Automated Vulnerability Exploitation Using Deep Reinforcement Learning - MDPI, accessed July 10, 2025, https://www.mdpi.com/2076-3417/14/20/9331

38.  Outpacing the Adversary: Why Autonomous AI is the Future of Modern Warfare, accessed July 10, 2025, https://www.seekr.com/blog/autonomous-ai-the-future-of-modern-warfare/

39.  The Adversarial Misuse of AI: How Threat Actors Are Leveraging AI ..., accessed July 10, 2025, https://socradar.io/adversarial-misuse-of-ai-how-threat-actors-leverage-ai/

40.  Google Report Reveals How Threat Actors Are Currently Using Generative AI - InfoQ, accessed July 10, 2025, https://www.infoq.com/news/2025/03/misuse-generative-ai/

41.  Artificial Intelligence and State-Sponsored Cyber Espionage: The ..., accessed July 10, 2025, https://jipel.law.nyu.edu/artificial-intelligence-and-state-sponsored-cyber-espionage/

42.  Overview of the Most Common AI-Powered Cyber Threats in 2025 — Part 2 - Medium, accessed July 10, 2025, https://medium.com/waits-on-ai-cybersecurity/overview-of-the-most-common-ai-powered-cyber-threats-in-2025-part-2-dbaa7d6cd076

43.  Adversarial AI: Understanding and Mitigating the Threat - Sysdig, accessed July 10, 2025, https://sysdig.com/learn-cloud-native/adversarial-ai-understanding-and-mitigating-the-threat/

44.  Part 1: Navigating the Threat of Evasion Attacks in AI | by Anya Kondamani - Medium, accessed July 10, 2025, https://medium.com/nfactor-technologies/part-1-navigating-the-threat-of-evasion-attacks-in-ai-4d7ea9831143

45.  6 Key Adversarial Attacks and Their Consequences - Mindgard AI, accessed July 10, 2025, https://mindgard.ai/blog/ai-under-attack-six-key-adversarial-attacks-and-their-consequences

46.  AI Evasion: The Next Frontier of Malware Techniques - Check Point Blog, accessed July 10, 2025, https://blog.checkpoint.com/artificial-intelligence/ai-evasion-the-next-frontier-of-malware-techniques/

47.  MITRE ATLAS™, accessed July 10, 2025, https://atlas.mitre.org/

48.  Anatomy of an AI ATTACK: MITRE ATLAS - IBM Mediacenter, accessed July 10, 2025, https://mediacenter.ibm.com/media/Anatomy+of+an+AI+ATTACKA+MITRE+ATLAS/1_3kronw1x

49.  How to Detect Threats to AI Systems with MITRE ATLAS Framework - ChaosSearch, accessed July 10, 2025, https://www.chaossearch.io/blog/mlops-monitoring-mitre-atlas

50.  Practical use of MITRE ATLAS framework for CISO teams - RiskInsight, accessed July 10, 2025, https://www.riskinsight-wavestone.com/en/2024/11/practical-use-of-mitre-atlas-framework-for-ciso-teams/

51.  Darktrace | The Essential AI Cybersecurity Platform, accessed July 10, 2025, https://www.darktrace.com/

52.  Top 10 AI Cybersecurity Tools for Protecting Customer Data: A 2025 Review and Comparison - SuperAGI, accessed July 10, 2025, https://superagi.com/top-10-ai-cybersecurity-tools-for-protecting-customer-data-a-2025-review-and-comparison/

53.  SentinelOne | AI-Powered Enterprise Cybersecurity Platform, accessed July 10, 2025, https://www.sentinelone.com/

54.  How AI-Powered Malware Is Evading Traditional Firewalls - NetworkTigers News, accessed July 10, 2025, https://news.networktigers.com/cloud-chronicles/how-ai-powered-malware-is-evading-traditional-firewalls/

55.  AI-Driven Threat Detection: Revolutionizing Cyber Defense - Zscaler, accessed July 10, 2025, https://www.zscaler.com/blogs/product-insights/ai-driven-threat-detection-revolutionizing-cyber-defense

56.  AI Threat Detection: How It Works & 6 Real-World Applications - Oligo Security, accessed July 10, 2025, https://www.oligo.security/academy/ai-threat-detection-how-it-works-6-real-world-applications

57.  EU AI Act: first regulation on artificial intelligence | Topics | European ..., accessed July 10, 2025, https://www.europarl.europa.eu/topics/en/article/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence

58.  AI Regulations and LLM Regulations: Past, Present, and Future | Exabeam, accessed July 10, 2025, https://www.exabeam.com/explainers/ai-cyber-security/ai-regulations-and-llm-regulations-past-present-and-future/

59.  AI Cyber Attacks Statistics 2025: Attacks, Deepfakes, Ransomware ..., accessed July 10, 2025, https://sqmagazine.co.uk/ai-cyber-attacks-statistics/

60.  Reinforcement Learning for Automated Cybersecurity Penetration Testing - arXiv, accessed July 10, 2025, http://www.arxiv.org/pdf/2507.02969

61.  Redefining Cybersecurity In The Age Of Autonomous Agents, accessed July 10, 2025, https://cybersecurityventures.com/redefining-cybersecurity-in-the-age-of-autonomous-agents/

62.  Hierarchical Multi-agent Reinforcement Learning for Cyber Network Defense - OpenReview, accessed July 10, 2025, https://openreview.net/pdf?id=ew58AyvrlH

63.  Multi-Agent Reinforcement Learning in Cybersecurity: From Fundamentals to Applications This research is supported by armasuisse Science and Technology. - arXiv, accessed July 10, 2025, https://arxiv.org/html/2505.19837v1

64.  The Blind Spots of Multi-Agent Systems: Why AI Collaboration ..., accessed July 10, 2025, https://www.trustwave.com/en-us/resources/blogs/spiderlabs-blog/the-blind-spots-of-multi-agent-systems-why-ai-collaboration-needs-caution/

65.  "Magical" Emergent Behaviours in AI: A Security Perspective, accessed July 10, 2025, https://securing.ai/ai-security/emergent-behaviors-ai-security/

66.  Autonomous AI Systems in Conflict: Emergent Behavior and Its ..., accessed July 10, 2025, https://www.tandfonline.com/doi/full/10.1080/15027570.2023.2213985

67.  Autonomous AI Systems in Conflict: Emergent Behavior and Its Impact on Predictability and Reliability - Taylor & Francis Online, accessed July 10, 2025, https://www.tandfonline.com/doi/abs/10.1080/15027570.2023.2213985

68.  Thunderforge Initiative: AI-Driven War Gaming for Strategic Command - PyLessons.com, accessed July 10, 2025, https://pylessons.com/news/thunderforge-initiative-ai-war-gaming-europe-asia

