Unit 42 outlines how attackers can leverage Agentic AI capabilities to execute attacks up to 100x faster
By: Islam Tawfik
The integration of AI into adversarial operations is fundamentally reshaping the speed, scale and sophistication of attacks. As AI defense capabilities evolve, so do the AI strategies and tools leveraged by threat actors, creating a rapidly shifting threat landscape that outpaces traditional detection and response methods. This accelerating evolution makes it critical for CXOs to examine how threat actors will strategically weaponize AI across each phase of the attack chain.
One of the most alarming shifts we have seen since the introduction of AI technologies is the dramatic drop in mean time to exfiltrate (MTTE) data following initial access. In 2021, the average MTTE stood at nine days. According to our Unit 42 2025 Global Incident Response Report, by 2024 MTTE had dropped to two days, and in one in five cases the time from compromise to exfiltration was less than one hour.
In our testing, Unit 42 was able to simulate a ransomware attack, from initial compromise to data exfiltration, in just 25 minutes by using AI at every stage of the attack chain. Measured against the two-day average observed in 2024, that is roughly a 100x increase in speed, powered entirely by AI.
Recent threat activity observed by Unit 42 has highlighted how adversaries are leveraging AI in attacks:
• Deepfake-enabled social engineering has been observed in campaigns from groups like Muddled Libra (also known as Scattered Spider), who have used AI-generated audio and video to impersonate employees during help desk scams.
• North Korean IT workers are using real-time deepfake technology to infiltrate organizations through remote work positions, which poses significant security, legal and compliance risks.
• Attackers are leveraging generative AI to conduct ransomware negotiations, overcoming language barriers and negotiating higher ransom payments more effectively.
• AI-powered productivity assistants are being used to identify sensitive credentials in victim environments.
A significant evolution is the emergence of Agentic AI – autonomous systems capable of making decisions, learning from outcomes, solving problems and iteratively improving their performance without human intervention. These systems have the potential to independently execute multistep operations, from identifying targets to adapting tactics mid-attack. This makes them especially dangerous. As agentic models become more accessible, you can expect a surge in automated, self-directed cyberattacks that are faster, more adaptive and increasingly difficult to contain.
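To make the concept concrete, the sketch below shows a minimal agentic loop in Python: the agent plans a step toward a goal, acts, evaluates the outcome and adapts its next step with no human in the loop. This is a simplified, hypothetical illustration of the plan-act-evaluate-adapt cycle described above using a toy numeric task; it is not Unit 42's framework, and the task, function names and scoring logic are assumptions made for the example.

```python
# Minimal sketch of a generic agentic loop (hypothetical, illustrative only).
# The agent repeatedly plans, acts, evaluates the result and adapts,
# iterating autonomously until it reaches its goal or runs out of steps.

import random

def plan(history):
    """Pick the next action; repeat whatever improved the score last time, else explore."""
    if history and history[-1]["improved"]:
        return history[-1]["action"]      # exploit what worked
    return random.choice([-1, 1])         # otherwise explore

def act(state, action):
    """Apply the chosen action to a toy numeric environment."""
    return state + action

def evaluate(state, goal):
    """Score how close the agent is to its goal (lower is better)."""
    return abs(goal - state)

def run_agent(goal=10, state=0, max_steps=50):
    history = []
    score = evaluate(state, goal)
    for step in range(max_steps):
        action = plan(history)
        new_state = act(state, action)
        new_score = evaluate(new_state, goal)
        history.append({"action": action, "improved": new_score < score})
        if new_score <= score:            # keep the move only if it did not regress
            state, score = new_state, new_score
        if score == 0:                    # goal reached: the loop stops on its own
            return step + 1, state
    return max_steps, state

if __name__ == "__main__":
    steps, final_state = run_agent()
    print(f"Goal reached in {steps} steps; final state = {final_state}")
```

The point of the sketch is the loop itself: once the goal is set, every decision, correction and retry happens without further operator input, which is what makes agentic systems so much faster than human-driven, step-by-step operations.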
Palo Alto Networks Unit 42 has been researching and developing an Agentic AI Attack framework that demonstrates how these capabilities can execute attacks with minimal input from the attacker.
Through our research, we are able to demonstrate just how easily this technology could be turned against enterprises to execute attacks with unprecedented speed and scale. Over time, Unit 42 will integrate these capabilities into our purple teaming exercises, so you can test and improve your organization's defenses against Agentic AI attacks.
The emergence of Agentic AI is not just a theoretical risk; it’s an accelerating reality that will challenge how your organization approaches threat detection, response and mitigation.