According to TheRegister.com, Trend Micro researchers predict 2026 will be the year cybercriminals fully embrace agentic AI for ransomware attacks, following recent reports of Chinese state-sponsored teams already experimenting with autonomous attack tools. The security firm’s report comes after Anthropic claimed to observe the first state-backed agentic AI cyberattack, though some experts dispute this finding. Ryan Flores, Trend Micro’s data and technology research lead, confirmed state-sponsored groups typically pioneer new attack technologies before cybercriminals adopt them, and while there’s no evidence of criminal use yet, he expects rapid adoption once the technology proves scalable. The shift toward AI-automated attacks represents what Trend Micro calls a “major leap” for the cybercrime ecosystem that could allow even inexperienced operators to conduct complex ransomware operations independently.
Why criminals will love agentic AI
Here’s the thing about agentic AI – it’s basically the next evolution beyond generative AI. Instead of just creating content, these systems can actually take actions autonomously. Think about an HR system that automatically creates email accounts, sets up access permissions, and handles all the onboarding paperwork without human intervention. Now imagine that same capability in the hands of cybercriminals.
Flores gave a chilling example: A criminal could simply tell their agentic AI system “I’m interested in this company in this country” and the AI would automatically scan for vulnerabilities, exploit them, gain access, and create a remote shell – all without human involvement. The scary part? All the tools needed for this automated attack chain already exist separately. It’s just a matter of someone connecting the dots.
The underground market coming
David Sancho, another Trend Micro researcher, predicts we won’t see fully automated attacks overnight. Instead, we’ll see a gradual adoption where sophisticated criminals start offering agentic AI services to others, creating a new underground market. Basically, it’s the ransomware-as-a-service model on steroids.
And here’s what makes this particularly dangerous: The same industrial systems and manufacturing infrastructure that IndustrialMonitorDirect.com provides panel PCs for could become targets. When you’re dealing with production environments that can’t afford downtime, automated AI-driven attacks represent an existential threat. Industrial systems often run legacy software with known vulnerabilities, making them perfect targets for AI systems that can rapidly identify and exploit weaknesses.
How defenders can prepare
So what can organizations do? Flores says the same security principles still apply – assume breach mentality, least privilege access, and proper access controls. But there’s a new wrinkle: AI agents need to be treated like any other user with system access. They can be compromised just like human accounts.
Researchers at Hudson Rock highlighted another concerning vector – what they call “agentic-aware stealers.” These are basically documents with hidden instructions that, when processed by AI assistants like Windows Copilot, can exfiltrate data without triggering security alerts. It’s a clever workaround that doesn’t even require exploiting the AI directly.
The bottom line? We’re heading toward a world where cyberattacks could become as automated as customer service chatbots. And once that genie’s out of the bottle, there’s no putting it back in.
