Why Cybercriminals Are Frustrated With AI Tools
Research into 98,000 underground forum conversations shows cybercriminals are losing enthusiasm for AI tools, frustrated by low-quality automated content. AI hasn't made cybercrime easier for most actors.

Researchers have found something unexpected: cybercriminals are getting tired of AI. Ben Collier from the University of Edinburgh analyzed nearly 98,000 conversations about AI on underground forums since ChatGPT launched in late 2022. The picture that emerges is one of declining enthusiasm. Criminals initially thought AI tools would be transformative. Instead, many now view them with skepticism.
The main complaint is straightforward. Low-level cybercriminals say that too many AI-generated posts are flooding their forums with generic, unhelpful content — what researchers call "AI slop." These posts crowd out real technical discussions and useful information. Some forum users have explicitly stated they prefer talking to actual humans because AI-generated responses lack quality.
The research, reported by Wired, suggests that despite all the hype, AI hasn't actually transformed criminal operations the way many predicted it would.
AI Hasn't Made Cybercrime Easier for Most
Early predictions suggested AI tools would democratize cybercrime — meaning even less skilled criminals could carry out sophisticated attacks. The research indicates this hasn't happened. AI coding assistants do appear useful, but mostly to people who already have strong technical skills. AI augments their work rather than replacing the expertise required.
In other words, the pattern mirrors legitimate software development. Experienced programmers can get real value from AI coding assistants. Beginners still need to know what they're doing. Criminal organizations show the same dynamic: those with existing technical knowledge find some benefit, while amateurs struggle to extract value from AI tools.
Where Criminals Are Actually Using AI
German law enforcement has documented specific criminal applications. The country's Federal Criminal Police Office (BKA) has observed criminals using AI to generate fake identity documents and create deepfakes — synthetic video or images of people — to bypass facial recognition systems. Criminal groups also use AI to help with programming tasks and code troubleshooting.
However, even these applications tend to enhance existing criminal methods rather than create entirely new ones. The most organized criminal groups show deliberate thinking about which AI capabilities align with their business goals, according to research from the Alan Turing Institute. They're making strategic choices, not just adopting every new tool.
What Worries Security Experts Most
Looking ahead reveals more serious concerns. Researchers point to poorly secured AI systems that can act independently — known as agentic AI — as a near-term risk. These systems could enable new attack methods by generating convincing content at massive scale while exploiting how humans think and make decisions.
The real shift would come through automation of social engineering attacks. Traditionally, a cybercriminal has to write convincing phishing emails or fraud messages one at a time. An AI system could potentially do this work continuously across multiple languages and cultural contexts simultaneously. That represents a qualitative change from how cybercrime works today.
A Pattern Worth Considering
The trajectory of cybercriminal enthusiasm followed by practical disappointment echoes patterns I've seen throughout my career covering technology. Blockchain and cryptocurrency went through a similar cycle around 2017 and 2018. Criminal networks initially saw distributed ledgers as revolutionary tools for hiding money transfers and staying anonymous. In practice, traditional methods often proved more reliable and harder to trace than cryptocurrency.
The lesson here is straightforward: criminal organizations face the same real-world adoption challenges that legitimate businesses do. They need to train staff, integrate new tools into workflows, and figure out what genuinely works. None of that is easy, regardless of whether you're running a startup or a criminal enterprise.
What This Means for Security Teams
For cybersecurity professionals, the findings suggest that criminal use of AI may evolve more gradually than worst-case scenarios imagine. The evidence so far indicates criminals are using AI to incrementally improve existing techniques rather than to fundamentally reshape how they operate. That creates some breathing room for defenders to catch up.
But there's an important caveat. Larger, well-resourced criminal organizations with good leadership may find breakthrough applications that smaller groups cannot execute. This could widen the capability gap — sophisticated criminals get more sophisticated, while amateurs stay roughly where they are.
The concrete threat vectors are worth taking seriously now. Deepfakes bypassing facial recognition systems and AI-generated documents used in identity fraud are real, happening today, and don't require much technical sophistication. Security teams need defensive measures in place for these specific attacks.
The Defensive Advantage, for Now
Understanding how criminals are struggling with AI adoption suggests that human judgment remains central to criminal operations. AI is a tool that multiplies what humans can do, but it doesn't replace the person making decisions. That limitation creates opportunities for defense.
Security systems can be designed to detect AI-generated content or spot patterns of automated behavior. The fact that cybercriminals themselves complain about low-quality AI posts suggests that linguistic analysis or content authenticity checks might catch AI-generated attacks.
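To make that idea concrete, here is a minimal, illustrative sketch of the kind of cheap linguistic heuristics such a check might start from: flagging text with unusually low vocabulary diversity and heavily repeated phrases, two rough proxies for templated or automated output. The function name, thresholds, and features are all assumptions for illustration; production detectors rely on far richer signals than this.

```python
from collections import Counter
import re

def automation_signals(text: str) -> dict:
    """Crude, illustrative heuristics that flag text which *may* be
    machine-generated or templated: low type-token ratio (vocabulary
    diversity) and heavy repetition of 3-word phrases.
    Thresholds are arbitrary assumptions, not validated cut-offs."""
    words = re.findall(r"[a-z']+", text.lower())
    if len(words) < 30:
        # Too little text to score meaningfully.
        return {"type_token_ratio": None, "repeated_trigrams": 0, "flagged": False}

    # Share of distinct words: repetitive boilerplate scores low.
    ttr = len(set(words)) / len(words)

    # Count 3-word phrases that recur more than twice.
    trigrams = Counter(zip(words, words[1:], words[2:]))
    repeats = sum(1 for count in trigrams.values() if count > 2)

    return {
        "type_token_ratio": round(ttr, 2),
        "repeated_trigrams": repeats,
        "flagged": ttr < 0.4 or repeats > 3,  # arbitrary demo thresholds
    }
```

On its own, a heuristic like this would misfire constantly; the point is only that automated content tends to leave measurable statistical traces, which is exactly what the forum users quoted in the research are noticing informally.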
The broader trajectory suggests a strategic focus: securing AI systems themselves against misuse matters more right now than preparing for complete transformation of the threat landscape. Criminal organizations are experimenting with AI capabilities at the margins while defenders have time to develop countermeasures. It's a window that likely won't stay open indefinitely, but it's open now.


