Technology

Cybercriminals Complain About AI-Generated Content Flooding Underground Forums

University of Edinburgh research analyzing nearly 98,000 cybercrime forum conversations reveals that criminals have shifted from AI enthusiasm to skepticism due to low-quality content flooding their platforms.

Martin Holloway · Published 12h ago · 6 min read · Based on 4 sources

Ben Collier from the University of Edinburgh has documented a shift in cybercriminal attitudes toward artificial intelligence following analysis of 97,895 AI-related conversations on underground forums since ChatGPT's launch in 2022. The research reveals that low-level cybercriminals have moved from initial enthusiasm about AI tools to widespread skepticism, primarily due to low-quality AI-generated content saturating their discussion spaces.

The findings, published in Wired, indicate that forum participants increasingly express dissatisfaction with what researchers term "AI slop" — generic, unhelpful posts that crowd out legitimate technical discussions. Some users have explicitly stated preferences for human interaction over AI-generated responses, citing quality concerns about automated content.

Limited Impact on Core Criminal Operations

Despite widespread availability of AI coding assistants and generative tools, the research suggests these technologies have not significantly disrupted established business models within cybercrime ecosystems. AI coding assistants appear most useful to already skilled practitioners rather than lowering barriers to entry for newcomers — a pattern that mirrors legitimate software development environments where AI augments rather than replaces expertise.

Collier's research indicates that while cybercriminals experiment with AI tools, the technologies are not delivering substantial benefits to their core operational workflows. This contrasts sharply with early predictions that AI would democratize cybercrime by enabling less technically proficient actors to execute sophisticated attacks.

Documented Criminal Applications

German law enforcement data provides concrete examples of how criminal actors do leverage AI capabilities. The Federal Criminal Police Office (BKA) has documented the use of AI-generated images for forging identity documents and deepfake technology to bypass biometric verification systems. Criminal groups also employ generative AI for programming assistance and code troubleshooting, though this appears limited to enhancing existing technical capabilities rather than enabling entirely new attack vectors.

The most sophisticated criminal organizations demonstrate strategic thinking about AI adoption, according to research from the Turing Institute. Well-organized groups display innovation in technology adoption, with leadership exercising deliberate decision-making about which AI capabilities align with revenue generation goals.

Future Risk Vectors

Looking beyond current criminal applications reveals more concerning possibilities. Researchers identify poorly secured agentic AI systems — those capable of autonomous action — as the most pressing near-term cybersecurity risk. These systems could enable new categories of attacks that leverage AI's capacity to generate convincing content at scale while exploiting human psychological vulnerabilities.

The ability to create persuasive generated content for social engineering attacks represents a qualitative shift from traditional cybercrime techniques. Rather than requiring individual criminals to craft convincing phishing emails or fraudulent communications, AI systems could potentially automate persuasive content generation across multiple languages and cultural contexts simultaneously.

Historical Perspective

This pattern of initial hype followed by practical disappointment echoes technology adoption cycles I've observed repeatedly over three decades of technology coverage. The blockchain enthusiasm of 2017-2018 followed a similar trajectory, with criminal actors initially viewing distributed ledgers as revolutionary tools for money laundering and anonymous transactions. Reality proved more complex, as established financial crime networks found traditional methods often more reliable and less traceable than cryptocurrency transactions.

The cybercriminal response to AI mirrors broader enterprise adoption patterns where transformative technologies require significant organizational changes to deliver promised benefits. Criminal organizations, despite their illicit nature, face similar adoption challenges around training, integration, and workflow modification that constrain legitimate businesses.

Enterprise Security Implications

For security professionals, these findings suggest AI-driven threat evolution may proceed more gradually than apocalyptic scenarios suggest. Current evidence indicates criminal AI adoption focuses on incremental improvements to existing techniques rather than fundamental operational shifts. This provides time for defensive measures to evolve alongside offensive capabilities.

However, the research also highlights the asymmetric nature of AI-enabled attacks. While individual criminals struggle to extract value from AI tools, well-resourced groups with strategic leadership may achieve breakthrough applications that smaller actors cannot replicate. This concentration effect could widen capability gaps between sophisticated and amateur criminal operators.

The documented use of deepfakes for biometric bypass and AI-generated documents for identity fraud represents concrete attack vectors that security teams must address immediately. These applications require minimal technical sophistication while potentially undermining existing verification systems.

Implications for Defense

Understanding cybercriminal AI adoption patterns provides strategic insight for defensive planning. The current focus on content generation and programming assistance suggests attackers remain constrained by human oversight requirements — AI serves as a force multiplier rather than a replacement for human judgment and expertise.

This human-in-the-loop model creates intervention opportunities for security systems designed to detect AI-generated content or identify automated behavior patterns. Additionally, the criminal community's own complaints about AI content quality suggest that detection strategies based on linguistic analysis or content authenticity verification may be viable.
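To make the linguistic-analysis idea concrete, a detector might score posts on simple surface signals such as chatbot boilerplate phrases and lexical diversity. The sketch below is purely illustrative: the phrase list, thresholds, and function name are assumptions of this article, not a validated or production detector.

```python
import re

# Hypothetical boilerplate phrases common in chatbot output (illustrative only).
STOCK_PHRASES = [
    "as an ai language model",
    "it's important to note",
    "in conclusion",
    "i hope this helps",
]

def likely_ai_generated(text: str) -> bool:
    """Heuristic sketch: flag text that looks AI-generated.

    Combines two weak signals; thresholds are arbitrary assumptions.
    """
    lowered = text.lower()
    # Signal 1: count boilerplate phrases typical of chatbot responses.
    phrase_hits = sum(p in lowered for p in STOCK_PHRASES)
    # Signal 2: low lexical diversity (type-token ratio) over longer posts,
    # since generic generated text tends to repeat vocabulary.
    words = re.findall(r"[a-z']+", lowered)
    ttr = len(set(words)) / len(words) if words else 1.0
    return phrase_hits >= 2 or (len(words) > 50 and ttr < 0.35)
```

A real system would replace these hand-picked features with statistical or model-based scoring, but the structure — combining multiple weak linguistic signals into a flag — is the same intervention point the research implies.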

The broader trajectory suggests defensive strategies should prioritize securing AI systems against misuse rather than preparing for immediate wholesale transformation of the threat landscape. As criminal organizations experiment with AI capabilities, security teams have a window to develop countermeasures and detection mechanisms before widespread adoption occurs.