Cybercriminals gave AI a go — and came away disappointed, study finds

Analysis of 100 million forum posts reveals limited adoption of AI tools in hacking activities

A new analysis by researchers at the University of Edinburgh suggests that cybercriminals are struggling to integrate artificial intelligence (AI) into their operations effectively, despite expressing enthusiasm for its potential. The study, a pre-print paper based on the CrimeBB database, examined over 100 million posts from underground hacking forums. CrimeBB, which aggregates data from a range of cybercrime forums, served as the primary source of information. By analyzing this data both manually and with a large language model (LLM), the team uncovered patterns in how hackers perceive and use AI tools.
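The paper does not spell out its exact pipeline, but an LLM-assisted labelling pass over a corpus that size would typically look something like the sketch below. Everything in it, including the label set, the prompt, and the model name, is an illustrative assumption rather than the authors' actual method.

# Minimal sketch of LLM-assisted labelling of forum posts, assuming the
# OpenAI Python client; the labels, prompt, and model choice are hypothetical.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

LABELS = ["uses_ai_tool", "discusses_ai", "jailbreak_attempt", "unrelated"]

def label_post(post_text: str) -> str:
    """Ask the model to assign one coarse label to a single forum post."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: any instruction-following model
        messages=[
            {"role": "system",
             "content": "Classify the forum post into exactly one of: "
                        + ", ".join(LABELS) + ". Reply with the label only."},
            {"role": "user", "content": post_text[:4000]},  # truncate long posts
        ],
        temperature=0,  # deterministic labels make counts reproducible
    )
    label = response.choices[0].message.content.strip()
    # Fall back to "unrelated" if the model replies outside the label set
    return label if label in LABELS else "unrelated"

posts = [
    "Tried ChatGPT for writing phishing mails, output was useless.",
    "Selling accounts, PM me.",
]
for p in posts:
    print(label_post(p), "|", p)

In practice a study at this scale would combine cheap keyword pre-filtering with spot-checked manual coding; the LLM pass shown here would handle only the posts that survive the filter.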

The research highlights that while cybercriminals have shown interest in AI, the technology has not fundamentally changed how they conduct attacks. Most discussions and reviews within the hacking community describe AI tools as having little practical value in day-to-day activities. “Many of the reviews and discussions describe [AI] tools as not particularly useful,” the study notes, suggesting that enthusiasm for AI is outpacing its real-world utility.


According to the analysis, there is no significant evidence that AI has improved the efficiency or effectiveness of hacking activities. Whether AI is used as a learning aid or to develop more advanced tools, cybercriminals have yet to demonstrate a transformative impact from it. The study adds that AI coding assistants, while helpful for experienced coders, fail to provide a meaningful advantage to those attempting to exploit systems or create security loopholes. “You’ve gotta first learn the ropes of programming by yourself before you can use AI and ACTUALLY benefit from it,” reads one hacker’s post quoted in the study, underscoring the importance of foundational skills before leveraging AI.

Areas of AI impact remain limited to automation-friendly tasks

Despite the limited success in enhancing hacking techniques, the study identifies certain areas where AI has made a noticeable difference. These include tasks that are easily automated, such as generating social media bots, orchestrating romance scams, and executing search engine optimisation (SEO) fraud. Additionally, AI has been used to create fake websites designed to manipulate search result rankings and generate advertising revenue. These applications align with the nature of AI’s strengths in repetitive and scalable processes, rather than in complex, human-driven cybercrime strategies.

Researchers note that the primary influence of AI in cybercrime so far has been its ability to streamline routine tasks. For instance, bots created with AI can be deployed at scale to impersonate users or spread misinformation, while SEO tools can manipulate algorithms to boost the visibility of fraudulent content. However, these successes are modest compared to the broader potential of AI in more sophisticated attacks. The study suggests that hackers are still reliant on manual techniques in most cases, with AI serving as a supplementary rather than a revolutionary tool.

“AI coding assistants are mostly useful for those who are already skilled at coding, so AI models that offer coding help fail to give them any significant ‘bump’ when trying to break into devices or find security workarounds.”

One of the study’s key observations is that cybercriminals tend to use mainstream AI products, such as Anthropic’s Claude or OpenAI’s Codex, rather than specialized models like WormGPT, which are built explicitly for generating malware code or phishing emails. Even so, hackers struggle to get around the mainstream systems’ built-in safeguards: many posts discuss attempts to trick these models into ignoring their safety protocols, but those efforts have yielded limited results.

As a result, cybercriminals are increasingly turning to older, open-source AI models that are easier to manipulate. While these models may lack the advanced features of their mainstream counterparts, they are more accessible and require fewer resources to exploit. The researchers found that these open-source alternatives, though less reliable, are often preferred by hackers due to their simplicity and the ease with which they can be “jailbroken” to override safety settings. This trend suggests that cybercriminals are still adapting to AI technologies rather than fully mastering them.

“Many of the posts analysed by the study are about cyber criminals asking for techniques to bypass the security regulations on those mainstream models, but they seem to have a hard time getting the AI systems to override their safety settings.”

The study’s implications for cybersecurity are clear: the guardrails implemented by AI companies are proving effective in curbing malicious use. This means that even as hackers explore AI’s capabilities, they are encountering barriers that prevent them from exploiting its full potential. The researchers argue that the current state of AI adoption in cybercrime is more about experimentation than transformation, with most users still relying on traditional methods to execute their plans.

While the findings indicate that AI has not yet revolutionized hacking, they also suggest that the technology’s role in cybercrime is evolving. Cybercriminals are beginning to recognize the limitations of AI in complex tasks and are shifting their focus to areas where automation can be most beneficial. This could signal a new phase in the relationship between hackers and AI, where the former adapts to the latter’s constraints rather than pushing it to its limits. However, the study’s authors caution that the current evidence is limited, and further research will be needed to assess the long-term impact of AI on cybercrime tactics.

The analysis also raises questions about the future of AI in hacking. As the technology advances, will cybercriminals find ways to circumvent its safeguards, or will they continue to struggle with its limitations? The study’s conclusion is that, for now, the barriers to AI adoption in the criminal world are substantial. While hackers are eager to explore AI’s potential, their ability to fully integrate it into their workflows remains constrained. This suggests that AI may not be the silver bullet for cybercrime that some had hoped, but it could still play a role in shaping the future of online attacks in more subtle and targeted ways.

With the increasing prevalence of AI in everyday computing, its influence on cybercriminal activity is likely to grow. For now, however, the University of Edinburgh’s findings indicate that efforts to harness AI for malicious purposes are still in their early stages. Until hackers find more effective ways around the technology’s inherent limitations, its impact on their operations will remain modest. The study serves as a reminder that while AI offers powerful tools, its utility in the criminal world depends on how well it can be tailored to hackers’ specific needs, a challenge that so far persists.

Emily Garcia

Emily Garcia is a cyber risk analyst focused on risk assessment, cybersecurity training, and human-centric security strategies. She has designed security awareness programs that help companies reduce insider threats and social engineering risks. On CyberSecArmor, Emily writes practical content on phishing prevention, password security, multi-factor authentication (MFA), and cyber hygiene for individuals and organizations. Her goal is to make cybersecurity accessible and actionable for non-technical audiences.
