Cybercriminals gave AI a go — and came away disappointed, study finds
New research from the University of Edinburgh found that hackers have had little success with AI, whether using it directly in their scams or to develop more effective attack tools.
Cybercriminals are having a hard time incorporating artificial intelligence (AI) in their work, a new analysis found.
A new pre-print study from the University of Edinburgh analysed over 100 million forum posts from cybercriminals using CrimeBB, a database that scrapes data from underground forums.
The data was analysed both manually and by using a large language model (LLM).
While cybercriminals have expressed interest in learning how to use AI tools, the technology has not significantly changed their way of “working,” the study found.
“Many of the reviews and discussions describe [AI] tools as not particularly useful,” the study reads.
Researchers found “no significant evidence” that hackers had any success using AI in improving their hacking activity, either as a learning aid or in developing more effective tools.
AI coding assistants are mostly useful to those who are already skilled at coding, so models that offer coding help fail to give inexperienced hackers any significant “bump” when they try to break into devices or find security workarounds, the study added.
“You’ve gotta first learn the ropes of programming by yourself before you can use AI and ACTUALLY benefit from it,” one post quoted in the study reads.
The main impact AI has had so far on less-than-legal online activity is in easy-to-automate areas: social media bot creation, some romance scams, and search engine optimisation (SEO) fraud, in which fake websites are pushed up search result rankings to make money from advertising.
Reviews suggest that even the most experienced hackers use chatbots to answer coding questions or generate “cheatsheets” to help them code.
The AI that has actually been adopted falls under “mainstream and legitimate products,” such as Anthropic’s Claude or OpenAI’s Codex, rather than cybercrime-specific models such as WormGPT, which hackers designed to produce malware code or phishing emails.
Many of the posts analysed in the study show cybercriminals asking for techniques to bypass the safety restrictions on those mainstream models, but they appear to have a hard time getting the AI systems to override their guardrails.
Instead, cybercriminals are forced to pivot to older, lower-quality open-source AI models that are easier to jailbreak. These tend to be less useful and “require significant resources,” the researchers found.
Their study suggests that the guardrails put in place by AI companies are working — so far.