An alarming watershed for artificial intelligence or an overhyped threat?
AI startup Anthropic’s recent announcement that it had detected the world’s first artificial intelligence-driven hacking campaign has drawn sharply divided reactions from cybersecurity experts.
While some observers are sounding the alarm about the arrival of a long-feared and dangerous inflection point, others are skeptical, arguing that the startup’s account omits key details and raises more questions than it answers.
Anthropic said in a report on Friday that its assistant Claude Code had been manipulated into carrying out 80 to 90 percent of the work in a “large-scale” and “highly sophisticated” cyberattack, with human intervention required “only sporadically”.
Anthropic, the developer of the popular chatbot Claude, said the attacks were aimed at infiltrating government agencies, financial institutions, high-tech companies and chemical manufacturers, but were only successful in a small number of cases.
The San Francisco-based company blamed the attack on Chinese state-sponsored hackers, but did not say how it uncovered the operation or identify the “approximately” 30 organizations it said were targeted.
Roman V. Yampolskiy, an expert on AI and cybersecurity at the University of Louisville, said that while it is difficult to confirm the exact details of Anthropic’s account, there is no question that AI-powered hacking poses a serious threat.
“Modern models can create and adapt exploit code, sift through large amounts of stolen data, and adjust tools faster and cheaper than human teams,” Yampolskiy told Al Jazeera.
“This lowers the skill barrier to entry and increases the scale at which well-resourced actors can operate. We are effectively putting junior cyber operations teams in the cloud that can be rented by the hour.”
Yampolskiy said he expects AI to increase both the frequency and severity of attacks.
Jaime Sevilla, director of Epoch AI, said that while he doesn’t see much new in Anthropic’s report, past experience shows that AI-assisted attacks are viable and likely to become increasingly common.
“Small businesses and government institutions are likely to be hit hardest by this,” Sevilla told Al Jazeera.
“Historically, these companies have not been targets worthy enough for dedicated campaigns and have often lacked investment in cybersecurity, but AI has made them lucrative targets. We expect many of these organizations to adapt by hiring cybersecurity experts, launching vulnerability bounty programs, and using AI to find and patch internal weaknesses.”
While many analysts have called on Anthropic to release more information, others dismiss its claims outright.
After U.S. Sen. Chris Murphy warned that AI-driven attacks will “destroy us” if regulation is not made a priority, Meta’s chief AI scientist Yann LeCun accused him of being played by a company seeking to co-opt regulators.
“They’re scaring everyone with questionable research so that open-source models will be regulated out of existence,” LeCun said in a post on X.
Anthropic did not respond to a request for comment.
Liu Pengyu, a spokesperson for the Chinese embassy in Washington, D.C., said China “consistently and resolutely” opposes all forms of cyberattacks.
“We hope that the relevant parties will characterise the cyber incident on the basis of sufficient evidence, rather than baseless speculation and accusations, and behave in a professional and responsible manner,” Liu told Al Jazeera.
Toby Murray, a computer security expert at the University of Melbourne, said Anthropic had a business incentive to highlight both the danger of such attacks and the ability to counter them.
“Some have questioned Anthropic’s claims, which suggest that the attackers were able to have the Claude AI perform very complex tasks without much of the human oversight that would normally be required,” Murray told Al Jazeera.
“Unfortunately, we do not have hard evidence to tell us exactly what kind of work was done or what kind of oversight was carried out. Therefore, it is difficult to pass judgment either way on these claims.”
Still, Murray said he didn’t find the report particularly surprising considering how effective some AI assistants are at tasks like coding.
“I don’t think AI-enabled hacking will change the types of hacks that occur,” he said.
“But it could cause a change in scale. We should expect to see more AI-powered hacks in the future, and those hacks to be even more successful.”
While AI increases risks to cybersecurity, it will also be critical to strengthening defenses, analysts say.
Fred Heiding, a Harvard University researcher who specializes in computer security and AI security, said he believes AI will bring “huge advantages” to cybersecurity professionals in the long term.
“Today, many cyber operations are held back by a shortage of human cyber experts. AI can help overcome this bottleneck by allowing all systems to be tested at scale,” Heiding told Al Jazeera.
Heiding, who described Anthropic’s account as broadly credible but “overstated”, said the big danger is that AI gives hackers an opportunity to run amok while security experts struggle to keep up with increasingly sophisticated abuses.
“Unfortunately, the defence community is likely to be too slow to adopt new technologies for automated security testing and patching,” he said.
“If that happens, attackers will be able to wreak havoc on systems at the push of a button before defenders can catch up.”
