The impact of evolving AI in cybercrime [Q&A]

Artificial intelligence (AI) has been an evolving trend at the very center of cybersecurity in recent years. However, the release of a wave of new tools such as ChatGPT and Microsoft's Bing chatbot has sparked fresh concerns about the potential for cybercriminals to leverage increasingly sophisticated technologies for nefarious purposes.

We spoke to Zach Fleming, principal architect at Integrity360, to explore whether AI can be used to create sophisticated malware and hacking tools capable of bringing down entire networks. We'll consider which concerns are valid by highlighting the current state of AI, and we'll explore how security teams can best combat the use of AI in cybercrime.

BN: What are some common cybersecurity concerns surrounding the impact of evolving AI?

ZF: From writing complex essays to producing operational code, new AI tools such as ChatGPT, Google Bard and Bing have truly broken new ground, completing a range of tasks in a highly sophisticated manner.

Naturally, however, there are now worries that threat actors may abuse those tools -- and in particular their ability to harness huge amounts of data and expedite processes -- for nefarious means.

Specifically, we're hearing several concerns around the potential for AI to be used to generate malicious code and create even more dangerous malware.

BN: Are these concerns valid?

ZF: We must remember that most cyberattacks aren't technically advanced; they are instead carried out by amateur 'script kiddies' who rely on repeatable scripts and pre-built tools.

Of course, there are some more sophisticated attackers capable of uncovering novel vulnerabilities and developing tailored attack methods to effectively exploit them. However, the truth of the matter is that current AI tools aren’t advanced enough to create new strains of malware that are more dangerous than those we’re currently facing.

Despite advances, AI tools are still limited in several ways. They struggle to navigate situations where there isn't a single definitive answer; they are limited by the data they ingest; and they still require the support of experienced individuals to work effectively.

We still need a combination of technology and skilled people in cybersecurity, and this is no different in cybercrime. Therefore, AI models alone won't suddenly facilitate the creation of the kind of malware that people fear.

BN: What are the genuine threats of AI that people should be aware of?

ZF: This isn't to say there is absolutely nothing to worry about. Indeed, there is the potential for ChatGPT and similar tools to further democratize cybercrime.
Phishing-as-a-service (PhaaS) providers have been enabling attackers with little to no technical skill to carry out attacks for some time through the use of pre-built toolkits. And with new AI platforms being publicly available, there is the potential that they may exacerbate this issue.

As an example, attackers could use ChatGPT to write text impersonating a trusted source more convincingly as they carry out spear-phishing attacks.

BN: How should cybersecurity professionals look to respond to advances?

ZF: There are reasons for concern. However, current AI tools are not sophisticated enough to create advanced malware capable of evading detection and causing serious damage.

We're not saying that AI won't play a more serious role in cybercrime in the future. But at present, the threat posed by AI remains largely theoretical.
Now is not the time to panic. By combining effective security tools with trained professionals and supplementary programs such as employee cyber education initiatives, networks and systems can be protected against most AI-led attacks.

Typically, I would advise organizations to get ahead of the game by embracing AI and machine learning in their own defenses. This will improve familiarity, and such tools are very effective at helping to identify and respond to threats such as malicious behaviors in a network at speed.
By working with a trusted cybersecurity partner, organizations can implement and optimize AI-led cybersecurity tools with ease.
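To make the idea concrete, here is a minimal, purely illustrative sketch of the principle behind ML-assisted network defense that the answer above describes: learn a baseline of normal behavior, then flag activity that deviates sharply from it. The metric (bytes per session), figures, and threshold are assumptions for illustration, not a description of any vendor's actual tooling.

```python
from statistics import mean, stdev

# Hypothetical baseline: bytes transferred per session on a healthy network
baseline = [4800, 5200, 5100, 4900, 5300, 5000, 4700, 5150]
mu, sigma = mean(baseline), stdev(baseline)

def is_anomalous(bytes_transferred: float, threshold: float = 3.0) -> bool:
    """Flag sessions more than `threshold` standard deviations from the baseline."""
    return abs(bytes_transferred - mu) / sigma > threshold

print(is_anomalous(5100))    # False: within the normal range
print(is_anomalous(92000))   # True: an exfiltration-scale transfer
```

Real AI-led security tools model many more signals than a single statistic, but the core pattern -- establish normal, then detect deviation at speed -- is the same.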

Image credit: billiondigital/depositphotos.com

