Artificial intelligence -- for good or evil?


AI is popping up in all sorts of things at the moment, but what happens when it goes wrong or is used for questionable purposes?

A new report from Malwarebytes Labs looks at how AI is being used, with a particular emphasis on cybersecurity, and at the concerns that are growing surrounding its use.

For the security industry, AI's ability to process large volumes of data can help combat the growing number of new malware variants deployed every day. This can enable smarter detections and free up researchers to focus on deeper analysis of more interesting new threats.
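To make that concrete, here is a minimal sketch of the triage idea: train a classifier on labeled samples, auto-handle confident verdicts, and escalate only the uncertain ones to a human analyst. It assumes scikit-learn and uses entirely synthetic features and labels; it is an illustration of the concept, not any vendor's actual pipeline.

```python
# Minimal triage sketch: classify samples, escalate only uncertain ones.
# All features and labels below are synthetic placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Pretend each sample is summarized by a few static features
# (e.g. file entropy, import count, section count -- all made up here).
n = 2000
X = rng.normal(size=(n, 3))
y = (X[:, 0] + 0.5 * X[:, 1] > 0.8).astype(int)  # 1 = malicious (toy rule)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

# Route low-confidence predictions to a human analyst; auto-handle the rest.
proba = clf.predict_proba(X_test)[:, 1]
needs_analyst = (proba > 0.3) & (proba < 0.7)
print(f"accuracy: {clf.score(X_test, y_test):.2f}, "
      f"escalated to analysts: {needs_analyst.sum()} of {len(proba)}")
```

The confidence band is the point of the design: the model absorbs the bulk of routine samples while anything ambiguous still reaches a researcher.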

There are concerns, however, that malware authors could subvert AI-enhanced security platforms, tricking detectors into misclassifying threats or generating false positives and, in the process, damaging the vendor's reputation in the market.
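This kind of evasion is well studied in the adversarial machine learning literature. The sketch below, again using scikit-learn and synthetic data, shows the core idea against a simple linear detector: a small, targeted nudge to a sample's feature vector flips the verdict. Real attackers face the much harder constraint of keeping the file functional while changing its features.

```python
# Minimal sketch of feature-space evasion against a linear detector.
# Entirely synthetic: illustrates the concept, not any real product's model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 5))
y = (X @ np.array([1.0, -0.5, 0.3, 0.0, 0.8]) > 0).astype(int)

clf = LogisticRegression().fit(X, y)

# Take a sample the detector flags as malicious (class 1)...
x = X[y == 1][0]
print("before:", clf.predict([x])[0])

# ...and apply the minimum-norm perturbation along the model's weight
# vector, stepping just past the decision boundary.
w = clf.coef_[0]
margin = clf.decision_function([x])[0]
x_adv = x - (margin + 0.1) * w / np.dot(w, w)
print("after: ", clf.predict([x_adv])[0])
```

A tiny shift in the right direction is enough to cross the boundary, which is why detector robustness, not just raw accuracy, matters to vendors.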

Another worry is the use of AI and machine learning in social engineering, particularly in regard to fake news and 'DeepFakes', where convincing fake videos are created using footage of real people.

"Knowing what is actually a legitimate thing is going to become harder and harder to determine if you're going by what information is available over the internet," says Adam Kujawa, director of Malwarebytes Labs. "That's a huge problem for us as a species, to be able to confirm whether something is real or not, and the technology to do that at the moment basically doesn't exist. We're walking into a world of fake news and DeepFakes, questioning the reality of pretty much everything with no tools to combat that, at least not yet."

Other ways threat actors could use AI include CAPTCHA solving, which is already trivial for machine learning; social media scanning to find people associated with target organizations, providing intelligence for more effective spear phishing campaigns; and generating more convincing spam that can be trained to adapt to each recipient.

Currently there is no evidence of AI-based malware being used in the wild, but Kujawa believes it's only a matter of time: "About three to five years from now we may start seeing AI-controlled malware on the endpoint. It's less likely right now, mainly because of hardware constraints, so it's not there yet but it will be. Researchers at IBM created their own AI malware called DeepLocker which hides in an application and waits for particular conditions to be met and then launches."

You can read more in the full report which is available from the Malwarebytes site.

Image Credit: limbi007 / depositphotos.com

