Fear and loathing in artificial intelligence
Artificial intelligence inspires intrigue, fear and confusion in equal measure. But to thrive in the new era, organizations need to reduce the risks posed by AI and make the most of the opportunities it offers.
This is the conclusion of a new report from the Information Security Forum aimed at helping business and security leaders better understand what AI is, identify the information risks it poses and how to mitigate them, and explore opportunities for using AI in defense.
"AI is creating a new frontier in information security. Systems that independently learn, reason and act will increasingly replicate human behavior -- and like humans they will be flawed, but also capable of achieving great things. AI poses new information risks and makes some existing ones more dangerous," says Steve Durbin, managing director of the ISF. "However, it can also be used for good and should become a key part of every organization's defensive arsenal. Business and information security leaders alike must understand both the risks and opportunities before embracing technologies that will soon become a critically important part of everyday business."
As AI systems are more widely adopted by organizations, they will become increasingly critical to day-to-day business operations. But these systems have inherent vulnerabilities and are at risk from both accidental and adversarial threats. Compromised AI systems can make poor decisions and produce unexpected outcomes.
At the same time, there is the risk of sophisticated AI-enabled attacks -- which have the potential to compromise information and cause severe business impact at a greater speed and scale than ever before. Taking steps both to secure internal AI systems and to defend against external AI-enabled threats will become vitally important in reducing information risk. Organizations need to be ready to adapt their defenses to cope with the scale and sophistication of AI-enabled cyber attacks.
"As early adopters of defensive AI get to grips with a new way of working, they are seeing the benefits in terms of the ability to more easily counter existing threats. However, an arms race is developing. AI tools and techniques that can be used in defense are also available to malicious actors including criminals, hacktivists and state-sponsored groups," adds Durbin. "Sooner rather than later these adversaries will find ways to use AI to create completely new threats such as intelligent malware -- and at that point, defensive AI will not just be a 'nice to have'. It will be a necessity. Security practitioners using traditional controls will not be able to cope with the speed, volume and sophistication of attacks."
The report, Demystifying Artificial Intelligence in Information Security, is available now to ISF Member companies via the ISF website.