What's keeping security experts up at night?


Some of Europe’s top cybersecurity minds have revealed their fears about the future of technology -- with autonomous weapons at the top of their list.

At a panel entitled "The Future of Cyber Security" at this week's IP Expo event in London, the threats of self-aware AI that can write sophisticated malware and of smart weaponry that could be hijacked by cybercriminals were highlighted as major concerns for the coming years.

The discussion was kicked off by Rik Ferguson, global VP of security research at Trend Micro, who noted that the recent petition to the UN to outlaw autonomous weaponry showed that we are living in dangerous times.

"We are already in Skynet, that is the world we live in," he noted. "I have no doubt attackers will start using AI to build autonomous attack machinery online, as well as physical autonomous weaponry."

Ferguson added that he could see a world where the current favorite weapon of terrorists -- vehicles -- could be similarly used by cybercriminals, especially with the rise of autonomous cars.

Fleets of autonomous vehicles, all based on a common software framework, could be hijacked to offer a "distributed arsenal" to criminals and terrorists alike, he noted.

"If we don't concentrate and think of real world ramifications that's where we are going," he said. "So we have to have this conversation, as it is no longer about credit card details, more is at risk."

Ferguson’s view was supported by Mikko Hypponen, chief research officer at F-Secure, who said it was clear now that "autonomous robots will become a reality" -- and soon a potential cyber-threat.

"Look at drones," Hypponen noted, "they may not be autonomous, they are probably operated by someone in Nevada to shoot someone in Syria, but the obvious weakness is the link from drone to human, as that link can be disrupted or cut or spied on. Removing that weakness is simple: make the drone smart enough to work without the human."

"It's going to happen and it is scary as hell."

So what can be done to counter this threat? Ferguson argued that the onus is on the security industry itself to make sure it is prepared for any unexpected threats.

"We as the security industry have a duty to see this coming, and make sure we are building our own toolsets to be able to harness the AI capability, and set it loose on our our systems," he said.

"We can't wait to be the second person to move, because if we do, we are going to be left behind."

This view was backed up by fellow panelist James Lyne, senior security researcher at Sophos, who noted that the industry is struggling to "slaughter the sacred cows of security right now."

"Our future must be about finding better ways to communicate best practice," he noted.

Ferguson supported this view, noting that security awareness and training have been "too shit for too long," leaving companies to suffer unduly. We as humans learn by doing and by seeing the consequences of our actions -- but online we rarely see the impact of our digital mistakes, so it is time, he argued, to make cybersecurity "more human."

Lastly, Hypponen suggested that it is perhaps time for the security industry to look upon itself more favorably, noting that "the fact that we focus on failures makes us forget the fact that we have success stories here."

With media coverage of data breaches and hack attacks reaching new highs, cybersecurity has moved into the spotlight more than ever before, and Hypponen concluded that it’s time that the industry started recognizing its good work too.

"We are making progress, we are seeing improvements, let's focus on the success," he concluded. "Our work is not to secure computers, it's to secure society."

Published under license from ITProPortal.com, a Future plc Publication. All rights reserved.


