London calling: Hey, US, let's chat about cyber AI, the next WannaCry


In 2017, WannaCry caused significant disruption to the public and private sectors, most notably in the UK, exposing vulnerabilities within corporate and government systems. It impacted hospitals, healthcare facilities and social care, causing operations and admissions to be cancelled, delayed or postponed.

The attack exposed a lack of robust cyber security measures and failings in basic IT administration, and it emphasized the importance of investing in strong defenses to safeguard critical public infrastructure. It prompted a renewed focus on cyber security within the UK and initiated efforts to enhance resilience against future cyber threats.

Globally, the cost of recovering from the WannaCry attack is estimated at between $4 billion and $8 billion.

Not long after the WannaCry attack, businesses and governments around the world were impacted by a similar but more devastating attack known as NotPetya. It used some of the same exploits to spread between devices and encrypt the data on, and attached to, them. This automated propagation carried the malware far beyond its initially intended targets, and NotPetya became the first malware reported to have cost in excess of $10 billion.

The costs and impacts of automated attacks are therefore significant, particularly when they spread beyond the bounds and targets their creators intended.

What would the next generation of cyber-attacks look like?

Artificial intelligence (AI)-based attacks would likely possess greater adaptability and evasion capabilities. They could continuously learn from their environment, dynamically adjust their attack vectors, and employ advanced obfuscation techniques to bypass security systems. They might also have the ability to analyse defensive measures and find weaknesses in real time, making them highly challenging to detect and mitigate.

AI-based attacks could have even more profound global impacts than either WannaCry or NotPetya due to their increased sophistication and adaptability. They could target multiple critical sectors simultaneously, causing cascading failures and disrupting essential services on a larger scale.

This may sound fanciful or futuristic, particularly because AI, in the form of ChatGPT, has only just been released and is so far only answering textual questions and requests. However, AI has been around for a very long time and has repeatedly fallen in and out of vogue, showing significant promise and then struggling to deliver. As with many things, it is very often the lesser-known and longer-term developments that hold the greatest promise.

The Cyber Grand Challenge

In 2016, a full year before the NotPetya attack, an event took place as part of the famed DEF CON cyber security conference in Las Vegas, Nevada. The event was the finale to a competition initiated in 2013 by the US Defense Advanced Research Projects Agency (DARPA). The Cyber Grand Challenge, as it was known, carried a $2 million prize for first place and featured no human competitors. Instead, the seven finalists were glowing seven-foot-tall racks of machines that represented the culmination of three years of research and development by teams made up of some of the most brilliant (human) minds in AI.

Security for the event was tight, with only authorized referees being allowed into the arena. At the start of the competition, a network cable was symbolically cut, isolating the competitors from their creators and the outside world.

For the competition itself, DARPA had created an entirely new operating system and a number of applications and services to run on it. These systems had all been created by humans and contained subtle flaws and vulnerabilities similar to ones previously identified in a variety of legitimate production systems. Because the operating system and applications were entirely new, however, every competitor faced a zero-knowledge challenge against which it would be graded.

That grading was based on three metrics (a simple illustrative sketch of how they might combine follows the list):

Defense: Competitors needed to defend against attacks from other competitors. To do this, they dynamically inserted or appended code to prevent others from exploiting the vulnerabilities they discovered.

Functionality: Competitors lost points if their patches impacted functionality, degraded performance, or took systems offline.

Attack: Competitors had to identify vulnerabilities in other competitors’ systems, configurations and code, and then create and successfully execute exploits against them.
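To give a feel for how those three metrics pull against one another, here is a minimal, purely illustrative sketch in Python. The field names, weights and formula are assumptions made for illustration only; they are not DARPA’s actual scoring algorithm.

```python
from dataclasses import dataclass

@dataclass
class RoundResult:
    # Hypothetical per-round observations for one competitor (illustrative, not DARPA's real metrics).
    services_online: int          # services still up and responding
    services_total: int           # services the competitor is responsible for
    performance_overhead: float   # 0.0 = no slowdown from patches, 1.0 = unusable
    proofs_of_vulnerability: int  # successful exploits landed on other competitors

def round_score(r: RoundResult) -> float:
    """Illustrative scoring: availability and functionality multiply the value of attacks,
    so taking a service offline to patch it is costly even if the patch itself is sound."""
    availability = r.services_online / r.services_total
    functionality = max(0.0, 1.0 - r.performance_overhead)
    attack = 1.0 + r.proofs_of_vulnerability
    return availability * functionality * attack

# Example: an aggressive patcher (one service down, 20% overhead) vs. one that stays online.
print(round_score(RoundResult(6, 7, 0.2, 3)))  # patches eagerly, lands more exploits
print(round_score(RoundResult(7, 7, 0.0, 2)))  # stays online, lands fewer exploits
```

Because availability and functionality multiply the value of successful attacks in this toy model, a competitor that takes services down to patch them pays a price across the board, which is exactly the trade-off that shaped the final result.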

To help observers of the competition, ‘pew pew’ maps and human commentators were on hand to narrate the action for each competitor and explain how it fed into the overall scoring. The atmosphere, at least amongst observers and the competitors’ teams, was tense and fraught with drama, especially when, approximately halfway into the multi-round event, ‘Mayhem’, a competitor that had steadily built up a significant lead over its adversaries, stopped working. The development team behind Mayhem asked to reboot the competitor, but the organizers declined their request.

At the time Mayhem stopped working, it had built up a lead of approximately 10,000 points, whereas the gap between third and fourth place, the line separating those who won prize money from those who did not, was less than 1,000 points.

Not all was lost for Mayhem. While the system had stopped attacking and defending, it still earned points for functionality and for keeping its systems online. As a result, the other competitors chipped away at Mayhem’s lead, but more slowly than might have been expected.

It is unclear why Mayhem stopped working, or why it started working again in the final stages of the competition. What is known is that when all was said and done, and the competition was over, Mayhem still came out on top.

The lead that Mayhem had built up in the early stages of the competition had been sufficient to carry it through. Mayhem had built that lead not by being significantly better at attacking or defending, but through the strategy engine it used to decide when to do each. Mayhem did not take its systems offline to patch them as soon as it found a vulnerability. Instead, it only took a system offline and remediated the vulnerability when one of its competitors discovered the flaw and tried to exploit it. While many competitors lost points by taking systems offline to patch them, Mayhem gained points by keeping its systems online, performant and functional.
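As a purely illustrative sketch of that strategy (Mayhem’s real implementation has never been published in this form, and every name below is hypothetical), the logic amounts to deferring a patch until an exploit attempt against a known flaw is actually observed:

```python
from dataclasses import dataclass, field

@dataclass
class Service:
    name: str
    known_flaws: set = field(default_factory=set)  # flaw signatures the competitor has found itself
    patched: set = field(default_factory=set)

def looks_like_exploit(packet: bytes, flaw_signature: bytes) -> bool:
    # Toy detector: treat any traffic containing the flaw's trigger bytes as an exploit attempt.
    return flaw_signature in packet

def decide_action(service: Service, observed_traffic: list[bytes]) -> str:
    """Illustrative 'patch on demand' strategy: keep the service online and only accept
    the downtime and functionality cost of a patch once an exploit attempt is actually seen."""
    for flaw in service.known_flaws - service.patched:
        if any(looks_like_exploit(pkt, flaw) for pkt in observed_traffic):
            service.patched.add(flaw)
            return f"take {service.name} offline briefly and patch {flaw!r}"
    return f"keep {service.name} online and keep earning availability points"

# Example: the flaw stays unpatched, and the service stays up, until an exploit attempt appears.
svc = Service("cgc_service_1", known_flaws={b"\xde\xad\xbe\xef"})
print(decide_action(svc, [b"GET /index"]))
print(decide_action(svc, [b"payload \xde\xad\xbe\xef overflow"]))
```

The design choice is the same one that won the competition: patching has an immediate availability and performance cost, whereas an unexploited vulnerability costs nothing until someone else finds it.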

When it came to attack, Mayhem dominated the field by creating distractions, making other competitors think it had found a vulnerability in one part of a system when in reality it hadn’t and was stealthily exploiting another. This distraction technique was so successful that Mayhem even tricked the human commentators into believing, and announcing, that it had found a vulnerability where there wasn’t one.

While this story of the Cyber Grand Challenge, and of Mayhem, is compelling in itself, it is worth remembering that the event, and the challenge as a whole, was created by DARPA. Perhaps unsurprisingly, the audience included a number of top officials from the US military’s different cyber commands. At least one of them is openly on record as saying, “I want one of those”.

Cyber Command's New Approach

In 2018, two years after the Cyber Grand Challenge and one year after the WannaCry and NotPetya attacks, Plan-X, a US Army project to automate cyber operations, was amalgamated with similar projects from the US Air Force and an obscure and secretive department of the Pentagon, the Strategic Capabilities Office, to create Project Ike. The collaboration between the different entities of the US military and the Pentagon is referred to as Joint Cyber Command and Control (JCC2). Beyond year-on-year funding increases, very little is known about the progress of Project Ike, except that updates from its developers are pushed out every three weeks.

In August 2020, Paul Nakasone (Commander of U.S. Cyber Command, Director of the National Security Agency, and Chief of the Central Security Service) and Michael Sulmeyer (Senior Advisor to the Commander of U.S. Cyber Command) published a paper called ‘Cyber Command’s New Approach’, in which they say:

"It is not hard to imagine an AI-powered worm that could disrupt not just personal computers but mobile devices, industrial machinery, and more."

The shape of things to come

While many will point out that the US and UK are allies, that they carry out joint operations across a number of adversarial frontiers, and that it is unlikely the US would directly target, attack or otherwise impact UK interests, it is worth remembering that the enabling elements of both WannaCry and NotPetya were tools originally developed by, and lost or stolen from, US intelligence agencies.

In July 2023, Lindy Cameron, CEO of the UK’s National Cyber Security Centre (NCSC), addressed an audience on the subject of AI and machine learning (ML). In a social media post about the event, the NCSC summarised the speech, saying: "These technologies will shape our future and the UK. But our adversaries seek to exploit AI for their own ends."

Meanwhile, a month earlier, in June 2023, Jen Easterly, Director of the US Cybersecurity and Infrastructure Security Agency (CISA), noted that the cyber operations of China (one of the biggest investors in, developers of, and leaders in AI) “have shifted from espionage activities to targeting infrastructure and societal disruption”, while adding an ominous context and comparison to the weaponization of AI:

If we can have conversations with our adversaries about nuclear weapons, I think we probably should think about having these conversations with our adversaries on AI, which after all in my view will be the most powerful weapons of this century.

While the idea that AI is the sole purview of large multinational corporations and nation-state actors may be comforting to some, it is worth bearing in mind that AI and ML performance accelerators are widely available, ranging in price from around $30 to many thousands of dollars, and that at least one of the competitors from the Cyber Grand Challenge vowed to always keep their competitor, and future developments, open source.


Mark Cunningham-Dickie is Senior Incident Responder, Quorum Cyber.

