Workforces need the skills to defend against AI-enabled threats


It’s no secret that artificial intelligence (AI) is transforming software development. From automating routine tasks to enhancing code efficiency and optimizing testing processes, AI is helping developers save time, money, and resources. It can also analyze code to detect bugs, security vulnerabilities, and quality issues more effectively than traditional methods. If you’re thinking there’s a "but" coming, you’re right.

The downside is that the same AI technologies that benefit software development can also enhance the capabilities of malware developers. The proliferation of AI is not necessarily fueling new kinds of cyberattacks; rather, it is distributing enhanced proficiency evenly across both legitimate and malicious actors.

As organizations seek to bolster and expand cybersecurity efforts, they need employees who know how to utilize AI as a defense against malware and other threats.

To begin with, there’s nothing mysterious about how malware developers harness AI -- they use it the same way any developer would. Creating malware is the same as creating software, and those who build it can, like any developer, take advantage of GitHub Copilot and other generative AI (GenAI) tools to accelerate the process.

While learning materials may be scant on techniques for Windows API access, evading endpoint detection and response (EDR) tools, or creating command-and-control capabilities, bad actors are resourceful. Malware creators can use AI to accomplish development tasks that are not specific to malware, then combine those techniques to create new capabilities.

These steps are not the same as simply asking a large language model to create malware. At the human level, someone must understand the components, what they are trying to accomplish, and what is and is not possible. But AI accelerates their ability to go from ideation to implementation.

It’s Time to Think Beyond Governance, Risk Management, and Compliance

There is no magic bullet in the battle of “good AI” versus “bad AI.” Workforces remain under-skilled, and shortcomings in cross-enterprise cyber education linger, leaving companies vulnerable to opportunistic attackers. Organizations on the defensive side of cybersecurity can’t just point some form of AI tech at their network and say, “Defend this line.”

Without knowing the right questions -- What are the potential threats? And what form might they take? -- stakeholders can only lean on AI as an assistive tool to help prepare for a cyberattack. In defensive terms, AI will help save time, but its benefit depends on the skill level of the defender using it. While the same is true for malicious actors, their attack profile is more straightforward, especially as it relates to malware development and the creation of social engineering campaigns and collateral.

To protect against threats and avoid common pitfalls when upskilling employees in cybersecurity, organizations must think beyond governance, risk management, and compliance. These passive tools and processes don’t actively protect against threats. Instead, businesses must focus on active security operations, assessments, and the implementation and validation of control measures. Smaller businesses can also benefit from incorporating offensive testing to actively seek out vulnerabilities susceptible to exploitation by attackers.

GenAI is making cyber threat actors more effective at social engineering, not least by removing language barriers. At the same time, the sophistication of tactics used by commodity ransomware actors is increasing, especially regarding malware. More effective initial access leaves the rest of the cyber kill chain as the points where attackers can still be detected and stopped. Defenders need to be trained to identify these post-initial-access attacks, understand how they work, and recognize how they feature in the common playbook behind many recent breaches.
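To make the idea of detecting post-initial-access activity concrete, here is a minimal, hypothetical sketch of the kind of detection logic a SOC analyst might learn to write: flagging process-creation events whose parent/child pairing matches patterns commonly abused after a foothold is gained (for example, an Office application spawning a shell). The event format and the specific pairs are illustrative assumptions, not drawn from any particular product or real telemetry.

```python
# Toy post-initial-access detection: flag process-creation events whose
# parent/child pairing is commonly abused after an attacker gains a
# foothold. The pairs and event schema below are illustrative only.

SUSPICIOUS_PAIRS = {
    ("winword.exe", "powershell.exe"),  # document macro launching a shell
    ("excel.exe", "cmd.exe"),
    ("outlook.exe", "wscript.exe"),     # mail client running a script host
}

def flag_events(events):
    """Return the subset of process-creation events whose
    (parent, child) pair matches a suspicious pattern (case-insensitive)."""
    flagged = []
    for event in events:
        pair = (event["parent"].lower(), event["child"].lower())
        if pair in SUSPICIOUS_PAIRS:
            flagged.append(event)
    return flagged

if __name__ == "__main__":
    sample = [
        {"parent": "explorer.exe", "child": "chrome.exe"},
        {"parent": "WINWORD.EXE", "child": "powershell.exe"},
    ]
    for hit in flag_events(sample):
        print(f"ALERT: {hit['parent']} spawned {hit['child']}")
```

Real detection engineering uses richer telemetry and rule languages, but the underlying skill is the same: knowing which behaviors follow initial access and encoding that knowledge into checks a machine can run continuously.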

Tangible risk reduction occurs when vulnerabilities or misconfigurations are identified and remediated. Once that happens, it’s also crucial to validate every step that was taken to fix the problem.
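One practical way to validate a fix is to re-run the same check that originally flagged the issue. Below is a minimal sketch under assumed conditions: the remediation hardened an OpenSSH server configuration, and the hardening rules shown (`PermitRootLogin no`, `PasswordAuthentication no`) are illustrative examples, not a complete baseline.

```python
# Minimal remediation-validation sketch: after applying a fix, re-check
# the configuration against the settings the fix was supposed to enforce.
# The required settings below are illustrative sshd_config hardening rules.

REQUIRED_SETTINGS = {
    "PermitRootLogin": "no",
    "PasswordAuthentication": "no",
}

def validate_config(config_text):
    """Return a list of (setting, expected, found) tuples for every
    required setting that is missing or has the wrong value."""
    found = {}
    for line in config_text.splitlines():
        line = line.split("#", 1)[0].strip()  # drop comments and whitespace
        if not line:
            continue
        parts = line.split(None, 1)  # "Keyword value" pairs
        if len(parts) == 2:
            found[parts[0]] = parts[1]
    failures = []
    for key, expected in REQUIRED_SETTINGS.items():
        actual = found.get(key)
        if actual != expected:
            failures.append((key, expected, actual))
    return failures
```

An empty result means the fix held; any tuple in the list pinpoints exactly which control drifted, which is the evidence a validation step should produce.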

As both sides do battle, leveraging AI will be inherently more complicated for defenders than for attackers. Despite available solutions to guard against AI-based leaks of proprietary information, the fear of such leaks has led many companies to shy away from AI adoption altogether. Furthermore, since bad actors don’t work within corporate confines, they’re unimpeded by constraints such as data privacy requirements, the pace of organizational adoption, and risk management strategies.

In the absence of restrictions, attackers can adopt AI capabilities more rapidly than their potential targets.

The Skills Needed for a Successful Defense

In real-world terms, two incidents from the past year highlight how an upskilled workforce can be the difference between victory and defeat.

In the first incident, the company had an entire team dedicated to governance and compliance but virtually no hands-on security operations. This vulnerability was compounded by the fact that the skills of the company’s administrators were outpaced by technology. In the wake of the attack, the entire organization was forced to shut down for weeks, conduct its business on paper, and hire external help in order to rebuild from scratch.

In the second attack, the organization had built out its security operations center (SOC) team year over year and continually provided training to upskill its members. They also conducted weekly red team exercises to test the detection of new cyberattack techniques. When the day came that the company was compromised by a zero-day attack, the SOC was ready.

The team detected the breach within 12 hours, identified every area where the attack had spread, and ejected the attacker, averting a potentially multi-million-dollar loss.

This company had all the tools it needed to defend itself, but success would have been impossible without the skilled team members who tracked down the attacker’s activities and had the know-how to lock down the system. Without identifying and fixing vulnerabilities at every point in the attack chain, they would have been hung out to dry.

Organizations that are serious about cybersecurity need workforces that emulate this example.


Aaron M. Rosenmund is Senior Director of Content Strategy & Curriculum at Pluralsight. He is a cybersecurity operations subject matter expert, with a background in federal and business defensive and offensive cyber operations and system automation. Leveraging his administration and automation experience, Aaron actively contributes to multiple open and closed source security operation platform projects and continues to create tools and content to benefit the community. As an educator and cybersecurity researcher at Pluralsight, he is focused on advancing the cybersecurity workforce and technologies for business and national enterprises alike. In support of the Air National Guard, he contributes those skills part time in various initiatives to defend the nation in cyberspace.

© 1998-2024 BetaNews, Inc. All Rights Reserved.