Why AI is essential to securing software and data supply chains

Supply-chain vulnerabilities loom large on the cybersecurity landscape, with incidents such as SolarWinds, 3CX, Log4Shell and now XZ Utils underscoring the potentially devastating impact of these breaches. The latter two, both rooted in open source software (OSS), illustrate a growing attack vector. In fact, nearly three-quarters (74 percent) of UK software supply chains have faced cyber attacks within the last twelve months.

Expect attacks on the open source software supply chain to accelerate, with attackers automating attacks against widely used open source projects and package managers. Many CISOs and DevSecOps teams are unprepared to implement controls in their existing build systems to mitigate these threats. In 2024, DevSecOps teams will move beyond shift-left security models in favor of “shifting down”: using AI to automate security out of developers’ workflows.

Here are the factors fueling the increase in software supply-chain attacks and the role of AI in helping developers work more efficiently while creating more secure code.

The OSS supply chain and attacks

Open source libraries and languages form the foundation of more than 90 percent of the world’s software. In a US survey of nearly 300 IT and IT security professionals, 94 percent reported that their companies use open source software, and 57 percent said they use multiple open source platforms. Exactly half of the respondents rate the threat level posed by open source software as “high” or “extreme,” while another 41 percent view it as “moderate.” At the time of writing, details of the backdoor implanted in the XZ Utils library, along with several other compromised OSS packages, have just been published. The ubiquity of open source worldwide is one key factor fueling the rise of supply chain attacks.

The role of data governance and data supply chains 

Security pros must also consider how security vulnerabilities extend to their data supply chains. Organizations typically bring externally developed software in through well-defined software supply chains, but their data supply chains often lack clear mechanisms for understanding or contextualizing the data they carry. In contrast to software’s structured systems and functions, data is frequently unstructured or semi-structured and is subject to a wide array of regulatory standards.

Many companies are building AI or ML systems on top of enormous data pools drawn from heterogeneous sources. Models published to public model zoos often ship with little visibility into the code and data used to produce them. Software engineers need to handle these models and datasets just as carefully as they do the code going into the software they’re creating, paying close attention to provenance.
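
One lightweight way to put that provenance checking into practice is to pin a cryptographic digest for every external model or dataset and refuse to load anything that does not match. The Python sketch below assumes a hypothetical registry of vetted SHA-256 digests and a placeholder artifact path; it illustrates the idea rather than any particular vendor’s implementation.

```python
import hashlib
from pathlib import Path

# Hypothetical registry of digests recorded when each artifact was vetted.
# The path and digest below are placeholders, not real project values.
EXPECTED_SHA256 = {
    "models/sentiment.onnx": "0" * 64,  # replace with the vetted artifact's real hash
}

def verify_artifact(path: str) -> bool:
    """Return True only if the artifact's SHA-256 digest matches its pinned value."""
    expected = EXPECTED_SHA256.get(path)
    if expected is None:
        return False  # unpinned artifacts are treated as untrusted
    try:
        digest = hashlib.sha256(Path(path).read_bytes()).hexdigest()
    except OSError:
        return False  # missing or unreadable artifacts are also untrusted
    return digest == expected

if __name__ == "__main__":
    artifact = "models/sentiment.onnx"
    if not verify_artifact(artifact):
        raise SystemExit(f"Refusing to load {artifact}: unpinned or digest mismatch")
    print(f"{artifact} verified; safe to load")
```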

DevSecOps teams must assess the liabilities of the data they use, especially when building AI tools on top of LLMs. That demands careful management of the data flowing into and out of models to prevent the accidental transmission of sensitive data to third parties such as OpenAI.
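
One illustrative safeguard is to strip obviously sensitive values from prompts before they ever leave the organization’s network. The short Python sketch below uses a couple of regular expressions and common token prefixes purely for illustration; a production system would lean on a dedicated data-classification service instead.

```python
import re

# Illustrative patterns only; a real deployment would use a proper
# data-classification service rather than a handful of regexes.
REDACTION_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "token": re.compile(r"\b(?:sk|ghp|glpat)-[A-Za-z0-9_-]{16,}\b"),
}

def redact(prompt: str) -> str:
    """Mask likely-sensitive substrings before the prompt leaves the network."""
    for label, pattern in REDACTION_PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED_{label.upper()}]", prompt)
    return prompt

if __name__ == "__main__":
    raw = "Summarize the ticket from jane.doe@example.com (token glpat-abc123def456ghi789)"
    print(redact(raw))  # sensitive values are masked before any third-party call
```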

Organizations should adopt strict policies outlining the approved usage of AI-generated code and, when incorporating third-party AI platforms, conduct thorough due diligence to ensure their data will not be used for AI/ML model training or fine-tuning.

The fix: transition from ‘shift-left’ to ‘shift-down’

The industry adopted the shift-left concept a decade ago to catch security flaws early in the software development lifecycle and to improve developer workflows. Even so, defenders have long been at a disadvantage, and AI has the potential to level the playing field. As DevSecOps teams navigate the intricacies of data governance, they must also assess how the evolving shift-left paradigm affects their organizations’ security postures.

Companies will begin moving beyond shift-left, embracing AI to fully automate security processes and remove them from the developer’s workflow. This is called “shifting down” because it pushes security into automated, lower-level functions in the tech stack instead of burdening developers with complicated security decisions.
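
As a concrete, if simplified, illustration of shifting down, consider a dependency check that runs automatically as a pipeline gate rather than as something developers must remember to do. The Python sketch below assumes a hypothetical convention in which every entry in requirements.txt sits on one line, pinned to an exact version with an inline hash; it is a sketch of the pattern, not a replacement for a real dependency-scanning tool.

```python
"""A minimal sketch of a "shift-down" style gate: a check that runs automatically
as a pipeline job rather than in the developer's editor. It assumes a hypothetical
requirements.txt convention where each dependency sits on one line, pinned to an
exact version with an inline --hash value, so unvetted or floating dependencies
never reach a build."""

import sys
from pathlib import Path

def unpinned_lines(requirements: str) -> list[str]:
    """Return every dependency line that lacks an exact pin or a hash."""
    offending = []
    for line in requirements.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and comments
        if "==" not in line or "--hash=" not in line:
            offending.append(line)
    return offending

if __name__ == "__main__":
    path = Path(sys.argv[1] if len(sys.argv) > 1 else "requirements.txt")
    bad = unpinned_lines(path.read_text())
    if bad:
        print("Unpinned or unhashed dependencies found:")
        for entry in bad:
            print(f"  {entry}")
        sys.exit(1)  # fail the pipeline job; no up-front developer action required
    print("All dependencies are pinned and hashed")
```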

GitLab’s Global DevSecOps Report: The State of AI in Software Development found that developers spend only 25 percent of their time generating code. AI can lift their output by streamlining the remaining 75 percent of their workload. That is one way to leverage AI’s capacity to solve specific technical issues and improve the efficiency and productivity of the entire software development lifecycle.

Expect to look back on 2024 as the year that escalating threats to OSS ecosystems and global software supply chains catalyzed substantial changes in cybersecurity strategies, including a heightened reliance on AI to safeguard digital infrastructure. The cybersecurity landscape is already transforming, with a growing focus on mitigating supply chain vulnerabilities, enforcing data governance and incorporating AI into security measures. This transformation promises to steer DevSecOps teams toward software development processes with efficiency and security at the forefront.

Josh Lemos is Chief Information Security Officer, GitLab.
