83 percent of organizations use AI to generate code despite concerns

A survey of 800 security decision-makers across the US, UK, Germany and France reveals that 92 percent of security leaders have concerns about the use of AI-generated code within their organization.

In spite of these concerns, though, the study from Venafi finds that 83 percent of organizations use AI for coding, and open source software is present in 61 percent of applications.

The report finds 72 percent feel they have no choice but to allow developers to use AI to remain competitive, and 63 percent have considered banning the use of AI in coding due to the security risks.

There are particular worries around security: 66 percent of survey respondents say it is impossible for security teams to keep up with AI-powered developers. As a result, security leaders feel they are losing control and that businesses are being put at risk, with 78 percent believing AI-developed code will lead to a security reckoning and 59 percent losing sleep over the security implications of AI.

In addition, 63 percent of security leaders think it is impossible to govern the safe use of AI in their organization because they don't have visibility into where AI is being used. Despite these concerns, fewer than half of companies (47 percent) have policies in place to ensure the safe use of AI within development environments.

"Security teams are stuck between a rock and a hard place in a new world where AI writes code. Developers are already supercharged by AI and won't give up their superpowers. And attackers are infiltrating our ranks -- recent examples of long-term meddling in open source projects and North Korean infiltration of IT are just the tip of the iceberg," says Kevin Bocek, chief innovation officer at Venafi. "Anyone today with an LLM can write code, opening an entirely new front. It’s the code that matters, whether it is your developers hyper-coding with AI, infiltrating foreign agents or someone in finance getting code from an LLM trained on who knows what. So it's the code that matters! We have to authenticate code from wherever it comes from."

Open source presents concerns too. On average, security leaders estimate that 61 percent of their applications use open source -- although GitHub puts the figure as high as 97 percent. This could present risks, given that 86 percent of respondents believe open source code encourages speed rather than security best practice amongst developers.

Yet trust levels are high: 90 percent of security leaders say they trust code in open source libraries, with 43 percent saying they have complete trust -- yet 75 percent say it is impossible to verify the security of every line of open source code. As a result, 92 percent of security leaders believe code signing should be used to ensure open source code can be trusted.
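To give a sense of what that kind of verification can look like in practice, here is a minimal sketch of checking a detached signature on a downloaded open source artifact before it enters a build. It assumes the publisher signs releases with an Ed25519 key distributed out of band and uses the third-party Python cryptography library; the file names and key path are hypothetical, not taken from the report.

```python
# Minimal sketch: verify a detached Ed25519 signature over a downloaded
# open source artifact before using it in a build. Assumes the publisher's
# public key was obtained out of band; file names here are hypothetical.
from cryptography.hazmat.primitives import serialization
from cryptography.exceptions import InvalidSignature


def verify_artifact(artifact_path: str, signature_path: str, pubkey_path: str) -> bool:
    """Return True only if the artifact matches its detached signature."""
    with open(pubkey_path, "rb") as f:
        public_key = serialization.load_pem_public_key(f.read())
    with open(artifact_path, "rb") as f:
        artifact = f.read()
    with open(signature_path, "rb") as f:
        signature = f.read()
    try:
        public_key.verify(signature, artifact)  # raises InvalidSignature on any mismatch
        return True
    except InvalidSignature:
        return False


if __name__ == "__main__":
    ok = verify_artifact("package-1.2.3.tar.gz",
                         "package-1.2.3.tar.gz.sig",
                         "publisher_pubkey.pem")
    print("signature valid" if ok else "signature INVALID -- do not use")
```

The point of a check like this is that trust attaches to the publisher's key rather than to wherever the code happened to be downloaded from, which is the argument the surveyed security leaders are making for code signing.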

"The recent CrowdStrike outage shows the impact of how fast code goes from developer to worldwide meltdown," Bocek adds. "Code now can come from anywhere, including AI and foreign agents. There is only going to be more sources of code, not fewer. Authenticating code, applications and workloads based on its identity to ensure that it has not changed and is approved for use is our best shot today and tomorrow. We need to use the CrowdStrike outage as the perfect example of future challenges, not a passing one-off."

The full report is available from the Venafi site.

Image credit: meshcube/depositphotos.com

