Developers turn to generative AI despite security risks
In a survey of 800 developer (DevOps) and application security (SecOps) leaders, 97 percent report using GenAI technology today, and 74 percent say they feel pressured to use it despite identified security risks.
The research, from software supply chain management company Sonatype, shows 45 percent of SecOps leads have already integrated generative AI into the software development process, compared to only 31 percent of DevOps leads.
This may be because SecOps leads see greater time savings than their DevOps counterparts, with 57 percent saying generative AI saves them at least six hours a week compared to only 31 percent of DevOps respondents.
When asked about the most positive impacts of this technology, DevOps leads report faster software development (16 percent) and more secure software (15 percent). SecOps leads cite increased productivity (21 percent) and faster issue identification/resolution (16 percent) as their top benefits.
On the other side of the coin, more than three-quarters of DevOps leads say the use of generative AI will result in more vulnerabilities in open source code. Perhaps surprisingly, SecOps leads are less concerned at 58 percent. Also, 42 percent of DevOps respondents and 40 percent of SecOps leads say a lack of regulation could deter developers from contributing to open source projects.
Leads in both fields want to see more regulation. Asked who should be responsible for regulating the use of generative AI, 59 percent of DevOps leads and 78 percent of SecOps leads say that responsibility should be shared by government and individual companies.
"The AI era feels like the early days of open source, like we're building the plane as we're flying it in terms of security, policy and regulation," says Brian Fox, co-founder and CTO of Sonatype. "Adoption has been widespread across the board, and the software development cycle is no exception. While productivity dividends are clear, our data also exposes a concerning, hand-in-hand reality: the security threats posed by this still-nascent technology. With every innovation cycle comes new risk, and it's paramount that developers and application security leaders eye AI adoption with an eye for safety and security."
Licensing and compensation for AI-generated content are a concern too. Notably, rulings against copyright protection for AI-generated art have already prompted discussion about how much human input is needed to meet what current law defines as true authorship. In the absence of settled copyright law, 40 percent of respondents agree that creators should own the copyright for AI-generated output, and both groups overwhelmingly agree that developers should be compensated for code they wrote if it is used in open source artifacts in LLMs.
You can find out more on the Sonatype site.