70 percent of DevSecOps professionals can't identify AI source code origins
Almost 70 percent of DevSecOps professionals say they cannot trace the origins of AI-generated source code, creating significant security risks, according to a new report.
The study from JFrog finds that most software developers and cybersecurity teams lack well-defined visibility, provenance, and governance for AI and machine learning (ML) source code usage, leaving many organizations at risk.
Few reliable methods exist for companies to determine whether source code comes from humans, large language models (LLMs), or generative AI (GenAI). Consequently, 68 percent of organizations say they cannot trace the origin of code produced by these tools, and 59 percent still depend on manual processes to enforce 'training data' policies.
"AI has had a profound impact on DevSecOps, but the reality is we're still scratching the surface on what many of these platforms and software components can do amidst a rapid adoption cycle," says Moran Ashkenazi, SVP and CISO at JFrog. "This survey makes it clear that AI and ML development still largely operates in a silo, creating challenges in terms of visibility and security. DevSecOps leaders need to treat AI and ML models like software packages, meaning they should be managed with the same level of security, visibility, and control currently applied to the rest of their software supply chain."
Reports of attacks on development languages and infrastructure, manipulation of AI engines to expose sensitive data, and threats to overall software integrity are increasingly prevalent. Additionally, government regulations for AI, such as the White House Executive Order and the EU AI Act, signal a new era of accountability for organizations looking to leverage AI for business needs and to gain a competitive edge.
A large majority of firms (79 percent) believe security concerns are slowing their adoption of AI/ML technology and/or the integration of AI/ML features into the software they build.
In addition, nearly two-thirds (64 percent) of organizations lack full confidence in their ability to meet emerging AI regulatory standards in software development.
The full report is available from the JFrog site.
Image credit: Jirapong Manastrong/Dreamstime.com