Google builds new AI agent to improve code security


Google has released information on a new AI-powered agent that automatically improves code security by fixing critical software vulnerabilities.
CodeMender has been built over the past six months, and in that time the company has already upstreamed 72 security fixes to open source projects, including projects as large as 4.5 million lines of code.
Software vulnerabilities are notoriously difficult and time-consuming for developers to find and fix. As breakthroughs in AI-powered vulnerability discovery accelerate, it will become ever harder for humans alone to keep up.
CodeMender takes a comprehensive approach to code security that’s both reactive, instantly patching new vulnerabilities, and proactive, rewriting and securing existing code and eliminating entire classes of vulnerabilities in the process. By automatically creating and applying high-quality security patches, CodeMender’s AI-powered agent helps developers and maintainers focus on building good software.
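The article doesn't say which vulnerability classes CodeMender targets, but as an illustrative sketch (not taken from Google's announcement), rewriting string-built SQL into parameterized queries is a classic example of a change that eliminates an entire class of flaws, in this case SQL injection:

```python
import sqlite3

def find_user_unsafe(conn, name):
    # Vulnerable pattern: attacker-controlled input is spliced into the query.
    return conn.execute(f"SELECT id FROM users WHERE name = '{name}'").fetchall()

def find_user_safe(conn, name):
    # Parameterized query: the value is passed as data, so injection is impossible.
    return conn.execute("SELECT id FROM users WHERE name = ?", (name,)).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice')")

# A classic injection payload returns every row through the unsafe path...
print(find_user_unsafe(conn, "' OR '1'='1"))   # [(1,)]
# ...but matches nothing once it's treated as plain data.
print(find_user_safe(conn, "' OR '1'='1"))     # []
```

The proactive rewrite removes the whole bug class rather than patching one exploitable call site, which is the distinction the paragraph above draws.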
CodeMender uses recent Gemini Deep Think models to power an autonomous agent capable of debugging and fixing complex vulnerabilities. Robust tools let it reason about code before making changes, and those changes are automatically validated to make sure they're correct and don't introduce regressions.
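Google hasn't published the details of that validation step, but the core idea can be sketched in a few lines: accept a candidate fix only if it preserves existing behavior and actually closes the hole. All names below are hypothetical.

```python
def validate_candidate(candidate, checks):
    """Accept a candidate fix only if every check passes.

    Illustrative sketch only: a real pipeline would run the project's
    full test suite and re-attempt the original exploit.
    """
    return all(check(candidate) for check in checks)

# Toy target: an out-of-bounds read, and a candidate patch for it.
def buggy_get(items, i):
    return items[i]                                    # crashes on bad i

def patched_get(items, i):
    return items[i] if 0 <= i < len(items) else None   # bounds-checked

def no_regression(get):
    # Existing in-range behavior must be preserved.
    return get(["a", "b"], 1) == "b"

def vulnerability_fixed(get):
    # The out-of-bounds access must no longer raise.
    try:
        return get(["a", "b"], 5) is None
    except IndexError:
        return False

checks = [no_regression, vulnerability_fixed]
print(validate_candidate(buggy_get, checks))    # False: still vulnerable
print(validate_candidate(patched_get, checks))  # True: correct, no regression
```

Gating every candidate patch on both conditions is what lets an agent propose fixes aggressively without shipping regressions.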
Despite this early success, Google is taking a cautious approach and focusing on reliability: currently, every patch generated by CodeMender is reviewed by human researchers before it's submitted upstream. Google is also working with the open source community to help developers keep their codebases secure.
You can read more about CodeMender on the Google developer blog.
Google is also announcing updates to its vulnerability rewards program (VRP) and Secure AI Framework (SAIF).
Google's VRPs have already paid out over $430,000 for AI-related issues alone, but to sharpen this collaboration, the company is launching a dedicated AI VRP. The new program simplifies reporting by unifying abuse and traditional security issues into a single, comprehensive reward table, ensuring clear scope and maximum incentive for finding high-impact flaws.
At the same time, it's expanding SAIF to focus on AI agents and launching an Agent Risk Map that illustrates how agentic risks fit into a comprehensive, full-stack view of GenAI risks for security practitioners.
Image credit: BiancoBlue/depositphotos.com