Rethinking AppSec for the AI era [Q&A]

The application security landscape has always been complex, and teams can spend too much time hunting down vulnerabilities. As AI becomes more widely used in development, there are even greater risks to consider.

We spoke to Yossi Pik, co-founder and CTO at Backslash Security, to discuss how AppSec needs to adapt to the greater use of AI.

BN: As AI -- and LLMs, in particular -- become more common in code development, what key security risks and opportunities do you see emerging, and how should security teams adapt?

YP: AI is speeding up software development -- but it’s scaling insecure code just as quickly. In fact, our research found that top LLMs, by default, generate code with vulnerabilities.

Blocking developers from using GenAI tools for coding is neither productive nor realistic; instead, we need security that’s contextual, real-time, and embedded into the development process -- in fact, as transparent to developers as possible. With the right guardrails, GenAI isn’t a liability -- it’s an opportunity to scale secure development. Application security tools must evolve to understand code logic, highlight what truly matters, and guide developers at the point of creation.

BN: The surge in ‘vibe coding’ is changing how software is written. How does application security need to evolve to match this shift?

YP: Vibe coding -- fast, intuitive, and AI-driven -- is fundamentally changing the pace and nature of software creation, and AppSec has to keep up. Instead of blocking the flow, AppSec should plug into it, embedding real-time guidance directly into developers’ tools.

The first challenge is visibility. Today, security teams often don’t know whether -- or where -- developers are using vibe coding tools. IDEs have largely sat outside the “perimeter” of security teams, other than perhaps being used to provide feedback to developers on vulnerabilities -- but this new trend requires better security governance of the entire development environment.

This is where ‘vibe-securing’ comes in: seeing where vibe coding is used; ensuring there are embedded prompt rules guiding the LLM to produce secure code; and integrating real-time, conversational security feedback -- delivered through Model Context Protocol (MCP) servers -- that helps developers fix issues as they code.
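The “embedded prompt rules” idea can be sketched in a few lines: prepend a fixed set of secure-coding rules to every code-generation request before it reaches the LLM. The rule text and the `build_prompt` helper below are illustrative placeholders, not a real vendor API.

```python
# Hypothetical sketch: steering an LLM toward secure output by embedding
# security "prompt rules" in every code-generation request.
# SECURITY_RULES and build_prompt() are illustrative, not a real product API.

SECURITY_RULES = """\
When generating code:
- Parameterize all SQL queries; never concatenate user input.
- Validate and canonicalize file paths before use.
- Never hard-code secrets; read them from the environment.
"""

def build_prompt(user_request: str) -> str:
    """Combine the embedded security rules with the developer's request."""
    return f"{SECURITY_RULES}\n---\n{user_request}"

prompt = build_prompt("Write a function that looks up a user by email.")
```

The same rules travel with every request, so the guidance is transparent to the developer rather than a separate review step.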

BN: The concept of ‘digital twins’ is gaining traction across many areas of cybersecurity. How is this innovation transforming AppSec, and what role can AI play in maximizing its effectiveness?

YP: Digital twins of the application give security teams a complete, contextual view of how code behaves -- combining custom and open-source components in one model. Unlike traditional scanners, they simulate changes and assess real-world impact without touching production environments. That makes it faster and safer to identify what’s truly exploitable in real-world execution without waiting for an incident to happen. At Backslash, we call this ‘triggerability,’ referring to whether an application’s use of vulnerable components can actually be exploited in a given environment.
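One way to picture triggerability is as a reachability question over the application’s call graph: a vulnerable function in a dependency only matters if the application can actually reach it. The toy graph and symbol names below are purely illustrative, not Backslash’s actual model.

```python
# Toy illustration of "triggerability": a vulnerable dependency function
# is only a real risk if the app's call graph can reach it.
# The call graph and vulnerable symbol are made up for this sketch.

from collections import deque

# caller -> callees
CALL_GRAPH = {
    "app.main": ["app.parse_upload", "utils.log"],
    "app.parse_upload": ["libarchive.extract"],
    "libarchive.extract": ["libarchive._unsafe_path_join"],  # hypothetical CVE
    "utils.log": [],
}

def is_triggerable(entry_point: str, vulnerable_symbol: str) -> bool:
    """Return True if vulnerable_symbol is reachable from entry_point."""
    seen, queue = set(), deque([entry_point])
    while queue:
        fn = queue.popleft()
        if fn == vulnerable_symbol:
            return True
        if fn in seen:
            continue
        seen.add(fn)
        queue.extend(CALL_GRAPH.get(fn, []))
    return False

print(is_triggerable("app.main", "libarchive._unsafe_path_join"))  # True
print(is_triggerable("utils.log", "libarchive._unsafe_path_join"))  # False
```

In this toy model, a scanner that flags every vulnerable dependency would report both paths; a reachability check surfaces only the one the application can actually trigger.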

AI-powered twins, like Backslash’s App Graph, take this even further, surfacing only the risks that matter with near-runtime accuracy. It’s the precision and speed AppSec needs to keep up with modern code development.

BN: What are your thoughts on emerging AppSec strategies like ASPM (Application Security Posture Management) -- especially considering the new risks introduced by AI-generated and open-source code?

YP: With code being written faster than ever, security teams can’t afford to chase every vulnerability -- they need to focus on the ones that count. Alert overload has been a pervasive issue in AppSec for some time. ASPM aims to solve this overload by adding yet another layer of aggregation over multiple scan results, and adding context to them after the fact. But it doesn’t address the root cause -- old approaches are yielding poor results in the first place because most were created at least a decade ago.

In today's coding climate, CISOs need to go beyond solving second-order issues and change the paradigm. A holistic application security approach examines the entire codebase -- open source, proprietary, and AI-generated code alike -- to produce a high-quality list of the truly critical vulnerabilities that can be exploited in specific, real-world environments.

BN: In an increasingly AI-driven world, what other emerging trends do you anticipate will shape the future of AppSec?

YP: Security tools powered by AI hold promise, but they’re still playing catch-up with the rapid rise of AI in software development. Just as with open source and CI/CD, security is once again struggling to keep pace, and GenAI will likely widen that gap -- at least in the short term. But AI also has the potential to give AppSec teams more control from the start, opening the door to smarter, more adaptive security models that work with developers -- not as an afterthought, and not as ‘coding AI vs. security AI’ battling it out. The goal now is to build application security programs that are fully integrated into the coding process: fast, contextual, and ready to scale at the speed of code.

Image credit: BiancoBlue/depositphotos.com

BetaNews, your source for breaking tech news, reviews, and in-depth reporting since 1998.

