AI is a level playing field -- how secure by design can tip the scales [Q&A]


Vibe coding is currently all the rage, with more than 97 percent of respondents to a survey earlier this year reporting that they have used AI coding tools at work.

The adoption of these tools only continues to grow, but it comes with a catch: attackers are employing the same techniques. We spoke to Pieter Danhieux, co-founder and CEO of Secure Code Warrior, to discuss how vibe coding is redefining the software development landscape, how malicious actors are leveraging this technology, and why organizations need to implement secure-by-design strategies from the outset.

BN: Vibe coding is very popular. Can you explain how this approach is redefining the software development landscape and enterprise cybersecurity strategies?

PD: Vibe coding, agentic AI coding and other AI-powered software development models are here to stay, and they have already greatly impacted the way many developers approach their jobs. Vibe coding specifically enables both developers and non-developers to guide software development through prompts, using agentic AI coding tools to significantly accelerate code creation. Basically, you’re trusting the AI to develop software on auto-pilot. From the developer perspective, this technology can offer several advantages, including increased productivity, rapid development and the freedom to explore and innovate. However, to reap the benefits and actually realize the productivity gain, security must be taken seriously and adequately prepared for, both by each developer and by the enterprise security program as a whole.

BN: As excitement continues to grow around the potential benefits of AI-assisted coding technology, many of the flaws fly under the radar. Can you talk about your concerns with this technology?

PD: As mentioned above, this shift in technology can have its benefits, but the key is that it should be used by security-proficient developers. It is unwise to ignore the blatant risks and challenges that come with AI-assisted coding technology being leveraged by developers who do not have the appropriate level of expertise, which, it turns out, is a larger group than you might think. Almost 90 percent of developers have reported struggling to practice secure coding. So, while pairing skilled developers with AI tooling can boost productivity and open areas of opportunity, most developers lack this necessary level of expertise. As a result, we need to be cautious with AI, as it can accelerate the creation and deployment of insecure code and increase the risk of hidden bugs, security vulnerabilities and technical debt if handled poorly.

BN: Can you point to a specific scenario where these tools have caused more harm than good? What could have prevented this?

PD: The issue is that vibe coding is still very new; it has simply seen relatively fast, widespread adoption. In the very near future, we are likely to see the result of this swift uptake, with vibe-coded apps and sites hitting the mainstream market. That is the point where we will get a real sense of the cyber risk involved, and how damaging potential attacks might be.

However, data from BaxBench has revealed that no current AI tool can consistently produce secure code to the level required for safe enterprise deployment. Further, vibe coding startup Lovable recently made headlines for failing to address a critical authentication vulnerability in its software -- affecting 70 vibe-coded applications so far -- that allows threat actors to access users’ PII, including names, email addresses and payment information, as well as secret API keys that attackers could use to run up charges billed to the compromised customers.
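To make that class of flaw concrete, here is a minimal, hypothetical Python sketch of a broken access-control pattern. The record store, field names and functions are illustrative assumptions, not Lovable's actual code or the specific bug involved: the insecure version hands back any record, including PII and API keys, to anyone who can guess an ID, while the secure version checks ownership first.

```python
# Hypothetical illustration of a missing authorization check -- not actual Lovable code.
RECORDS = {
    1: {"owner": "alice", "email": "alice@example.com", "api_key": "key-alice"},
    2: {"owner": "bob", "email": "bob@example.com", "api_key": "key-bob"},
}

def get_record_insecure(record_id: int, requester: str):
    # Any caller can read any record simply by guessing its ID;
    # the requester's identity is never checked against ownership.
    return RECORDS.get(record_id)

def get_record_secure(record_id: int, requester: str):
    # Enforce ownership before returning sensitive fields.
    record = RECORDS.get(record_id)
    if record is None or record["owner"] != requester:
        raise PermissionError("not authorized to view this record")
    return record

print(get_record_insecure(1, "bob"))   # leaks alice's PII and API key
print(get_record_secure(1, "alice"))   # allowed: alice owns record 1
```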

The only way we can prevent an onslaught of cyberattacks affecting vibe-coded applications is to ensure that only security-proficient developers are sending them to production. These tools are fun, and it’s exciting for non-developers to experiment with them, but coding an application or site that handles sensitive data capture, payment gateways and other complex features, without the skills to ensure it is built with robust security controls, is a recipe for disaster.

BN: It’s important to remember that the technology and processes being used to improve defenses are almost always being utilized by the bad guys, too. Are AI-assisted tools helping malicious actors expand their attack surface?

PD: Absolutely. Malicious actors are using AI in many of the same ways that defenders are, but with a very different goal in mind. AI has helped threat actors with automation, speed, innovation and adaptation, to name just a few areas. Additionally, when code is produced with vulnerabilities, you are essentially rolling out the welcome mat for attackers. Hackers can and will take advantage of mistakes such as unchecked inputs and weak authentication, and there are several ways vulnerable code can be exploited, further underscoring the need for strong security guardrails.
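As a minimal sketch of the "unchecked input" mistake mentioned above -- a generic Python example using an assumed table and function names, not taken from any specific codebase -- compare a query built by string concatenation with its parameterized equivalent:

```python
# Generic illustration of unchecked input (SQL injection) -- assumed schema, not from any real app.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, email TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'alice@example.com')")

def find_user_unsafe(name: str):
    # Concatenating user-supplied text into SQL lets an attacker rewrite the query.
    return conn.execute(f"SELECT * FROM users WHERE name = '{name}'").fetchall()

def find_user_safe(name: str):
    # A parameterized query treats the input strictly as data, not as SQL.
    return conn.execute("SELECT * FROM users WHERE name = ?", (name,)).fetchall()

print(find_user_unsafe("' OR '1'='1"))  # returns every row -- the input was never checked
print(find_user_safe("' OR '1'='1"))    # returns nothing -- the literal string matches no user
```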

BN: How can organizations best protect themselves from evolving threats in a landscape that is constantly shifting, and how can secure by design help defenders stay ahead of adversaries?

PD: Secure by design (SbD) is what all organizations should be looking to achieve. Security must be implemented from the very start of the software development lifecycle (SDLC), and collaboration between security leaders and developers is paramount. It is not in our best interest to attempt to bolt security on when applications are already in production, as it puts us significantly behind. By prioritizing security at the beginning of the SDLC, we make it more difficult for hackers to gain an edge and we tip the scales in our favor.

What makes SbD such an effective strategy is the commitment to educating developers. By establishing programs that focus on improving the knowledge, expertise and skills of developers, we’re arming ourselves with arguably one of the best lines of defense: adept professionals who are highly capable of working with security teams to reduce the risk of flaws in software. By ensuring that we have developers who can create secure code from the start, and who can identify whether code generated by AI is secure, we’re making significant strides as an industry.

Image credit: serezniy/depositphotos.com
