Popular LLMs produce insecure code by default


A new study from Backslash Security tests seven current versions of OpenAI's GPT, Anthropic's Claude and Google's Gemini to see how different prompting techniques affect their ability to produce secure code.
Three tiers of prompting techniques, ranging from 'naive' to 'comprehensive,' were used to generate code for everyday use cases. The resulting code was then measured for resilience against 10 Common Weakness Enumeration (CWE) categories. The results show that although secure-code success rates rise with prompt sophistication, all of the LLMs tested generally produced insecure code by default.
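To make the tiers concrete, here is a minimal sketch contrasting a 'naive' prompt with a more 'comprehensive,' security-focused one, along with the kind of code each tends to elicit. The prompt wording and the choice of CWE-89 (SQL injection) as the example weakness are illustrative assumptions, not details taken from the study:

```python
import sqlite3

# Illustrative prompt tiers (assumed wording; the study's exact prompts
# are not reproduced here).
NAIVE_PROMPT = "Write a Python function that looks up a user by name in a SQLite database."
COMPREHENSIVE_PROMPT = (
    "Write a Python function that looks up a user by name in a SQLite database. "
    "The code must be secure: prevent SQL injection (CWE-89), validate inputs, "
    "and avoid other common CWE weaknesses."
)

# The kind of insecure output a naive prompt often yields: user input is
# interpolated directly into the query string, leaving it open to SQL
# injection (CWE-89).
def get_user_insecure(conn: sqlite3.Connection, name: str):
    cur = conn.execute(f"SELECT id, name FROM users WHERE name = '{name}'")
    return cur.fetchone()

# What a security-aware prompt should elicit instead: a parameterized
# query, so the driver treats `name` strictly as data, never as SQL.
def get_user_secure(conn: sqlite3.Connection, name: str):
    cur = conn.execute("SELECT id, name FROM users WHERE name = ?", (name,))
    return cur.fetchone()
```

With a malicious input such as `"' OR '1'='1"`, the insecure variant returns a row it should not, while the parameterized version simply finds no match; this is the sort of gap a CWE-based evaluation would flag.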