The rise of voice -- is typing holding developers back? [Q&A]


Voice is in the process of replacing the keyboard in ways that could shake the foundations of how developers work, collaborate, and even think.
We spoke with Natalie Rutgers, VP of product at Deepgram, to find out how voice-first development tools are gaining traction and why companies that fail to adapt risk losing their edge -- and their top talent.
Software engineers love building new features but are spending more time on other things


A new survey of 1,200 software engineers and technology leaders finds that only 33 percent of engineers strongly agree that they spend the majority of their time on work that energizes them. They spend just 16 percent of their week building features, despite 93 percent saying it's the most rewarding part of their jobs.
The study from Chainguard looks at how friction from repetitive maintenance, fragmented tools, and burnout continues to weigh heavily on the developer experience, while also revealing how AI and automation ease workloads to give software engineers more time for meaningful work.
Google builds new AI agent to improve code security


Google has released information on a new AI-powered agent that automatically improves code security by fixing critical software vulnerabilities.
CodeMender has been built over the past six months, and the company has already upstreamed 72 security fixes to open source projects, including some to projects as large as 4.5 million lines of code.
AI is an even playing field -- how secure by design can tip the scale [Q&A]


Vibe coding is currently all the rage, with more than 97 percent of respondents to a survey earlier this year reporting having used AI coding tools at work.
The adoption of these tools only continues to grow, but it comes with a catch: attackers are also employing the same techniques. We spoke to Pieter Danhieux, co-founder and CEO of Secure Code Warrior, to discuss how vibe coding is redefining the software development landscape, how malicious actors are also leveraging this technology, and the need for organizations to implement secure by design strategies from the outset.
Use of AI-powered software testing doubles in the last year


A new report shows that 60 percent of organizations use AI in the software testing process, compared to just 30 percent last year, but 80 percent lack in-house AI testing expertise.
The study from Applause, based on a global survey of more than 2,100 software development and testing professionals, finds 92 percent of organizations are finding it challenging to keep pace with rapidly changing requirements.
New AI-powered code intelligence platform speeds up modernization efforts


Today’s enterprises are often stuck with legacy code that hampers attempts at modernization, maintenance and more.
To address this, CoreStory is launching an AI-driven code intelligence platform that uncovers fundamental insights in code, helping teams accelerate software modernization efforts efficiently and with greater confidence.
Software engineers need new skills in the age of AI


AI is transforming software engineering, changing what software engineers do and the skills they need to succeed. A new survey from Uplevel, of over 100 senior engineering leaders at mid-to-large technology companies, looks at what they believe will be the most important skills for their teams.
It finds that validation of AI outputs and quality assurance (QA) is valued highest, cited by 66 percent of leaders, followed by performance monitoring and optimization (39 percent), and system architecture and integration skills (34 percent).
The challenge of syncing virtual economies across platforms [Q&A]


As people increasingly demand seamless experiences across platforms, developers face growing complexity in managing unified virtual economies.
In the games sector in particular, companies need to leverage distributed architecture, virtual currencies, and real-time inventory reconciliation to ensure consistent user experiences and secure transaction flows.
Rethinking AppSec for the AI era [Q&A]


The application security landscape has always been a complex one, and can lead to teams spending too much time hunting down vulnerabilities. With AI becoming more popular, there are even greater risks to consider.
We spoke to Yossi Pik, co-founder and CTO at Backslash Security, to discuss how AppSec needs to adapt to the greater use of AI.
More than half of developers think AI codes better than humans


A survey of 800 senior developers has 75 percent of respondents saying they expect AI to significantly transform the industry within the next five years. What's more, 53 percent say they believe large language models can already code better than most humans.
The survey, from Clutch, reveals that AI has already become a daily tool for many software teams: 49 percent of senior developers and team leads say they use AI tools every day, and another 29 percent use them most days, meaning 78 percent rely on AI regularly.
Insecure code is behind a wave of data breaches


New research reveals that insecure code is behind a shocking number of cyber breaches in the UK, with two-thirds of tech leaders admitting their organization suffered an incident in the past year.
The study from SecureFlag, of 100 UK C-suite and tech leaders, shows that despite the risks, many companies are still failing to train developers properly, leaving a gap that attackers are exploiting.
Over 80 percent of organizations knowingly ship vulnerable code


New research shows 81 percent of organizations knowingly ship vulnerable code, and 98 percent experienced a breach stemming from vulnerable code in the past year -- a sharp rise from 91 percent in 2024.
The survey from Checkmarx, of more than 1,500 CISOs, AppSec managers and developers around the world, also shows that AI‑generated code is becoming mainstream, but governance is lagging.
Popular LLMs share strengths and weaknesses when it comes to creating code


Increasing pressure to build and launch applications quickly has seen a rise in the use of AI to generate code. New analysis from Sonar, looking at the quality and security of software code produced by top Large Language Models (LLMs), finds significant strengths as well as material challenges across the tested models.
The study used a proprietary analysis framework for assessing LLM-generated code, tasking the LLMs with over 4,400 Java programming assignments. The LLMs evaluated in the study include Anthropic's Claude Sonnet 4 and 3.7, OpenAI's GPT-4o, Meta's Llama-3.2-vision:90b, and OpenCoder-8B.
Organizations embrace AI but lack proper governance over development


According to new research 93 percent of firms in the UK today use AI in some capacity, but most lack the frameworks to manage its risks and don’t integrate AI governance into their software development processes.
The study from Trustmarque shows only seven percent have fully embedded governance frameworks to manage AI risks. In addition, a mere four percent consider their technology infrastructure fully AI-ready, and just eight percent have integrated AI governance into their software development lifecycle.
93 percent of software execs plan to introduce custom AI agents


New research from OutSystems shows an increasing trend in agentic AI prioritization among software executives, with 93 percent of organizations already developing -- or planning to develop -- their own custom AI agents.
IT leaders are under pressure to deliver measurable business value while managing constrained resources and aligning technology investments with long-term strategic goals. Introducing agentic AI helps address these demands by tackling challenges like fragmented tools and the limited ability to leverage data siloed across the organization.
BetaNews, your source for breaking tech news, reviews, and in-depth reporting since 1998.