AI emerges as a cybersecurity teammate


On its own, artificial intelligence isn’t a solution to cybersecurity issues, but new data from Hack The Box, a platform for building attack-ready teams and organizations, reveals that cybersecurity teams are increasingly beginning to adopt AI as a copilot for solving security challenges.
Based on real-world performance data from over 4,000 global participants in Hack The Box’s Global Cyber Skills Benchmark, a large-scale capture-the-flag competition, the report highlights how cyber teams are starting to use AI as a teammate alongside their security staff.
Just six percent of CISOs have AI protection in place


While 79 percent of organizations are already using AI in production environments, only six percent have implemented a comprehensive, AI-native security strategy.
This is among the findings in the new AI Security Benchmark Report from SandboxAQ, based on a survey of more than 100 senior security leaders across the US and EU, which looks at concerns about the risks AI introduces, from model manipulation and data leakage to adversarial attacks and the misuse of non-human identities.
Consumers are putting more trust in AI searches


A new survey of over 2,000 consumers across the US, UK, France and Germany looks at how people are adopting, and trusting, AI tools to discover, evaluate, and choose brands.
The study from Yext finds that 62 percent of consumers now trust AI to guide their brand decisions, putting it on par with traditional search methods used during key decision moments. However, 57 percent still prefer traditional search engines when researching personal, medical or financial topics.
New AI-driven features set to help security remediation efforts


Security teams today are overwhelmed by fragmented data, inconsistent tagging, and the manual burden of translating findings into fixes.
A new release of the Seemplicity platform introduces three new capabilities: AI Insights, Detailed Remediation Steps, and Smart Tagging and Scoping, which use AI to tackle some of the most painful and time-consuming cybersecurity tasks.
Organizations embrace AI but lack proper governance over development


According to new research, 93 percent of firms in the UK today use AI in some capacity, but most lack the frameworks to manage its risks and don’t integrate AI governance into their software development processes.
The study from Trustmarque shows only seven percent have fully embedded governance frameworks to manage AI risks. In addition, a mere four percent consider their technology infrastructure fully AI-ready, and just eight percent have integrated AI governance into their software development lifecycle.
The impact of AI -- how to maximize value and minimize risk [Q&A]


Tech stacks and software landscapes are becoming ever more complex and are only made more so by the arrival of AI.
We spoke to David Gardiner, executive vice president and general manager at Tricentis, to discuss how AI is changing roles in development and testing, as well as how companies can maximize the value of AI while mitigating the many risks.
Application layer comes under threat


A new report from Contrast Security exposes a growing crisis at the application layer as adversaries use AI to easily launch previously sophisticated attacks at scale.
Recent reports from Verizon (DBIR 2025) and Google Mandiant (M-Trends 2025) confirm what many security leaders already suspect: components of the application layer are among the most targeted and least protected parts of the modern enterprise.
Financial firms keen to use AI but their data isn't ready


A new study into AI readiness shows that while financial services firms are eager to adopt AI, they still have work to do on improving data quality and modernizing systems.
The study from Indicum finds many financial services firms are hindered by legacy data systems and outdated IT infrastructure, which often lack the real-time processing and data quality capabilities required for effective AI deployment.
Differing levels of access to AI create new inequalities


A new survey of 4,000 knowledge workers across the UK, US, Germany, and Canada reveals that higher earners have disproportionate access to the latest AI tools and training, allowing them to reap AI's promised rewards.
In contrast, the study from The Adaptavist Group reveals that lower earners and women are being shut out from AI opportunities, which impacts their skill development, job satisfaction, and time savings, both personally and professionally.
What has AI done for us? Celebrating AI Appreciation Day


In the last few years artificial intelligence has found its way into more and more areas of our world and its progress shows no signs of slowing down.
Of course, most things these days need a day to mark their achievements, and today is AI Appreciation Day. So, what has AI done for us and what can we expect from it in the future? Some industry experts gave us their views.
Google launches new AI security initiatives


Ahead of the summer’s round of cybersecurity conferences, Google is announcing a range of new initiatives aimed at bolstering cyber defenses with the use of AI.
Last year the company launched Big Sleep, an AI agent developed by Google DeepMind and Google Project Zero that actively searches for and finds unknown security vulnerabilities in software.
93 percent of software execs plan to introduce custom AI agents


New research from OutSystems shows an increasing trend in agentic AI prioritization among software executives with 93 percent of organizations already developing -- or planning to develop -- their own custom AI agents.
IT leaders are under pressure to deliver measurable business value while managing constrained resources and aligning technology investments with long-term strategic goals. Introducing agentic AI helps address these demands by tackling challenges like fragmented tools and the limited ability to leverage data siloed across the organization.
International collaboration aims to combat deepfakes and AI misuse


There’s increasing concern about the use of deepfakes and artificial intelligence to spread misinformation and contribute to fraudulent activity.
Today at the AI for Good Global Summit in Geneva, the AI and Multimedia Authenticity Standards Collaboration (AMAS), a global, multistakeholder initiative led by the World Standards Cooperation, launched two flagship papers offering recommendations to guide the governance of AI globally and combat mis- and disinformation.
AI-generated deepfakes used to drive attacks


As generative AI tools have become more powerful, affordable and accessible, cybercriminals are increasingly adopting them to support attacks ranging from business fraud to extortion and identity theft.
A new report from Trend Micro shows that deepfakes are no longer just hype but are being used in real-world exploitation, undermining digital trust, exposing companies to new risks, and boosting the business models of cybercriminals.
Enterprise tech executives cool on the value of AI


Although enterprise AI investment continues to accelerate, executive confidence in the strategies guiding this transformation is falling, according to a new report.
The research from Akkodis, looking at the views of 500 global Chief Technology Officers (CTOs) among a wider group of 2,000 executives, finds that overall C-suite confidence in AI strategy dropped from 69 percent in 2024 to just 58 percent in 2025. The sharpest declines are reported by CTOs and CEOs, down 20 and 33 percentage points respectively.