Why the future of AI isn’t about better models -- it’s about better governance [Q&A]

The rise of generative and agentic AI is transforming how data is accessed and used, not just by humans but by non-human AI agents acting on their behalf. This shift is driving an unprecedented surge in data access demands, creating a governance challenge at a scale that traditional methods can’t handle.

If organizations can’t keep pace with the surge in access requests, innovation will stall, compliance risks will spike, and governance processes will reach a breaking point. Joe Regensburger, VP of research at Immuta, argues that the solution isn’t more powerful AI models; it’s better governance. We talked to him to learn more.

Continue reading

AI-powered attacks, zero-days, and supply chain breaches -- the top cyber threats of 2025

New analysis of recent high-profile breaches and global threat patterns reveals a cybersecurity landscape dominated by AI-enhanced attacks, organized cybercrime, and the rapid exploitation of zero-day vulnerabilities.

The research, from compliance automation platform Secureframe, shows critical infrastructure, healthcare, and financial services have become primary targets as threat actors evolve faster than traditional defenses.

Continue reading

New agentic AI platform helps teams fix cloud security problems faster

Security teams are often hampered by having to identify and fix issues while weeding out false positives. This is an area where AI can help, and Sysdig has launched a new agentic platform designed to analyze cloud environments end-to-end and uncover hidden business risk, so organizations can remediate critical threats fast and deliver measurable improvements in their security posture.

Sysdig Sage, the company’s AI cloud security analyst, understands context from across the entire business and provides clear, contextual remediation recommendations, reducing an organization’s exposure time to critical vulnerabilities.

Continue reading

Hackers weaponize GenAI to boost cyberattacks

Adversaries are weaponizing GenAI to scale operations and accelerate cyberattacks -- as well as increasingly targeting the autonomous AI agents reshaping enterprise operations. This is among the findings of CrowdStrike’s 2025 Threat Hunting Report.

The report reveals how threat actors are targeting tools used to build AI agents -- gaining access, stealing credentials, and deploying malware -- a clear sign that autonomous systems and machine identities have become a key part of the enterprise attack surface.

Continue reading

Why an adaptive learning model is the way forward in AIOps [Q&A]

Modern IT environments are massively distributed, cloud-native, and constantly shifting. But traditional monitoring and AIOps tools rely heavily on fixed rules or siloed models -- they can flag anomalies or correlate alerts, but they don’t understand why something is happening or what to do next.

We spoke to Casey Kindiger, founder and CEO of Grokstream, to discuss new solutions that blend predictive, causal, and generative AI to offer innovative self-healing capabilities to enterprises.

Continue reading

Attacks evolve too quickly for businesses to maintain truly resilient security

As organizations embrace digital transformation and AI, security teams face mounting pressure to defend an ever-expanding attack surface, according to a new report.

The research from Cobalt suggests traditional reactive security measures cannot keep pace with modern threats, particularly when adversaries leverage automation and AI to scale their attacks. Some 60 percent of respondents believe attackers are evolving too quickly for them to maintain a truly resilient security posture.

Continue reading

Autonomous DLP platform aims to fight insider threats

Security operations teams often struggle with complex tools, legacy pattern-matching DLP, manual policy tuning, and alert fatigue. This can slow investigations, increase overhead, and reduce security effectiveness.

While traditional DLP solutions aim to tackle these challenges, they require constant human intervention, generate high false positive rates, and often miss sophisticated threats that bypass simple pattern recognition. That’s why Nightfall is launching an autonomous Data Loss Prevention platform.

Continue reading

New AI approach aims to cut disruption from data interchange errors

Electronic data interchange (EDI) is the lifeblood of modern business, but even a small error -- a connection failure, data quality issue, transformation failure, or transmission problem, for example -- can rapidly cascade, generating hundreds or even thousands of issues.

This can create a domino effect: longer root cause identification, inefficiency in managing a raft of open tickets, and a prolonged time to resolution. These factors increase operational risk, leading to downstream supply chain issues that can jeopardize valuable business relationships.

Continue reading

Navigating the hidden dangers in agentic AI systems [Q&A]

According to Gartner, 33 percent of enterprise applications are expected to incorporate agentic AI by 2028, but are their security teams equipped with the latest training and technology to protect this new attack surface?

We spoke with Ante Gojsalić, CTO and co-founder at SplxAI, to uncover the hidden dangers in agentic AI systems and what enterprises can do to stay ahead of the malicious actors looking to exploit them.

Continue reading

AI emerges as a cybersecurity teammate

On its own, artificial intelligence isn’t a solution to cybersecurity issues, but new data from Hack The Box, a platform for building attack-ready teams and organizations, reveals that cybersecurity teams are increasingly adopting AI as a copilot for solving security challenges.

Based on real-world performance data from over 4,000 global participants in Hack The Box’s Global Cyber Skills Benchmark, a large-scale capture-the-flag competition, the report highlights how cyber teams are starting to use AI as a teammate to their security staff.

Continue reading

Just six percent of CISOs have AI protection in place

While 79 percent of organizations are already using AI in production environments, only six percent have implemented a comprehensive, AI-native security strategy.

This is among the findings in the new AI Security Benchmark Report from SandboxAQ, based on a survey of more than 100 senior security leaders across the US and EU, which looks at concerns about the risks AI introduces, from model manipulation and data leakage to adversarial attacks and the misuse of non-human identities.

Continue reading

Consumers are putting more trust in AI searches

A new survey of over 2,000 consumers across the US, UK, France, and Germany looks at how people are adopting, and trusting, AI tools to discover, evaluate, and choose brands.

The study from Yext finds that 62 percent of consumers now trust AI to guide their brand decisions, putting it on par with traditional search methods used during key decision moments. However, 57 percent still prefer traditional search engines when researching personal, medical or financial topics.

Continue reading

New AI-driven features set to help security remediation efforts

Security teams today are overwhelmed by fragmented data, inconsistent tagging, and the manual burden of translating findings into fixes.

A new release of the Seemplicity platform introduces three new capabilities -- AI Insights, Detailed Remediation Steps, and Smart Tagging and Scoping -- that use AI to tackle some of the most painful and time-consuming cybersecurity tasks.

Continue reading

Organizations embrace AI but lack proper governance over development

According to new research, 93 percent of firms in the UK today use AI in some capacity, but most lack the frameworks to manage its risks and don’t integrate AI governance into their software development processes.

The study from Trustmarque shows only seven percent have fully embedded governance frameworks to manage AI risks. In addition, a mere four percent consider their technology infrastructure fully AI-ready, and just eight percent have integrated AI governance into their software development lifecycle.

Continue reading

The impact of AI -- how to maximize value and minimize risk [Q&A]

Tech stacks and software landscapes are becoming ever more complex, and the arrival of AI only adds to that complexity.

We spoke to David Gardiner, executive vice president and general manager at Tricentis, to discuss how AI is changing roles in development and testing, as well as how companies can maximize the value of AI while mitigating the many risks.

Continue reading
