Organizations overconfident in dealing with cybersecurity incidents

New research from Immersive Labs reveals a widening gap between confidence and capability in cybersecurity.
While nearly every organization (94 percent) believes it can handle a major incident, the data tells a different story. According to Immersive’s analysis, average decision accuracy is just 22 percent, and the average containment time is 29 hours.
“Readiness isn’t a box to tick, it’s a skill that’s earned under pressure,” says James Hadley, founder and chief innovation officer at Immersive. “Organizations aren’t failing to practice; they’re failing to practice the right things. True resilience comes from continuously proving and improving readiness across every level of the business, so when a real crisis hits, your confidence is backed by evidence, not assumption.”
Resilience Scores have remained statistically flat since 2023, and the median response time of 17 days to complete the latest cyber threat intelligence labs hasn’t improved despite increased spending and executive oversight. Confidence is climbing. Capability isn’t.
The study also finds that 60 percent of all training still focuses on vulnerabilities more than two years old, leaving teams overprepared for yesterday’s threats. When it comes to readiness exercises, only 41 percent of organizations include non-technical roles (such as legal, HR, communications, or executives) in simulations, even though 90 percent believe cross-functional coordination is strong.
Interestingly, veteran practitioners outperform newcomers on known threats, achieving roughly 80 percent accuracy in classic incident-response labs. But when faced with AI-enabled or novel attacks, those same experts lag behind. Senior participation in AI-scenario labs dropped 14 percent year-on-year, exposing a growing adaptability gap as adversaries weaponize AI.
Of course, AI can also be used defensively. “AI is great to support the defensive team, it can really help with decision making and it can be used to analyze gaps in knowledge, skills and judgment. It can help translate and simplify, synthesize key data points that you might then need to engage with the business,” says Dan Potter, senior director of operational resilience at Immersive. “We have to be able to, as organizations, future proof the skill set of our security operations team, of our business teams, around how to use AI, the risks and the opportunities it presents. What I mean by that is, we don't necessarily need to go and upskill everyone to be an LLM engineer or prompt injection expert, but we need to understand and recognize what's the right use case for AI, what are the increased risks that we see?”
You can find the full report on the Immersive site.
Image credit: ridofranz/depositphotos.com