Cyber experts warn AI will accelerate attacks and overwhelm defenders in 2026

AI causing a data breach

Cybersecurity experts are offering warnings about the year ahead, and while they come from different backgrounds and companies, their predictions all point to AI changing the nature of attacks, speeding up intrusions and forcing defenders to rethink how they work.

Most analysts expect 2026 to be the first time that AI-driven incidents outpace what the majority of teams can respond to manually, and some see the biggest changes coming from attackers who use fully autonomous systems.

Loris Degioanni, CTO and founder of Sysdig, expects AI to become central to both sides of the fight: “For defenders, we’ll see end-to-end, agentic AI systems become standard for tasks like vulnerability management. We’ve already seen what’s possible: in the DARPA AI Cyber Challenge, an autonomous system uncovered 18 zero-day vulnerabilities in 54 million lines of code, and patched 61 percent of vulnerabilities in an average of 45 minutes without a single human in the loop. As for adversaries, we will see a surge in zero-days and automated exploitation in 2026 as weaponizing “dark AI” becomes the default method for attackers at scale. In turn, defenders will be forced to fight machine against machine.”

AI threats at scale

Another concern is the way attackers will use AI to operate at scale. Rajeev Gupta, Co-Founder and CPO at Cowbell, warns that: “While AI is revolutionizing cyber insurance, it’s also empowering cybercriminals. The same tools used to streamline underwriting and claims are being weaponized to launch automated, scalable cyberattacks. These attacks require no human oversight and can continuously crawl, exploit, and deploy malware across systems. With funding cuts to key cybersecurity agencies like CISA, the threat landscape could worsen, putting pressure on insurers to evolve.”

AI-enhanced extortion also appears on the horizon. Derek Manky, Chief Security Strategist and Global VP Threat Intelligence at Fortinet: “GenAI will accelerate data monetization and extortion: GenAI will become more central to post-compromise operations. Once attackers gain access to large datasets (through infiltration or by purchasing access on the dark web), AI tools will analyze and correlate massive volumes of data in minutes, pinpointing the most valuable assets for extortion or resale. These capabilities will enable adversaries to identify critical data, prioritize victims, and generate tailored extortion messages at scale. By automating these steps, attackers can quickly transform stolen data into actionable intelligence, increasing efficiency and profitability.”

Others think compute theft will become commonplace. Michael Clark, Senior Director of Threat Research at Sysdig, expects criminals to pursue raw processing power as AI workloads grow: “In 2026, compute power will become the new cryptocurrency. As AI models grow hungrier for processing resources, threat actors -- especially those facing sanctions or limited access to chips -- will begin hijacking infrastructure to train their own large language models (LLMs) and run autonomous AI agents. The Sysdig Threat Research Team (TRT) first observed LLMjacking in 2024, with attackers using stolen credentials to gain access to a victim's LLMs. This trend will transform from attackers compromising access for usage to stealing compute power. Enterprises should prepare to model GPU utilization and model training activity with the same vigilance they once held when watching network traffic for cryptojacking.”

Cyber weapons arms race

Some foresee wider political and social fallout. Bryan Cunningham, President of Liberty Defense, warns that deepfakes and autonomous agents could reach a much larger scale: “Both AI and QC will be used to create far more sophisticated cyber and critical infrastructure attacks. Deep Fake audio and video-enabled social engineering attacks will be commoditized and sold to enable anyone to conduct them. It also is likely that autonomous AI agents will be used to develop and deploy new attacks at mass scale and with little or no human involvement. Of course, AI and QC will also be used by all sides to identify and defend against evolving attacks. The US 2026 mid-term elections may well be the first in which widespread deep fakes are used to try and sway votes and possibly even to incite violence and chaos.”

Ransomware is another area expected to accelerate. Biren Patel, Senior Cyber Defender at Ontinue, describes shrinking timelines: “Most ransomware families can encrypt a system within about 15 minutes. In 2026, that window will shrink even further as attackers optimize their payloads. Organizations relying on manual investigation will not be able to keep up. Automated enrichment, agentic AI support, and rapid decision-making will become mandatory to stop ransomware before it spreads.”

Scams are also evolving. Alex Quilici, CEO of YouMail, expects AI-driven voice fraud to expand rapidly: “AI supercharges voice scams (including the ones that sound like you) -- Scammers used to need big call centers to run large-scale fraud but not anymore. AI will handle it all. Generative tools will write customized texts, voice scripts, and emails, and even respond to victims in real time. That will make scams faster, cheaper, and harder to trace. We’ll move from most robocalls connecting someone to a person to most robocalls connecting someone to an AI bot, at least at first. The good news is that the same AI techniques used by bad actors can also be used to detect patterns, flag impersonation, and shut down fraud at scale (if companies are proactive).”

Fortune 500 material breach

Other experts expect new forms of breach altogether. Jason Soroko, Senior Fellow at Sectigo, believes organizations will finally face one threat they’ve been warned about for years: “2026 will mark a milestone no one wants: the first publicly acknowledged Fortune 500 material breach caused by prompt injection. Companies will deploy LLM-integrated systems without guardrails, and adversaries will discover how to coerce those models into executing harmful internal commands or leaking sensitive data.”

Defenders are also preparing for heavy use of AI on their own side. Dan Zaniewski, CTO of Auvik, sees a change in everyday operations: “The next phase of AI in network operations won’t be about replacing humans but about operationalizing AI so it provides continuous, trustworthy assistance -- instruments that automate routine tasks while surfacing context and uncertainty for humans to act on. IT teams should be thinking about instrumenting telemetry, establishing fast feedback loops, and embedding AI-aware observability so AI becomes an operational advantage rather than an experiment.”

AI written code problems

Krishna Vishnubhotla, VP of Product Strategy at Zimperium, warns that AI-written code will change development at a speed that many teams are not ready for: “In 2026, the skills gap in mobile security will widen as AI-written code becomes the norm. AI will help developers move faster, but that introduces vulnerabilities at a scale most teams aren’t ready for. Organizations that succeed will adopt AI-driven security tools to detect issues quickly, triage intelligently, and fix problems before attackers exploit them. The skills gap won’t disappear, but AI-driven security can bridge it and keep mobile apps resilient as development speed accelerates.

He added, "The most underestimated mobile risk heading into 2026 is the speed at which AI is helping teams ship insecure code. Nearly half of AI-generated code contains security flaws, and sixty-eight percent of developers now spend more time fixing vulnerabilities than building new features. AI will improve, but not fast enough to keep pace to match adoption. Expect more vulnerabilities, not fewer. Organizations that stay ahead will continuously scan code and binaries, to protecting critical assets, because speed means nothing if what you ship isn’t secure.”

Ai writing code

Dipto Chakravarty, Chief Product Officer at Black Duck, expects longstanding approaches to security to be replaced altogether: “The traditional approach to vulnerability management and security testing will be disrupted, primarily driven by the increasing adoption of AI in cybersecurity. The old software world is gone, giving way to a new set of truths defined by AI. Threat actors will leverage AI to automate and scale attacks, while defenders will use AI to enhance detection and response capabilities.”

Tim Roddy, VP of Product Marketing at Zimperium, believes AI will take over early-stage work normally handled by junior analysts: “AI agents will begin to appear as assistants to pulling info from documentation, as assistants to flag anomalies requiring investigation and as Triage agents to analyze incidents and track the attack chain and implement response that is usually done by SOC personnel, often at the first level. This will speed up incident response times and resolution from days to hours and perhaps minutes. It will also reduce the need for entry level 1 analysts which will have employment impact and limit the pipeline to advanced level 3 analysts, which will be a long-term challenge for the security industry.”

Keeping up with AI attackers

Saeed Abbasi, Senior Manager of Security Research at the Qualys Threat Research Unit, says threat hunting will rely on AI simply to keep up with attacker speed: “Proactive threat hunting isn't about finding a threat 'never seen before.' It's about hunting for the behaviors and patterns that attackers reuse. Attackers don't innovate; they iterate. They find a weak product or a complex technology and brutally exploit that entire class of software until it becomes an industry-level liability.”

Alex Quilici, CEO of YouMail, also expects AI to reshape call security: “Future call-blocking solutions will not just detect suspicious calls but actively neutralize threats in real time using predictive AI models. This will include dynamic scoring systems for phone numbers and automated and rapid takedown of impersonation campaigns.”

Morey J. Haber, Chief Security Advisor at BeyondTrust, focuses on the need for organizations to anticipate change rather than react to it: “Cybersecurity has always been a forward-looking discipline. By anticipating where technology, threat actors, and regulation are heading, we can better protect our customers and help the industry prepare for what’s next. Looking ahead allows us to adapt faster and turn insight into proactive security action. The future of cybersecurity isn’t just about defending data, it’s about anticipating how digital and physical worlds will continue to collide.”

Several experts highlight the continued importance of human judgment. Dave Gerry, CEO of Bugcrowd, warns that overconfidence in AI can make incidents harder to interpret: “AI confidence can mislead -- In 2026, AI-generated outputs will continue to present information confidently, even when incorrect. As organizations rely on AI for efficiency, reports on threats or incidents may be confidently wrong, creating noise that security teams must cut through to identify real risks. Human oversight remains critical -- The rise of AI-driven hallucinations, deepfakes, and lifelike synthetic media will make it harder for non-technical users to discern reality from AI-generated content.”

AI will bring a critical thinking renaissance

Trey Ford, Chief Strategy and Trust Officer at Bugcrowd, expects a shift in how users approach information altogether: “AI will bring a critical thinking renaissance -- Overly eager to help GenAI wants to help, to the point that hallucinations and misleading responses are almost unsurprising. As deepfakes, AI-generated videos and images, and trending fake social media content continue to flood the internet, the need for critical thinking and deductive reasoning has never been more important. In 2026, users will thoughtfully question the content coming from their AI tools and social media trends, relying on classical thought patterns to navigate the content presented.”

Crystal Morin, Senior Cybersecurity Strategist at Sysdig, expects identity issues to become even more dangerous: “Identity will remain the primary cyberattack vector in 2026, and poorly managed machine identities could be the weak link that sparks the first globally disruptive AI-driven breach. Credential theft and account compromise will hit faster and harder than ever, targeting both human and machine identities. The proliferation of machine identities, often poorly managed, will only amplify the risk.”

Across all these predictions, the message remains much the same: AI will speed everything up -- attacks, defenses and mistakes -- and organizations will have to balance automation with human scrutiny if they want to stay ahead in 2026.

What do you think of these predictions? Share your thoughts on them in the comments

Why Trust Us

At BetaNews.com, we don't just report the news: We live it. Our team of tech-savvy writers is dedicated to bringing you breaking news, in-depth analysis, and trustworthy reviews across the digital landscape.

betanews logo

We don't just report the news: We live it. Our team of tech-savvy writers is dedicated to bringing you breaking news, in-depth analysis, and trustworthy reviews across the digital landscape.

x logo facebook logo linkedin logo rss feed logo

© 1998-2025 BetaNews, Inc. All Rights Reserved.