Agent vs agent, reliable interfaces and value for money -- artificial intelligence predictions for 2026


Artificial intelligence has been driving much of the technical agenda for the last couple of years and is still evolving rapidly, finding its way into more and more areas.

Here some industry experts look at what we can expect to see from the AI space in 2026.

Melissa Ruzzi, director of AI at AppOmni, thinks AI still has some way to go: “True AGI (Artificial General Intelligence) may not be achieved before the next decade, but as GenAI evolves, it may be called AGI (which would then force the market to create a new acronym for the true AGI). The big risk in AGI is similar to GenAI, where the focus on functionality clouds proper cybersecurity due diligence. By trying to make AI as powerful as it can be, organizations may misconfigure settings, leading to overpermissions and data exposure. They may also grant too much power to only one AI, creating a major single point of failure.”

“As employees increasingly use trusted internal AI agents for drafting and managing emails, attackers will use their own AI agents to jailbreak and manipulate communications and insert themselves into discussions,” says Patricia Titus, field CISO at Abnormal AI. “This agent vs. agent battle will exploit the high level of trust employees place in their seemingly secure corporate AI environment, leading to an increase in people unknowingly giving up ‘crown jewels’ like bank routing information, or granting unfettered access through credential harvesting.”

Workflows have historically been in the hands of experts in order to reduce human error. Yaz Bekkar, senior solutions architect at Barracuda, says, “Those workflows nowadays are disappearing and being replaced simply by artificial intelligence, and artificial intelligence cannot question itself unless you have multiple modules of AI. If you use the same AI to question itself, it will always say, ‘Yeah, I am right’. But if you put an AI module and on top of it another AI that isn’t totally segmented, and unfortunately is using the same AI component to review itself, then very often the result is equally false and, unfortunately, is being implemented in organizations.”
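To picture what Bekkar is warning about, here is a minimal sketch of a cross-review setup in which the reviewing model is deliberately separate from the generating one. The call_model helper is hypothetical, standing in for whatever LLM client an organization actually uses; this is an illustration of the pattern, not a reference to any specific product mentioned above.

```python
# Minimal sketch of cross-model review: the reviewer is a different model
# from the generator, so it is never asked to grade its own output.
# `call_model` is a hypothetical placeholder, not a real library API.

def call_model(model_name: str, prompt: str) -> str:
    """Placeholder: send `prompt` to the named model and return its reply."""
    raise NotImplementedError("wire this up to your own LLM client")

def draft_and_review(task: str) -> dict:
    # The generator model produces the first answer.
    draft = call_model("generator-model", task)

    # A *separate* reviewer model critiques the draft. Feeding the draft back
    # to the same model tends to confirm it ("Yeah, I am right"), which is
    # the self-review failure Bekkar describes.
    critique = call_model(
        "reviewer-model",
        "Independently check this answer for errors and unstated assumptions:\n"
        + draft,
    )
    return {"draft": draft, "critique": critique}
```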

Glenn Nethercutt, chief technology officer at Genesys, says, “2026 will be the year AI stops observing and starts operating. Large Language Models (LLMs) gave machines expression with the ability to interpret, converse and contextualize. But expression without execution is reaching its limit. The next generation of systems will move with purpose, transforming understanding into action through Large Action Models (LAMs). This signals the rise of agentic intelligence -- AI that engages with the world instead of merely representing it. Powered by LAMs, customer experience will cease to operate as a chain of responses and begin to function as a forward-looking control system. The next generation of AI will change state before friction emerges: routing, advising, escalating, and intervening based on intent signals, micro-context, and temporal patterns instead of after-the-fact defects. Journeys will be shaped in advance rather than repaired retroactively. The competitive axis moves from ‘how well did we respond?’ to ‘how early did we act?’ and the leaders will be the organizations whose intelligence anticipates reality faster than it arrives.”

Andrew Hillier, CTO and co-founder of Densify, says:

In 2026, we can expect a significant shift in how companies approach AI infrastructure. The models have become smart enough that most organizations won't need to train their own -- they can leverage off-the-shelf models and point them at their data. This means workloads will be increasingly concentrated on inference, which changes the dynamics of GPU availability, efficiency and performance.

The training versus inference split will restructure infrastructure requirements. Training is less response sensitive -- you can run it tomorrow if resources aren't available. But inference needs to be online and responsive because users expect immediate answers. As a result, companies will prioritize high availability and low latency for their inference workloads.

Reliable interfaces will be essential, says Sonder Ager-Wick, director of user experience at Qt Group. “The last few years might have centered around the latest and greatest novelty developments, but in 2026 the pendulum will swing firmly back to the basics -- trust and reliability. We’ll start to see AI that can interpret user intent, regardless of modality, understand context and pull information with ease. But, unless it’s a smooth and natural experience that the user feels in control of, it could get stuck at launch.”

Octavian Tanase, chief product officer at Hitachi Vantara, predicts that, “Agentic AI will see a tremendous surge in enterprise adoption in 2026. People will be able to create autonomous modules seamlessly embedded in business workflows that can make decisions on their behalf. While enterprises have already begun to experiment with generative AI, these agentic systems will go further by enabling self-directed automation and decision-making at scale, making real impact on businesses while underscoring the need for governance, security and trust in AI systems.”

Teams will go and find their own AI tools, believes Jon Abbott, co-founder and CEO of ThreatAware:

AI is going to be a major problem in 2026, but not in the way most people think. If you don't provide your team with an enterprise version that keeps data safe, they'll use personal equipment and public AI services anyway. You've essentially banned a tool they need to do their jobs effectively.

Your team deserve access to AI; it's an incredible productivity tool. But they need enterprise-grade infrastructure that protects your data whilst removing their desire to work around your security policies. This is about enabling productivity whilst maintaining security, not choosing between them. IBM's research shows that AI-associated data breaches cost organisations more than $650,000 per breach, with the financial implications often stemming from shadow AI exposure.

Ryan Manning, chief product officer at BMC Helix, thinks it will be important that AI is seen to pay its way. “We are officially exiting the experimental phase of AI and entering an ‘era of extreme accountability’ for tech investments. By the end of 2026, the single most critical metric for C-level executives won't be innovation or speed, but operating costs. With millions already approved for AI projects, investors and CFOs will demand a freeze on additional spending, forcing IT leaders to justify their budgets solely by how well they maximise existing resources. The question from the board is no longer ‘What can AI do?’ but ‘How will this deliver ROI?’”

This is echoed by Jaimie Tilbrook, CPO of EncompaaS: “A major disruption we’ll see in 2026 is that companies will decide to move away from the hype of instant AI transformation and toward guaranteed incremental gains. Instead of trying to deploy a large-scale AI implementation with a high degree of uncertainty of success, companies will invest in smaller AI projects with a 100 percent success rate.”

Uzi Dvir, global chief information officer at WalkMe, says, “In 2026, enterprise AI strategy will shift from ‘build’ to ‘buy’. Enterprises are realising that modernising IT is no longer about what you want to do, but how you do it. Many are finding that building AI was the best move to keep up with the rapidly developing technology, but as AI matures and commoditizes, buying proven solutions is simply more efficient than creating them in-house -- especially with more boards pushing for measurable ROI. CIOs will increasingly choose platforms that deploy quickly, integrate cleanly, and help deliver consistent value across the business.”

And of course AI remains a double-edged sword. Tim Burke, CEO of Quest Technology Management, says, “2026 will mark a turning point where AI is fully weaponized by attackers. Mid-market firms need to anticipate automated phishing, adaptive malware, and deepfake-enabled fraud. The key is not just detection but embedding AI-aware threat modeling into every layer of IT operations. Assume you will be compromised and ask what you will have in place to continue operating.”

Though Mike Rinehart, VP of AI at Securiti AI, highlights its role in defense too: “In 2026, one of the most important shifts in cybersecurity won’t be the new attack techniques, but how AI will enable teams to build and test defenses without access to real customer data -- a long-standing limitation that AI is finally helping to overcome. Security teams have always worked at a disadvantage because the data they need to train and test systems is the data they can’t access. What’s changing is that newer AI models can make sense of unfamiliar enterprise data without having been trained on it directly. That’s going to matter far more than chasing the next unpromising headline about AGI.”

How do you see the AI landscape in 2026? Let us know in the comments.

Image credit: Wrightstudio/Dreamstime.com
