How GenAI adoption introduces network and security challenges [Q&A]

Enterprises are increasingly using GenAI to transform their organizations. As they move ahead, they're evaluating their preparedness at the business, safety, skills, and product level. But there's another key factor on the back end that's often overlooked: the network.

Full GenAI adoption introduces significant new challenges and demands on the network, such as bandwidth strain and unique security vulnerabilities. If these demands aren't accommodated, organizations won't realize the benefits of GenAI.

We spoke to Ken Rutsky, CMO of Aryaka, to learn more about these challenges and how enterprises can overcome them.

BN: How has GenAI adoption evolved over the past couple years?

KR: In the beginning, enterprises were using GenAI in more standalone ways. An organization might train a model, and then use that trained model to generate content, answer business questions, provide industry insights, and the like. But in this work, the level of both data and process integration was low. We can refer to this as the 'Explore' phase of adoption.

The next phase of GenAI was RAG (Retrieval-Augmented Generation). Acronyms and fancy labels aside, this is fundamentally the step of integrating your enterprise data and external knowledge resources into the training, processing, and output of an LLM. By enhancing models with both historical and real-time data, we can drive new insights that are unique to our business and deliver even more value to our customers. This can be called the 'Enhance' phase of GenAI adoption.
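To make the 'Enhance' phase a little more concrete, here is a minimal, illustrative sketch of the retrieval step at the heart of RAG. Everything in it -- the sample documents, the toy term-frequency 'embedding', and the prompt template -- is an assumption for illustration rather than anything from the interview; a real deployment would use a trained embedding model, a vector index, and an actual LLM call.

```python
# Minimal, illustrative RAG retrieval sketch (Python standard library only).
# The documents, the scoring, and the prompt template are hypothetical
# stand-ins for an enterprise knowledge base, an embedding model, and an LLM.
from collections import Counter
import math

DOCUMENTS = [
    "Q3 revenue grew 12% driven by the EMEA region.",
    "The support SLA commits to a 4-hour response for priority-1 tickets.",
    "Our VPN concentrators are scheduled for replacement next quarter.",
]

def vectorize(text: str) -> Counter:
    """Toy term-frequency 'embedding'; production systems use a trained model."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-frequency vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k documents most similar to the query."""
    q = vectorize(query)
    ranked = sorted(DOCUMENTS, key=lambda d: cosine(q, vectorize(d)), reverse=True)
    return ranked[:k]

def build_prompt(query: str) -> str:
    """Assemble the augmented prompt that would be sent to the LLM."""
    context = "\n".join(f"- {doc}" for doc in retrieve(query))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

if __name__ == "__main__":
    print(build_prompt("What is our support response time for P1 tickets?"))
```

The relevant point for this discussion is that the retrieved context and the model's responses are exactly the data that now moves across the enterprise network, which is where the bandwidth and exposure concerns discussed below come in.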

Next, we start to see enterprises integrate LLM output into their business processes. For example, a customer service representative might use a well-trained LLM to shorten the time it takes to get customers answers. It's a use case we see over and over again. Whether this process integration is automated or not, the key is that we are integrating the LLM into our business workflows. I'd call this the 'Engaged' type of adoption.

Lastly, we may fully integrate a model into both our business data sources and our processes. This is the north star, because it means the LLM is not only providing us with insight but also driving process effectiveness. We can call this the 'Expand' type of adoption, as the value now expands across both the process and data dimensions.

BN: How is GenAI adoption creating networking and security challenges?

KR: Security and networking challenges only increase as we integrate these GenAI applications with our processes and data. To solve these types of challenges, organizations will need to be on top of planning, governance, infrastructure and risk.

For instance, integrating enterprise data into a GenAI system can be valuable for an organization, but it significantly increases the risk of data and knowledge leakage, because that data is now exposed to more systems and pathways. Similarly, when an LLM delivers business insights over a network, that insight has value not just to the organization, but to cybercriminals and hackers as well. Even with AI, when it comes to network security and performance there is no free lunch, and organizations must be prepared from both an infrastructure and a security perspective.

BN: Is GenAI creating network performance problems?

KR: That's a huge hurdle that a lot of businesses aren't quite prepared for. AI applications require far more resources to operate effectively compared to traditional apps. They move large quantities of data across long distances, and typically have to do it quickly to support rapid decision making. Doing this requires sufficient network bandwidth. Otherwise, AI apps will lag and stall. Alternatively, AI workloads might hog bandwidth from traditional, mission-critical apps, causing them to sputter. It’s important to remember that the two share the same underlying infrastructure.

Large language models, in particular, require massive data transfers, easily straining network capacity and hindering business delivery. Traditional networks struggle to handle this, leading to unreliable data movement and poor LLM execution.
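As a rough back-of-the-envelope illustration of why this matters (the dataset sizes, link speeds, and utilization factor below are illustrative assumptions, not figures from the interview), here is a quick estimate of how long bulk AI data transfers take over typical WAN links:

```python
# Back-of-the-envelope transfer-time estimates; dataset sizes, link speeds,
# and the utilization factor are illustrative assumptions, not measurements.
def transfer_hours(dataset_tb: float, link_gbps: float, utilization: float = 0.7) -> float:
    """Hours to move dataset_tb terabytes over a link_gbps link at the given utilization."""
    bits = dataset_tb * 8e12                        # terabytes -> bits
    seconds = bits / (link_gbps * 1e9 * utilization)
    return seconds / 3600

for size_tb in (1, 10, 50):
    for speed_gbps in (1, 10):
        hours = transfer_hours(size_tb, speed_gbps)
        print(f"{size_tb:>3} TB over {speed_gbps:>2} Gbps: ~{hours:6.1f} hours")
```

At the small end the numbers look manageable, but moving tens of terabytes over a 1 Gbps link ties that link up for the better part of a week -- the same link that the mission-critical applications mentioned above depend on.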

Many enterprises that are unprepared at the network performance level are finding themselves desperate for a solution. For example, there are stories of enterprises shipping disk drives via FedEx and UPS to their data centers across the country to keep up with AI data transfer demands. Of course, this is a cumbersome, costly approach that's impossible to sustain at scale.

BN: What's the best way organizations can overcome these networking and security challenges of AI?

KR: There are a lot of approaches and technologies at play. But to start, enterprises need a paradigm shift in how they treat networking and security architecture. Simply put, they must converge networking and security into one to take on AI. There are so many different challenges that you simply can't solve them reliably with a hodgepodge of point solutions spread across two different silos.

Unified SASE (Secure Access Service Edge) is the ultimate evolution of networking and security convergence today. It allows organizations to operate, analyze, and optimize every aspect of their networks from a single location. Rather than using a collection of disparate solutions to manage and protect the network, they get all the functionality they need -- from firewalls to secure web gateways -- in one place. This brings much-needed simplicity to a very complex challenge: supporting GenAI.

BN: How do you see these challenges developing?

KR: The journey from exploration to expansion in GenAI integration is just beginning. As organizations deepen their engagement with these technologies, we can anticipate accelerated innovation across industries. The convergence of data, processes, and AI will likely lead to unprecedented efficiencies, insights, and business opportunities. However, this evolution will also demand a focus on security and networking. With adequate planning and the right technology and governance choices, we can deliver high value GenAI applications across global networks, safely, securely and with the performance and availability that users expect and have come to rely on.

Image credit: Thawatchai Chawong/Dreamstime.com
