Why and how organizations are modernizing their container deployments at the edge [Q&A]
Edge computing is aimed at bringing computing power closer to the source of data, such as IoT devices. It's increasingly being seen as an alternative to traditional data center and cloud models.
Stewart McGrath is the CEO at Section, a global Edge as a Service (EaaS) provider that helps organizations improve availability of their containerized application workloads in the cloud. We spoke to him to find out how and why companies are moving applications out to the edge.
BN: As organizations mature their containerized applications, should they also be revisiting their cloud strategy? Why does it matter and what are the options?
SM: People tend to think of the cloud as this diffuse global thing, but when it comes to deployment it's really not. In fact, most cloud instances are highly centralized and geographically limited -- that's the 'traditional' cloud model. When it comes time to deploy on AWS or Azure or GCP, you have to make very specific decisions about where your application workload runs. If you choose AWS EC2 US-East, for example, customers within or close to that region will enjoy a premium experience. As customers get geographically farther away, application performance, responsiveness, resilience and availability will naturally degrade.

Let's consider a simple example: deploying an application to the cloud with a user base that is equally distributed across the US. Where do you host the application? On the east coast, knowing that performance for the 50 percent of your user base located outside the east coast will suffer? Somewhere in the middle of the country, so that the experience for everyone is equally sub-par?
As your application and customer base mature, it's natural to look at moving those workloads geographically closer to users. That's what distributed (i.e., edge) computing accomplishes.
BN: What are the pros and cons of a distributed edge versus a more traditional centralized cloud? Does the breadth of distribution matter?
SM: Well, I mentioned performance and responsiveness as benefits of edge versus centralized cloud, but the list is much longer than that. Edge deployment beats centralized cloud deployment on almost every important metric -- security, performance, scale, efficiency, resilience, usability, workload isolation, compliance… even cost. And yes, those benefits are often inextricably linked to the distributed nature of an edge deployment. So, generally, the more broadly distributed an application, the better.
The natural question of course is, "If the edge is so much better then why doesn't everyone do it?" And the answer to that is complexity. In general, it's more challenging and costly to design, deploy and manage an edge network.
BN: Does use of Kubernetes and containers lend itself to one or the other?
SM: Containers in general, and Kubernetes in particular, certainly help with cloud deployment, but they lend themselves even more naturally to the edge. The lightweight portability of containers makes them ideally suited to distribution, while their abstraction facilitates deployment across diverse geographies, providers and infrastructure. Kubernetes then adds the automated orchestration needed to coordinate that sort of distributed, multi-region, multi-cluster topology -- the two are an ideal coupling for delivering an edge platform. In simple terms, organizations that have already adopted containerized Kubernetes environments are poised to move readily and rapidly to the edge.
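To make that concrete, here is a minimal sketch of the multi-cluster pattern using the official Kubernetes Python client. The context names, workload name and image are hypothetical, and a real edge deployment would layer placement and traffic policy on top; this only shows the same portable containerized workload being pushed to several regional clusters.

```python
from kubernetes import client, config

# Hypothetical kubeconfig contexts, one per regional edge cluster.
CONTEXTS = ["edge-us-east", "edge-eu-west", "edge-ap-south"]

def make_deployment() -> client.V1Deployment:
    # A small, portable containerized workload (nginx as a stand-in image).
    return client.V1Deployment(
        metadata=client.V1ObjectMeta(name="hello-edge"),
        spec=client.V1DeploymentSpec(
            replicas=2,
            selector=client.V1LabelSelector(match_labels={"app": "hello-edge"}),
            template=client.V1PodTemplateSpec(
                metadata=client.V1ObjectMeta(labels={"app": "hello-edge"}),
                spec=client.V1PodSpec(containers=[
                    client.V1Container(name="web", image="nginx:1.25"),
                ]),
            ),
        ),
    )

# The same workload is applied to every cluster; Kubernetes handles
# scheduling within each one.
for ctx in CONTEXTS:
    api = client.AppsV1Api(api_client=config.new_client_from_config(context=ctx))
    api.create_namespaced_deployment(namespace="default", body=make_deployment())
```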
BN: If distributed edge deployments are more complex, are they worth it? And can that complexity be reduced?
SM: Two factors are in play. The first involves the tools and processes used to run the application workload itself in a multi-cluster Kubernetes edge deployment. The second is the management and operation of the underlying edge network, which gets progressively more complicated and cost-sensitive as distribution (across regions and networks/operators) increases.

If I can use the current, familiar tools and processes I'm used to, and if the underlying platform automates the network management, then all of a sudden the edge becomes as easy to adopt as a more traditional cloud environment.
I'd also argue the benefits of the edge are enough to outweigh any concerns over complexity, but that's like arguing 'no pain, no gain' -- it can be true, but most people don't welcome pain. Fortunately, as you mention, that complexity can be eliminated by smart use of technology.
BN: What do organizations need to consider when moving containers to the edge?
SM: While a typical centralized cloud supports either single or multiple container clusters, edge deployment requires a multi-cluster topology. Our Kubernetes Edge Interface simplifies that by effectively turning the edge into a 'cluster of clusters', so organizations can use standard tools and processes -- things like kubectl and Helm charts -- to manage those distributed clusters. Even so, you need to be aware that you'll be running a multi-cluster environment.
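The interview doesn't detail how the Kubernetes Edge Interface works internally, but the 'cluster of clusters' idea itself is easy to illustrate. In the sketch below, we assume an edge platform exposes a single kubeconfig context (hypothetically named 'edge') representing the whole network; the per-cluster loop from the earlier example then collapses into one standard apply.

```python
from kubernetes import client, config, utils

# Hypothetical: a single kubeconfig context ("edge") that an edge platform
# exposes to present its whole distributed network as one logical cluster.
edge = config.new_client_from_config(context="edge")

# One standard apply of an ordinary manifest file; fanning the workload
# out across locations is the platform's job, not ours.
utils.create_from_yaml(edge, "deployment.yaml", namespace="default")
```

This same property is what lets existing kubectl workflows and Helm charts keep working unchanged: they target one API endpoint, and the distribution happens behind it.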
The other main consideration is: who's responsible for managing the network? If you've got a team of engineers that's comfortable managing a distributed network, then have at it. If not -- or if you'd rather your team was focused on managing the application, not the network -- then you'll want to look for an intelligent, automated edge platform.
Beyond the human effort, you'll also want to factor cost/performance efficiency into the equation. Advancements in edge platform technologies are enabling a more computational approach to edge location selection, workload orchestration and traffic management. So, rather than relying on humans to select the most appropriate locations for their applications, teams can rely on 'intelligent' machines to make those decisions.
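As a toy illustration of that computational approach (not any particular vendor's algorithm), the sketch below scores made-up candidate locations by the latency improvement their users would see, weighted by traffic share, minus a cost penalty, then keeps the best few under a budget. Every name and number here is hypothetical.

```python
from dataclasses import dataclass

BASELINE_MS = 120.0  # hypothetical latency users see from a centralized origin

@dataclass
class Location:
    name: str
    p95_latency_ms: float   # latency nearby users would see from this location
    hourly_cost_usd: float  # cost of running the workload here
    demand_share: float     # fraction of total traffic served from here

def benefit(loc: Location, cost_weight: float = 10.0) -> float:
    # Higher is better: latency saved for the users this location serves,
    # weighted by their share of traffic, minus a cost penalty.
    latency_saved = BASELINE_MS - loc.p95_latency_ms
    return loc.demand_share * latency_saved - cost_weight * loc.hourly_cost_usd

# Made-up candidates; a real platform would derive these from measurements.
candidates = [
    Location("us-east", 38.0, 0.40, 0.5),
    Location("us-west", 41.0, 0.42, 0.3),
    Location("eu-west", 55.0, 0.45, 0.2),
]

budget = 2  # how many locations we can afford to run concurrently
chosen = sorted(candidates, key=benefit, reverse=True)[:budget]
print([loc.name for loc in chosen])  # e.g. ['us-east', 'us-west']
```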
At a fundamental level, moving to the edge is a no-brainer for most organizations, particularly those that have already adopted Kubernetes -- they're primed to rapidly adopt modern edge deployments for their application workloads. Even those that are still at the single-cluster stage are well positioned to leap-frog to the distributed edge.
In fact, recent research by SlashData on behalf of the Cloud Native Computing Foundation confirms this correlation between edge, containers and Kubernetes: developers working on edge computing showed the highest usage of both containers (76 percent) and Kubernetes (63 percent) of any segment surveyed. In other words, the organizations furthest along with containers are the same ones moving to the edge. If your organization isn't, it's falling behind.
Image credit: ra2studio/depositphotos.com