Is latency the cloud's Achilles heel? [Q&A]


The cloud is arguably the most transformative enterprise technology of the past two decades. Yet, as powerful as it is, it faces a serious latency problem -- and the big public cloud providers know it.

Their data centers can't be everywhere, close to every end user, so the long distances between those facilities and the people and devices using them create unavoidable latency.

On its own, the public cloud can't power smart cities, autonomous vehicles, primary storage or any other application that requires predictable, high-speed responses to large data sets spread across many locations. A self-driving car can't wait seconds for the data it sends to the cloud to be processed and a response returned. That's why compute and storage resources are increasingly being moved to the edge, delivering the faster response times these next-generation data demands require. And as edge computing matures, technologists are poised to finally unlock the full potential of the cloud.

We spoke with Ellen Rubin, CEO and co-founder of ClearSky Data, provider of an on-demand primary storage service that includes backup and disaster recovery, to find out more about the rise of the edge and what it means for the cloud.

BN: Is latency hindering cloud adoption?

ER: It has certainly prevented the enterprise from using the cloud for applications that require strict performance levels. Latency is the Achilles heel of cloud adoption. It's unavoidable. Large providers such as AWS and Azure have giant data centers outside of metro areas, making it impossible to be near all users and all data being generated. For storing massive volumes of archival data at low cost, these economies of scale are ideal. But for today's business and IoT applications, the resulting lag is simply unacceptable.

When transmitting data over distances greater than about 120 miles, the limitations imposed by physics mean you're going to see latency greater than 10 milliseconds. That's the point at which people start to notice slow performance, and there are no pipes big enough to overcome the speed of light.
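To put a rough number on that physical floor, here is a back-of-the-envelope sketch of fiber propagation delay. The ~2/3-of-c fiber speed and the path-inflation factor are illustrative assumptions, not figures from the interview, and real-world latency adds switching, queueing and last-mile delays on top of this minimum.

```python
# Back-of-the-envelope propagation-delay estimate (illustrative assumptions only).
# Real network latency is higher once routing, switching and queueing are included.

SPEED_OF_LIGHT_MPS = 186_282          # miles per second, in a vacuum
FIBER_FACTOR = 2 / 3                  # light in fiber travels at roughly 2/3 c
PATH_INFLATION = 2.0                  # assumed: real routes are far from straight lines

def round_trip_ms(distance_miles: float) -> float:
    """Estimate round-trip propagation delay in milliseconds."""
    fiber_speed = SPEED_OF_LIGHT_MPS * FIBER_FACTOR            # ~124,000 miles/s
    one_way_seconds = (distance_miles * PATH_INFLATION) / fiber_speed
    return 2 * one_way_seconds * 1_000

for miles in (120, 500, 1_000):
    print(f"{miles:>5} miles: ~{round_trip_ms(miles):.1f} ms round trip (propagation only)")
```

Even under these generous assumptions, the delay grows linearly with distance and no amount of bandwidth reduces it, which is the point: past a certain radius, the only fix is to move the data closer.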

BN: What's the best way to eliminate this inherent latency in the cloud?

ER: Edge computing presents an excellent opportunity to bridge this on-prem-to-cloud gap. It is the latency killer because it keeps data close to the end user. The big cloud providers also understand this, especially when it comes to IoT. AWS and Azure both offer IoT services that run at the edge but connect back to the cloud. AWS Outposts goes a step further, delivering fully managed racks of AWS-designed compute and storage hardware that can run either native AWS services or VMware Cloud on AWS, so customers can run AWS infrastructure on-premises and connect back to its cloud. Finally, Google Cloud recently announced Stadia, an on-demand game-streaming platform that will rely on roughly 7,500 edge nodes around the globe to keep data and processing as close to players as possible.

These providers clearly know the cloud needs the edge for performance. But the edge also needs the cloud. IoT, enterprise storage, connected cars -- these use cases produce enormous amounts of valuable data, and it can't all be stored forever at the edge. The cloud is massively redundant, effectively unlimited in scale and accessible from anywhere. It's the perfect place to store, analyze and work with large amounts of data, so long as latency doesn't pose an issue.
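As a minimal sketch of that division of labor, assuming a caching-based design (the class name and the fetch_from_cloud callback below are hypothetical, not ClearSky Data's actual service), a small hot working set can live at the edge node while the full data set stays in a cloud object store:

```python
from collections import OrderedDict
from typing import Callable

class EdgeCache:
    """Hypothetical edge/cloud split: hot data is served from the edge with
    low, predictable latency; everything else is fetched from the cloud."""

    def __init__(self, fetch_from_cloud: Callable[[str], bytes], capacity: int = 1024):
        self._fetch_from_cloud = fetch_from_cloud   # e.g. a wrapper around a cloud GET
        self._capacity = capacity
        self._hot: OrderedDict[str, bytes] = OrderedDict()

    def read(self, key: str) -> bytes:
        if key in self._hot:                        # edge hit: no WAN round trip
            self._hot.move_to_end(key)
            return self._hot[key]
        data = self._fetch_from_cloud(key)          # edge miss: pay the cloud latency once
        self._hot[key] = data
        if len(self._hot) > self._capacity:
            self._hot.popitem(last=False)           # evict the least recently used item
        return data
```

In a design like this, writes follow the same pattern in reverse: they land at the edge for a fast acknowledgment and are then flushed to the cloud for durability and scale.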

BN: We've talked mostly about the big public clouds, but there are many more cloud providers out there. How are they addressing the cloud latency issue? And what about the metro cloud?

ER: That's a great question because IoT has sparked a reawakening in the metro cloud.

There's a huge need for advanced cloud services at the edge within large cities, and the traditional public cloud players fall short there. The rush to fill that gap is driving the rapid build-out of micro clouds at the edge, as well as major expansions by data center, colocation and connectivity providers that already have facilities inside large urban areas.

Equinix is a good example of how longtime colocation providers are taking advantage of this growing opportunity. They've tightly interconnected their facilities around the world, both to one another and to the public clouds, so they're perfectly positioned to provide edge services.

MetroEdge, another edge-based cloud services company we recently partnered with, is a good example of a micro-cloud provider. They are building small, state-of-the-art data centers inside large cities, starting with a facility on the South Side of Chicago.

As IoT, connected cars and other emerging high-performance, data-hungry technologies proliferate, the importance of combining the edge with the cloud will only grow. If enterprises are to realize the benefits of the cloud, they'll need to take advantage of the edge in order to achieve the performance they require.

