Why latency is the elephant in the room for the cloud [Q&A]
Yes, the cloud seems to be crushing it, but according to Ellen Rubin, CEO and co-founder of ClearSky Data, as many as 50 percent of cloud customers have brought workloads back on-premises due to latency and performance issues in production applications. For dispersed workforces this is a pressing issue, and one that threatens the cloud's forward momentum.
I spoke with Ellen about the problem, how latency affects businesses, and what CIOs can do to address the challenge.
BN: What is the difference between storage and network latency?
ER: Storage latency concerns the response time of physical storage media, such as disk drives, solid-state drives (SSDs), non-volatile memory express (NVMe) devices and more. Network latency is a measure of the response time of the networks used to reach that storage. As storage latencies have gone down, the network latencies associated with reaching cloud-scale data centers and remote cloud storage have become more significant. When enterprises work with high-performance business applications or attempt to increase their public cloud use, it’s often network latency issues that create roadblocks and hold those companies back.
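To make the distinction concrete, here is a minimal Python sketch (my own illustration, not anything from ClearSky Data) that measures both: local storage latency as the time to read one block from disk, and network latency as the time to complete one TCP handshake with an endpoint of your choosing. The host and port are placeholders you would point at your own storage target; real benchmarking tools are far more rigorous.

```python
import socket
import statistics
import time

def storage_read_latency_us(path, block=4096, trials=50):
    """Median time to open and read one block from local storage,
    in microseconds -- a rough proxy for storage latency."""
    samples = []
    for _ in range(trials):
        start = time.perf_counter()
        with open(path, "rb") as f:
            f.read(block)
        samples.append((time.perf_counter() - start) * 1_000_000)
    return statistics.median(samples)

def network_round_trip_ms(host, port, timeout=3.0):
    """Time one TCP connection handshake in milliseconds -- a rough
    proxy for network latency to a remote storage endpoint."""
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        pass  # we only care about how long the handshake took
    return (time.perf_counter() - start) * 1000.0
```

On a laptop the local read typically lands in the tens of microseconds, while a handshake with a cloud region on the other side of a continent costs tens of milliseconds, which is the gap Rubin is describing: the network, not the media, dominates.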
BN: How does network latency affect your business?
ER: Network latency can hamstring cloud deployments, which are a top priority for many CIOs. While the public cloud market is predicted by Forrester to reach $160 billion by 2020, and the cloud offers vast economic benefits for enterprise IT, IDC reports that up to 50 percent of cloud customers have brought workloads back on premises due to network latency and performance issues.
This level of interference simply shouldn’t be acceptable to enterprise storage and cloud users. Just as a Netflix customer wouldn’t agree to wait hours to stream a movie, enterprises running apps using the public cloud should be free of lags and delays that can slow business and cause missed opportunities. Gartner recently released a manifesto for solving this problem by bringing data, content, resources and compute to the edge of networks -- using a content delivery model to augment the traditional data center and create an optimized user experience.
BN: How do CIOs today address the network latency challenge? Or do they?
ER: CIOs have long tried to resolve network latency issues. Some companies have chased physical solutions, which involve running fiber-optic cables directly between locations where data will transfer, in order to establish a direct, private line. This option was detailed in Michael Lewis’ "Flash Boys" -- a financial services exchange built an 827-mile cable, which traveled from Chicago to New Jersey and passed through mountains and rivers, in order to shave milliseconds off its network latency.
Although the “Flash Boys” example is an extreme case, the exchange isn’t the only company to go to great lengths in search of a network latency workaround. Some organizations beef up network infrastructure, rebuild applications for the cloud, or leverage cloud exchanges and network hubs in colocation sites to route traffic more efficiently. However, such options still require significant monetary commitments, and each comes with its own set of limitations.
BN: Why haven’t storage pros been able to resolve the network latency issues that occur when they move large amounts of data to the cloud?
ER: Storage pros are beginning to solve cloud-related network latency issues, as a viable solution has come into view -- in a sense, applying the Netflix model to enterprise storage. Content delivery networks (CDNs), pioneered by companies like Akamai, use metro-based connectivity points to overcome network latency. Such networks host data in close physical proximity to the customers using it -- for example, if you live in California, your Netflix data is likely hosted in a nearby metro area, rather than one in Virginia. As a result, you’re able to stream media instantly.
Gartner’s recent “edge manifesto” lays the framework for using the CDN model to improve data center services and cut network latency. Similar to the way CDNs expanded the landscape for content delivery without replacing the public internet, it’s time for specific enterprise functions and services to move to the edge, evolve with the digital business climate and simplify companies’ paths to the cloud.
BN: What type of applications demand the lowest network latency?
ER: Any business application that demands fast, secure and reliable data access -- such as those working with machine data analytics, security analytics or operational analytics -- will need low latency in order to be successful. Meanwhile, business applications that don’t require working with hot data (or primary workloads), such as archiving, backup and disaster recovery, can function without the strict high-performance and low-latency requirements of other functions.
BN: What ROI can enterprises achieve with lower latency?
ER: As with many IT functions, the ROI of reduced network latency is most clearly measured against a specific use case. For example, many enterprise users struggle with machine data -- the fastest-growing segment of data in the enterprise, projected by IDC to make up 42 percent of all data by 2020. Companies that analyze this data, using applications like Splunk, recognize it holds a gold mine of opportunities and insights. But many can’t scale the low-latency, high-performance storage infrastructure required to match the rate of machine data growth and allow these applications to perform well. As a result, up to 50 percent of Splunk project costs are sunk into storage infrastructure. With a dedicated, low-latency network, those storage costs can be redirected to actual Splunk analytics rather than the infrastructure required to support them.
In some cases, the price of network latency depends on the industry -- for that financial services exchange detailed in “Flash Boys,” reducing latency from 17 to 13 milliseconds was enough to give the company’s trades a competitive edge. However, for every organization in every industry, reduced network latency can open doors to new cloud projects and flexibility, which can help companies cut operational and infrastructure costs across the board.