Trends and opportunities in enterprise storage [Q&A]


The enterprise storage market has undergone significant change in recent years. In particular, it has seen the rise of flash and the consequent decline of disk as a storage medium.

But what effect are these changes having on business? And what trends can we expect to see in the future? We spoke to Arun Agarwal, CEO of storage specialist Infinio, to find out his view of the market.


BN: How can companies extend their storage whilst making the best use of their existing infrastructure?

AA: The best way to extend the life of an array is with the addition of a server-side storage performance solution, like Infinio. Generally speaking, arrays need to be upgraded for one of two reasons: either they run out of storage performance (i.e. they can no longer keep up with applications), or they run out of storage capacity (i.e. they run out of space to store things).

What a solution like Infinio does is use low-cost, commodity resources on the server (like its CPU and RAM) to create a caching and acceleration layer that prevents storage traffic from ever reaching the storage array. While this may seem like a solution made only for performance-triggered upgrades, that's not necessarily the case.

When an array no longer has to serve as much performance, you can configure it to be much more capacity efficient. In a nutshell, server-side solutions like Infinio can help avoid storage upgrades in a variety of scenarios.
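The idea of a server-side caching layer can be sketched in a few lines. This is a hypothetical illustration of the general technique (an LRU read cache held in server RAM), not Infinio's actual design; the class and parameter names are my own.

```python
from collections import OrderedDict

class ServerSideCache:
    """Illustrative LRU read cache: hot blocks are served from server
    RAM, so repeated reads never reach the backing array. A sketch of
    the general technique only -- not any vendor's implementation."""

    def __init__(self, array, capacity_blocks):
        self.array = array          # backing store: dict of block -> data
        self.capacity = capacity_blocks
        self.cache = OrderedDict()  # block -> data, kept in LRU order
        self.hits = 0
        self.misses = 0

    def read(self, block):
        if block in self.cache:
            self.cache.move_to_end(block)   # mark as recently used
            self.hits += 1
            return self.cache[block]
        self.misses += 1                    # only misses touch the array
        data = self.array[block]
        self.cache[block] = data
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)  # evict least recently used
        return data
```

In this sketch, every hit is traffic the array never sees; the higher the hit rate, the less performance the array itself has to provide.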

BN: Flash storage is expensive. Is it possible to speed up performance without going to an all-flash system?

AA: Yes. As discussed above, there are solutions that leverage server-side resources to prevent I/O from ever reaching the storage system. It is far preferable to entirely avoid serving an I/O from a storage system than it is to serve it from expensive all-flash media. Serving it from the server-side is both faster and lower cost.

BN: What are some of the common misconceptions surrounding storage performance?

AA: I think the industry has made storage performance about drive speed. This is reflected in the numerous comparisons and marketing around HDD performance vs. SSD performance. The reality is that many factors can affect storage performance, including things like CPU capacity on a storage controller or the capabilities of the storage network. What makes server-side technologies so powerful is that, by keeping data as close to the application as possible, you avoid most of these bottlenecks.

Another major misconception is that storage performance is only about IOPS (Input/Output Operations Per Second). Most storage vendors show a headline number of hundreds of thousands, or millions, of IOPS that their platform can provide, but vendors spend much less time talking about latency, another important dimension of storage performance. Administrators need to focus on both IOPS and latency when architecting a storage stack.
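The relationship between those two dimensions can be made concrete with Little's law: sustaining a given IOPS rate at a given per-I/O latency requires a proportional number of I/Os in flight. The numbers below are illustrative, not taken from any vendor's datasheet.

```python
def required_concurrency(iops, latency_s):
    """Little's law: average outstanding I/Os = arrival rate x latency.
    Illustrative arithmetic for reasoning about IOPS vs. latency."""
    return iops * latency_s

# A headline figure of 1,000,000 IOPS at 1 ms per I/O implies
# 1,000 I/Os in flight at all times -- more concurrency than many
# real applications ever issue, which is why latency matters too.
print(required_concurrency(1_000_000, 0.001))  # 1000.0
```

Put another way, an application that issues only a few outstanding I/Os can never reach a platform's headline IOPS number unless per-I/O latency is low.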

The thing we all tend to forget about the storage stack is that it's really just part of the memory hierarchy. People think of their processor, the L1 and L2 caches, and DRAM as the memory hierarchy, and have certain latency expectations there. However, what follows is a huge drop-off in expectation for 'storage latency'. The reality is that it's all one system, and we should aim for systems where all latency, including storage access, is in the microsecond range.

BN: Are hybrid arrays only a stop gap and will we inevitably see moves towards all flash storage?

AA: The logic of hybrid arrays -- a tuned ratio of SSDs and HDDs -- is sound, but the problem is that they miss a huge opportunity by putting the fast tier (the SSD cache) in the same box as the slow tier (the HDD). Solutions that locate the SSD caching layer server-side, closer to the applications, CPU and memory, provide much better performance.

So I do think that hybrid arrays are a stop gap, but not because the industry will move to an all-flash model. Instead, I think it's because the industry will move to a model of extremely fast I/O at the edge, with a very dense central core for capacity in the middle.

A logical question that follows this analysis is whether hyperconverged is the right architecture to provide the server-side performance we're talking about. While the storage resources are on the server improving performance, the reality is that data protection in a hyperconverged/distributed storage architecture has its own challenges. Most notably, both the performance and capacity implications of using distributed RAID render it impractical for most large-scale deployments.

BN: When is an all-flash solution the best option?

AA: When evaluating an all-flash array as a potential solution, there are essentially two variables to keep in mind: the amount of storage capacity required and the working set size of the applications. Most customers spend a lot of time thinking about the former but they don't spend enough time understanding the latter.

The working set of an application is essentially the data that's regularly accessed. That's the data you want on an SSD tier. If you have an application with very high capacity needs that rarely accesses most of that data, an all-flash array won't make sense, because the working set would most likely fit on the flash tier of a hybrid array (or, even better, on server-side resources in an architecture like Infinio's).

If, on the other hand, you have an application where the overall capacity need and the working set size are similar, an all-flash array could make sense. That said, such applications are rare in my experience.
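That sizing logic can be summarized as a simple rule of thumb. The function and the 0.8 threshold below are my own illustrative assumptions, not a figure from the interview; the point is only that the working-set-to-capacity ratio drives the decision.

```python
def storage_recommendation(capacity_tb, working_set_tb):
    """Hypothetical rule of thumb based on the discussion above:
    if the working set is a small fraction of total capacity, a
    flash cache (hybrid array or server-side) covers the hot data;
    only when the working set approaches total capacity does an
    all-flash array pay off. The 0.8 cutoff is an assumption
    chosen for illustration."""
    ratio = working_set_tb / capacity_tb
    if ratio >= 0.8:
        return "all-flash array"
    return "hybrid array or server-side cache"

# e.g. 100 TB of data with a 10 TB working set -> cache the hot 10%
print(storage_recommendation(100, 10))
```

The takeaway is that capacity alone never justifies all-flash; the working set size has to be measured first.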

BN: What storage innovations are we seeing right now and what can we expect in the next few years?

AA: I think the really exciting thing going on, that people aren't talking about, is the new drive technologies. Remember, it was innovation in drive technologies that drove the whole flash revolution in the first place.

For example, recent press around Intel/Micron, 3D XPoint and other storage class memory (SCM) is absolutely worth following closely -- it's the next 10X when thinking about storage performance. Similarly, shingled magnetic recording (SMR) drives are the next 10X in capacity.

These drive technologies are what will lead the systems companies to build much faster and much denser data center solutions for customers. The naïve view would be that applications won't even need another 10X of performance, or 10X of capacity, but if history has taught us one thing, it's that applications always find a way to catch up.


