The future of storage and how data volumes are driving change [Q&A]

There have been rapid increases in storage capacity in recent years, but the way the technology is used is largely unchanged. We still load data from storage into memory, process it, and write out any changes.

But as storage grows into the petabyte range, this model will become harder to sustain. The future of storage will require new abstraction layers and heterogeneous computing, allowing systems to scale without becoming over-sophisticated.

To find out more about what storage will look like in future, we spoke to Tong Zhang, co-founder and chief scientist at ScaleFlux.

BN: There seems to be a dichotomy of storage capacity growing but data processing not keeping up. What does this mean for the future of storage?

TZ: The ever-increasing gap between data volume and CPU processing power is exactly the reason why computational storage has attracted so much attention over recent years. The slow-down of Moore's Law forces the computing industry to transition from traditional CPU-only homogeneous computing towards domain-specific, heterogeneous computing. This inevitable paradigm transition brings an unprecedented opportunity to re-think and innovate the design of future data storage devices/systems (especially solid-state data storage).

With the transition towards domain-specific, heterogeneous computing, the entire computing software ecosystem will become more and more ready to embrace computational storage that tightly integrates domain-specific computational capability into the data storage hardware. By sending computation closer to where data is located, instead of moving data towards the CPU (or GPU), computational storage could bring significant performance and power benefits to the overall computing system. Hence, the future of storage lies in the trend of integrating computational capability into storage devices.
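
As a rough illustration of the contrast between "move data to compute" and "move compute to data", here is a minimal sketch. The MockDrive class and its filter() method are hypothetical stand-ins, not any real computational storage API.

```python
# Conceptual sketch only: contrasts the conventional "move data to compute"
# model with computational storage's "move compute to data".
# MockDrive is purely illustrative, not a real device API.

class MockDrive:
    def __init__(self, blocks):
        self.blocks = blocks  # list of blocks, each a list of records

    def read_all_blocks(self):
        # Conventional model: the whole data set crosses the bus to the host.
        return self.blocks

    def filter(self, predicate):
        # Computational-storage model: in a real drive this would run on the
        # device itself; only the matching records would cross the bus.
        return [r for block in self.blocks for r in block if predicate(r)]

def host_side_filter(drive, predicate):
    """Pull every block into host memory, then filter on the host CPU."""
    return [r for block in drive.read_all_blocks() for r in block if predicate(r)]

drive = MockDrive([[1, 42, 7], [99, 42, 3]])
assert host_side_filter(drive, lambda r: r == 42) == drive.filter(lambda r: r == 42)
```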

BN: Storage architecture has remained mostly unchanged dating back to tape and floppy disks. How is this affecting future storage trends, and do you see an evolution beginning to take place?

TZ: Storage architecture has remained mostly unchanged over the decades, mainly because the duty/function of data storage hardware has remained the same (i.e., store data and serve I/O requests). By fundamentally expanding the function of data storage hardware, computational storage will certainly open a new chapter in the data storage industry, with many exciting new opportunities ahead.

BN: What are data-processing units (DPUs) and how are they impacting the performance of managing storage devices?

TZ: The term DPU essentially evolved from network processors. To avoid overwhelming host CPUs with network processing in the presence of ever-increasing network traffic, data centers have widely deployed SmartNICs (smart network interface cards) that use dedicated network processors to off-load heavy-duty network processing operations (e.g., packet encapsulation, in-transit data encryption, and more recently NVMe-oF) from host CPUs. To further enhance their value proposition, network processor chip vendors (e.g., Nvidia/Mellanox, Marvell, Broadcom) have lately started to expand beyond the network domain into the storage (and even general-purpose computing) domain. Network processor chips are being augmented with additional special-purpose hardware engines (e.g., compression, security), more embedded processors (e.g., ARM or RISC-V cores), and stronger PCIe connectivity (e.g., PCIe switches with multiple ports).

So, the term DPU was coined to distinguish these chips from traditional network processors. With many embedded processors, DPUs can off-load storage-related functions (e.g., storage virtualization, RAID and erasure coding) from host CPUs, leading to a better system TCO. Of course, deploying DPUs into the computing infrastructure demands high development and integration costs, with nontrivial modifications to the existing software stack. Therefore, it will be at least two to three years before the real value and potential of DPUs are better understood.
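
To make the storage off-load idea more concrete, here is a toy sketch of the kind of computation a DPU can take over from the host CPU: RAID-style XOR parity, the simplest form of erasure coding. It is purely illustrative and does not reflect how any particular DPU implements the function.

```python
# Toy example of RAID-style XOR parity, the sort of storage work that can be
# off-loaded from host CPUs to dedicated hardware. Illustrative only.

def xor_parity(blocks):
    """Compute a parity block so any single lost block can be rebuilt."""
    parity = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            parity[i] ^= byte
    return bytes(parity)

data = [b"abcd", b"efgh", b"ijkl"]
parity = xor_parity(data)

# Recover a lost block by XOR-ing the parity with the surviving blocks.
recovered = xor_parity([parity, data[1], data[2]])
assert recovered == data[0]
```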

BN: How are AI and machine learning affecting modern storage now? And in the future?

TZ: AI/ML will be one of the most important (if not the most important) drivers for data storage, from both the demand and innovation perspectives:

  • Higher demand for data storage capacity: With ever-increasing amounts of data being generated every day, AI/ML provides the means to make effective use of that data. As a result, people have a stronger incentive to store data, at least temporarily, which directly leads to a growing demand for data storage capacity.
  • Higher demand for storage system innovation: AI/ML training platforms mainly contain three components: training kernel computation, data pre-processing, and data storage. Most prior and current R&D activities focus on improving the efficiency of the first component (i.e., kernel computation), and as a result the efficiency of kernel computation in AI/ML training has significantly improved over the years. This, however, makes the entire AI/ML training system increasingly bottlenecked by the performance/efficiency of the other two components (i.e., data pre-processing and data storage), as the simple sketch below illustrates. For example, as recently reported by Facebook at HotChips'21, data pre-processing/storage account for over 50 percent of the total AI/ML training energy consumption. This demands re-thinking the design and implementation of data pre-processing/storage in AI/ML training platforms, for which computational storage could be a very appealing solution.
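
A back-of-the-envelope illustration of that bottleneck shift (the timings are invented for the example, not measured figures): as the compute step gets faster, total step time becomes dominated by data loading and pre-processing.

```python
# Hypothetical numbers only: shows how speeding up kernel computation shifts
# the bottleneck towards data loading and pre-processing.

def step_time(load_s, preprocess_s, compute_s):
    # Assume a simple serial pipeline with no overlap between stages.
    return load_s + preprocess_s + compute_s

before = step_time(load_s=2.0, preprocess_s=3.0, compute_s=10.0)  # compute-bound
after  = step_time(load_s=2.0, preprocess_s=3.0, compute_s=1.0)   # I/O-bound

print(f"compute share before: {10.0 / before:.0%}")          # ~67%
print(f"load + preprocess share after: {5.0 / after:.0%}")   # ~83%
```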

BN: Would it be practically and commercially feasible to move computation from host CPUs to storage hardware? How can this idea evolve from academic research papers into the mainstream market?

TZ: Indeed, compared with other heterogeneous computing solutions (e.g., GPU, TPU, SmartNIC/DPU, video codec), moving computation into storage devices through the I/O stack suffers from a much higher abstraction-breaking cost. The on-going standardization efforts of the SNIA and NVMe communities will undoubtedly play crucial roles in curtailing the abstraction-breaking cost. Nevertheless, history teaches us that it will take a long time (five-plus years) before the entire ecosystem can readily embrace any enhancements developed by SNIA/NVMe. Moreover, modifying various applications to take advantage of the underlying computational storage hardware is nontrivial and demands significant investment.

Therefore, to kick off the journey of commercializing the idea, we must drop the mindset of 'explicitly off-loading computation through the I/O stack' and instead focus on native in-storage computation that is transparent to other abstraction layers. By eliminating the abstraction-breaking cost, transparent in-storage computation makes it much easier to establish a commercially justifiable cost/benefit trade-off.

Meanwhile, to further enhance the benefits, the transparent in-storage computation should have two properties: wide applicability, and low efficiency when implemented on CPUs/GPUs. General-purpose lossless data compression is one good candidate here. Besides its almost universal applicability, lossless data compression (e.g., the well-known LZ77 and its variants such as LZ4, Snappy, and ZSTD) is dominated by random data access that causes very high CPU/GPU cache miss rates, leading to very low CPU/GPU hardware utilization and hence low speed. Therefore, native in-storage compression can transparently exploit runtime data compressibility to reduce the storage cost without consuming any host CPU/GPU cycles and without incurring any abstraction-breaking cost.
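
As a rough sketch of what "exploiting runtime data compressibility" means, the snippet below compresses two 4 KiB logical blocks with a general-purpose lossless codec (Python's zlib, standing in here for LZ4/Snappy/ZSTD) and compares the physical space each would need.

```python
# Illustrative only: how much physical space a 4 KiB logical block needs
# depends entirely on how compressible its contents happen to be at runtime.
import os
import zlib

def physical_size(logical_block: bytes) -> int:
    """Bytes the block would occupy after lossless compression."""
    return len(zlib.compress(logical_block))

repetitive  = b"ABCD" * 1024     # highly compressible 4 KiB block
random_like = os.urandom(4096)   # essentially incompressible 4 KiB block

print(physical_size(repetitive))   # far smaller than 4096
print(physical_size(random_like))  # close to 4096 (can be slightly larger)
```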

The benefit of native in-storage compression goes far beyond 'transparently reducing the storage cost'. The design of any data management system (e.g., relational database, key-value store, or file system) is subject to trade-offs between read/write performance, implementation complexity, and storage space usage. In-storage compression essentially decouples the host-visible logical storage space usage from the physical storage space usage, which allows data management systems to purposely trade logical storage space for higher read/write performance and/or lower implementation complexity, without sacrificing the true physical storage cost. This creates a new spectrum of design space for innovating data management systems without demanding any changes to the existing abstractions.
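
One hedged illustration of that logical/physical decoupling (a hypothetical record layout, with zlib modelling the drive's transparent compression): pad every record to a fixed-size slot so lookups become simple offset arithmetic, and let compression collapse the zero padding so the wasted logical space costs little physical space.

```python
# Hypothetical layout: trade logical space (fixed 512-byte slots) for simpler,
# faster lookups, relying on transparent compression to shrink the padding.
import zlib

SLOT = 512

def store_padded(records):
    """Pad each record with zero bytes up to a fixed slot size."""
    return b"".join(r.ljust(SLOT, b"\x00") for r in records)

records = [b"user:1|alice", b"user:2|bob", b"user:3|carol"]
logical = store_padded(records)

print(len(logical))                 # 1536 logical bytes (3 fixed slots)
print(len(zlib.compress(logical)))  # far fewer physical bytes: padding compresses away

# Fixed slots mean record i lives at byte offset i * SLOT, so no index is needed.
assert logical[SLOT:2 * SLOT].rstrip(b"\x00") == records[1]
```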

The most plausible starting point for computational storage is a computational storage drive with built-in transparent compression. We are confident that transparent compression will carry computational storage from a concept into the mainstream market. Of course, the full potential goes far beyond transparent compression. As the ecosystem becomes more ready to embrace computational storage drives with more diverse and programmable computing functions, we will see a large wave of innovation across the entire computing infrastructure.

Photo Credit: nmedia / Shutterstock
