Debunking four object storage myths

There is no arguing that data continues to grow at an exponential rate, particularly unstructured data, which is predicted to represent 80 percent of all data by 2025. And because of trends like IoT, the continued proliferation of mobile devices and the rise of remote work, this growing data is being used and accessed in new ways.

At the same time, it’s become increasingly clear that traditional SAN and NAS storage technologies cannot scale to meet these data growth demands or support the need for data to be readily accessible from anywhere and at any time. To ensure these surging volumes of data are accessible and manageable, more and more enterprises are turning to object storage to meet their storage needs. However, several misconceptions about object storage still persist. Here are four myths about the technology debunked.

Myth #1: Object storage is not suited for live or primary data

Primary storage is used for data that is actively used by an application (and SAN and NAS systems have traditionally been used here), while secondary storage is used for data protection. However, modern cloud and analytics applications strain the traditional SAN and NAS systems used for primary storage, because these applications require storage that can scale far beyond those systems’ capabilities. In today’s economy, data powers the most innovative businesses. As customers look to monetize data, they need a storage solution that gives them cloud-like scale and flexibility while meeting their enterprise requirements around security, compliance and workflow integration.

The biggest misconception around object storage is that it’s only suited for backup and archive use cases in secondary storage. In reality, object storage is increasingly used as primary storage and is better suited for primary storage of unstructured data than SAN and NAS systems. Object storage is limitlessly scalable and has rich metadata, making it ideal for big data, healthcare, analytics and other distributed applications that require access to rapidly growing large data volumes.

Myth #2: Tape is cheaper than on-prem object storage solutions

It’s true that tape media is very cheap. It can retain data for 30 years under optimal conditions, and, unlike hard drives, tape can be easily transported for disaster recovery storage. For infrequently accessed data, tape remains the cheapest storage medium.

However, tape libraries and archive management systems are cumbersome to manage and operate.

Recovering specific data from a tape archive is time consuming and error prone. Tape media also needs to be regularly scrubbed to ensure data integrity and availability. Incompatibility between old tape media and new libraries means customers have to support multiple generations of tape environments, which significantly drives up costs.

Consider a local broadcaster with over 30 years of programming archived on tape -- locating specific material in such a library is like trying to find a needle in a haystack. Even in a best-case scenario, it can take hours to days to access data archived on tape. With object storage, the same data can be accessed in under a second, thanks to its rich metadata.
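To make the metadata point concrete, here is a minimal, purely illustrative sketch (not any vendor’s API) of why rich metadata enables near-instant location of content: if every object carries key/value metadata and the store maintains an index over it, "find the 1998 evening-news clip" becomes a hash lookup rather than a sequential scan of archive media. All names below are hypothetical.

```python
from collections import defaultdict

class ObjectStore:
    """Toy model of an object store with a metadata index."""
    def __init__(self):
        self.objects = {}              # object key -> data
        self.index = defaultdict(set)  # (metadata key, value) -> object keys

    def put(self, key, data, metadata):
        self.objects[key] = data
        for k, v in metadata.items():
            self.index[(k, v)].add(key)

    def find(self, **metadata):
        """Return keys of objects matching ALL given metadata pairs."""
        sets = [self.index[(k, v)] for k, v in metadata.items()]
        return set.intersection(*sets) if sets else set()

store = ObjectStore()
store.put("clip-001", b"...", {"show": "evening-news", "year": "1998"})
store.put("clip-002", b"...", {"show": "morning-show", "year": "1998"})

print(store.find(show="evening-news", year="1998"))  # {'clip-001'}
```

Tape, by contrast, has no equivalent of this index at the media layer: locating a clip means consulting an external catalog, mounting the right cartridge and winding to the right position.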

Furthermore, there’s the issue of geo-distribution. Organizations can only access data stored on tape directly through a physical copy, while data in object storage can be shared and accessed across geographic regions regardless of where that data is physically located.

When these operational and capital expense factors are taken into consideration, the 10-year Total Cost of Ownership (TCO) for object storage is significantly less than tape for most data.
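A back-of-the-envelope sketch shows how the comparison above works. The figures below are purely hypothetical placeholders (real numbers depend on capacity, labor rates and refresh cycles); the structure is the point: tape’s low media capex can be outweighed by the operational expenses the article lists, such as library management, media scrubbing and multi-generation migrations.

```python
def tco_10yr(capex, annual_opex, migrations=0, migration_cost=0.0):
    """10-year TCO: upfront cost + 10 years of opex + periodic migrations."""
    return capex + 10 * annual_opex + migrations * migration_cost

# Hypothetical 1 PB archive (illustrative numbers only):
tape = tco_10yr(capex=150_000, annual_opex=60_000,    # library, admins, scrubbing
                migrations=2, migration_cost=80_000)  # LTO generation refreshes
obj  = tco_10yr(capex=400_000, annual_opex=30_000)    # higher capex, leaner ops

print(f"tape: ${tape:,.0f}   object: ${obj:,.0f}")
```

With these placeholder inputs, tape’s cheaper media is more than offset by a decade of operational overhead; plugging in your own numbers is the honest way to run the comparison.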

Myth #3: Cost per GB is the right metric to evaluate the cost of an object storage solution

Customers are used to comparing storage CAPEX on a cost per GB basis. Public cloud providers have priced their object storage solutions using the same metric to show a low base cost, but those price estimates neglect to include the access and egress fees as well as network charges for connectivity to the cloud.

The truth is that TCO is the best metric to evaluate an object storage solution -- and those access, egress and network costs have a significant impact on overall TCO for object storage in the public cloud. When those additional costs are taken into account, on-prem object storage is much less expensive, as there are no such fees.
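The effect of those fees is easy to see with a simple model. The rates below are hypothetical placeholders, not any provider’s actual price list; the sketch only shows how the same stored capacity produces very different monthly bills once egress is counted.

```python
def monthly_cloud_cost(tb_stored, tb_egressed,
                       storage_rate=21.0,   # $/TB-month stored (hypothetical)
                       egress_rate=90.0,    # $/TB transferred out (hypothetical)
                       request_cost=50.0):  # flat API-request estimate
    """Monthly cloud object storage bill: storage + egress + request fees."""
    return tb_stored * storage_rate + tb_egressed * egress_rate + request_cost

# Same 500 TB footprint, two very different access patterns:
cold_archive = monthly_cloud_cost(tb_stored=500, tb_egressed=5)
active_data  = monthly_cloud_cost(tb_stored=500, tb_egressed=200)

print(f"cold archive: ${cold_archive:,.0f}/mo")
print(f"active data:  ${active_data:,.0f}/mo")
```

For rarely touched data the per-GB storage rate dominates; for frequently accessed data, egress can multiply the bill -- which is exactly the cost component a per-GB comparison hides, and the fee an on-prem deployment does not charge.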

Aside from lower costs, on-premises object storage provides greater performance and security than public cloud object storage. On-prem object storage enables faster data recovery and data restoration with low-latency local storage. Also, organizations that leverage on-prem object storage have comprehensive control of their data and can provide better security, privacy and compliance.

Myth #4: Flash is overkill for object storage

Flash is used in primary storage to deliver high performance, and most primary storage systems now offer an all-flash solution. Organizations using all-flash arrays for enterprise applications expect low latency and high Input/Output Operations Per Second (IOPS).

Object storage and all-flash arrays have long been perceived as mirror opposites. Object storage is known for its cost-effectiveness, high capacity and extreme scalability but is perceived to lack cutting-edge performance. However, the landscape is shifting as object and all-flash storage converge: Object storage vendors are enhancing performance by adding flash technology, while all-flash array vendors are trying hard to improve scalability. Object storage has an advantage here, as it’s much easier for object storage platforms to incorporate all-flash technology to achieve higher performance than it is for all-flash array platforms to fundamentally re-architect themselves to become more scalable.

The price of flash media is dropping at a steady rate. While flash storage will always cost more per GB than HDDs, the gap is shrinking due to new NAND technology (QLC) that allows for higher density (currently up to 64 TB) at lower cost. Object storage leveraging QLC NAND can deliver greater performance without the hefty prices traditionally associated with all-flash arrays. As a result, all-flash object storage can deliver predictable, consistent performance with extreme scalability, which is ideal to support advanced use cases such as analytics, IoT and streaming media.

Object storage has come a long way in the last five years. Common misconceptions -- that it’s only suited for secondary storage, that tape is cheaper for backup and archiving, that cost per GB is the best metric for evaluating the technology’s price, and that object storage can’t deliver both performance and scale through the use of flash -- must be dispelled. By looking beyond these myths, enterprises can gain a clearer understanding of the technology’s potential to transform how organizations store, manage, protect and leverage their data.

Photo Credit: nmedia / Shutterstock

Sanjay Jagad is Sr. Director of Products and Solutions, Cloudian. He has worked in the storage industry for almost 20 years helping customers build dynamic IT infrastructures that better align with business challenges, using the latest technologies around Flash, Big Data, OpenStack, and Hybrid Cloud. He brings with him industry knowledge and diverse experience in product marketing, product management (inbound and outbound), sales and channel enablement, business development, developing GTM strategies, and development engineering (writing firmware in his early engineering days).


© 1998-2020 BetaNews, Inc. All Rights Reserved.